WorldWideScience

Sample records for unit process selection

  1. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Nekipelova Irina Mikhaylovna

    2013-04-01

    Full Text Available The article investigates the action of a uniform search algorithm when a speaker selects language units during speech production. The process is connected with the phenomenon of speech optimization, which makes it possible to shorten the time spent deciding what to say and to achieve maximum precision in expressing thoughts. The uniform search algorithm operates at both the conscious and subconscious levels, favouring the development of automatism in the production and perception of speech. The realization of a person's cognitive potential in communication sets in motion a complicated mechanism of self-organization and self-regulation of language. In turn, this results in the optimization of the language system, serving not only a person's self-actualization but also the realization of communication in society. The method of problem-oriented search is used to investigate the optimization mechanisms characteristic of speech production and language stabilization.

  2. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Ирина Михайловна Некипелова

    2013-05-01

    Full Text Available The article investigates the action of a uniform search algorithm when a speaker selects language units during speech production. The process is connected with the phenomenon of speech optimization, which makes it possible to shorten the time spent deciding what to say and to achieve maximum precision in expressing thoughts. The uniform search algorithm operates at both the conscious and subconscious levels, favouring the development of automatism in the production and perception of speech. The realization of a person's cognitive potential in communication sets in motion a complicated mechanism of self-organization and self-regulation of language. In turn, this results in the optimization of the language system, serving not only a person's self-actualization but also the realization of communication in society. The method of problem-oriented search is used to investigate the optimization mechanisms characteristic of speech production and language stabilization. DOI: http://dx.doi.org/10.12731/2218-7405-2013-4-50

  3. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Full Text Available Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance benefits to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to the different types of processor units over time, taking into account their specific resource requirements. Additionally, although the available heterogeneous resources were designed as general-purpose units, many have built-in features that accelerate specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for a CPU or a GPU, yet from the perspective of various evaluation criteria, e.g. total execution time or energy consumption, we may observe completely different results. Because tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs, with a huge impact on overall computing performance, new and improved resource management techniques are needed. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.
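As a minimal illustration of the assignment problem such studies address (not the authors' actual scheduler), a greedy earliest-finish-time heuristic places each task on the processor type where it would complete soonest, given assumed per-device run times:

```python
# Hypothetical sketch: greedy earliest-finish-time assignment of tasks to
# heterogeneous processors (e.g. CPU vs. GPU), assuming each task's run time
# on every device type is known in advance.
def schedule(tasks, devices):
    """tasks: list of {device: runtime} dicts; devices: list of device names.
    Returns (per-task device assignment, makespan)."""
    finish = {d: 0.0 for d in devices}          # busy-until time per device
    assignment = []
    for runtimes in tasks:
        # place the task where it would finish earliest
        best = min(devices, key=lambda d: finish[d] + runtimes[d])
        finish[best] += runtimes[best]
        assignment.append(best)
    return assignment, max(finish.values())

tasks = [{"cpu": 4.0, "gpu": 1.0},
         {"cpu": 2.0, "gpu": 1.2},
         {"cpu": 5.0, "gpu": 1.5}]
plan, makespan = schedule(tasks, ["cpu", "gpu"])
```

Note that the second task lands on the CPU even though it is faster on the GPU in isolation, because the GPU is already busy; real schedulers would also weigh energy use and data-transfer costs.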

  4. Signal processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J.

    1983-01-01

    The architecture of the signal processing unit (SPU) comprises a ROM connected to a program bus, and an input-output bus connected to a data bus and registers through a pipeline multiplier-accumulator (PMAC) and a pipeline arithmetic logic unit (PALU), each associated with a random access memory (RAM1, RAM2). The system clock frequency is 20 MHz. The PMAC is further detailed, and has a capability of 20 mega-operations per second. There is also a block diagram for the PALU, showing interconnections between the register block (RBL), bus separator (BS), register (REG), shifter (SH) and combination unit. The first and second RAMs have formats of 64*16 and 32*32 bits, respectively. Further data: a 5-V power supply and 2.5-micron n-channel silicon-gate MOS technology with about 50,000 transistors.

  5. COTS software selection process.

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, William M. (Strike Wire Technologies, Louisville, CO); Lin, Han Wei; McClelland, Kelly (U.S. Security Associates, Livermore, CA); Ullrich, Rebecca Ann; Khanjenoori, Soheil; Dalton, Karen; Lai, Anh Tri; Kuca, Michal; Pacheco, Sandra; Shaffer-Gant, Jessica

    2006-05-01

    Today's need for rapid software development has generated great interest in employing Commercial-Off-The-Shelf (COTS) software products as a way of managing cost, development time, and effort. With an abundance of COTS software packages to choose from, the problem is how to systematically evaluate, rank, and select the COTS product that best meets the software project requirements and at the same time can leverage the current corporate information technology architectural environment. This paper describes a systematic process for decision support in evaluating and ranking COTS software. Performed right after the requirements analysis, this process provides evaluators with concise, structured, step-by-step activities for determining the best COTS software product with manageable risk. In addition, the process is presented in phases that are flexible enough to allow customization or tailoring to meet various projects' requirements.
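One simple instance of such evaluation and ranking (a sketch under assumed criteria and weights, not the paper's actual phased process) is a normalized weighted sum over per-criterion scores:

```python
# Illustrative COTS ranking by weighted score; the criterion names, weights
# and scores below are invented for the example.
def rank_candidates(scores, weights):
    """scores: {product: {criterion: value}}; weights: {criterion: weight}.
    Returns product names sorted best-first."""
    total = sum(weights.values())
    def weighted(p):
        return sum(scores[p][c] * w for c, w in weights.items()) / total
    return sorted(scores, key=weighted, reverse=True)

weights = {"requirements_fit": 0.5, "cost": 0.3, "vendor_support": 0.2}
scores = {"A": {"requirements_fit": 9, "cost": 6, "vendor_support": 7},
          "B": {"requirements_fit": 7, "cost": 9, "vendor_support": 8}}
ranking = rank_candidates(scores, weights)
```

A real evaluation would add risk adjustments and sensitivity checks on the weights, but the ranking core stays this simple.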

  6. Join Cost for Unit Selection Speech Synthesis

    OpenAIRE

    Vepa, Jithendra

    2004-01-01

    Undoubtedly, state-of-the-art unit selection-based concatenative speech systems produce very high quality synthetic speech. This is due to a large speech database containing many instances of each speech unit, with a varied and natural distribution of prosodic and spectral characteristics. The join cost, which measures how well two units can be joined together, is one of the main criteria for selecting appropriate units from this large speech database. The ideal join cost is one that measur...
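A toy version of such a join cost (one plausible formulation for illustration, not the thesis's own) is the Euclidean distance between the spectral feature vectors at the boundary frames of the two candidate units:

```python
import math

# Toy join cost: Euclidean distance between the feature vector at the end of
# unit A and the feature vector at the start of unit B. Real systems combine
# several weighted distances (e.g. cepstra, F0, energy).
def join_cost(unit_a_end, unit_b_start):
    """Both arguments are equal-length feature vectors (lists of floats)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(unit_a_end, unit_b_start)))
```

A zero cost means the boundary frames match exactly, so the two units should concatenate without an audible discontinuity.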

  7. Flight selection at United Airlines

    Science.gov (United States)

    Traub, W.

    1980-01-01

    Airline pilot selection procedures are discussed, including psychological and personality tests, psychomotor performance requirements, and flight skills evaluation. Necessary attitude and personality traits are described, and an outline of computer selection, testing, and training techniques is given.

  8. The Administrator Selection Process

    Science.gov (United States)

    Griffin, Michael F.

    1974-01-01

    Proposes that education establish for administrators systematic, rigorous, albeit subjective, selection procedures that recognize the principle of organizational democracy and the public nature of the educational enterprise. (Author/DN)

  9. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first space mission dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). The PPU architecture will be based on state-of-the-art spaceflight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instrument suite. The approach of a common processing unit for particle instruments is important because it enables efficient management of correlative plasma measurements, also facilitating interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies, making it possible to optimize and save spacecraft resources.

  10. ARM Mentor Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, D. L. [Argonne National Lab. (ANL), Argonne, IL (United States)]

    2015-10-01

    The Atmospheric Radiation Measurement (ARM) Program was created in 1989 with funding from the U.S. Department of Energy (DOE) to develop several highly instrumented ground stations to study cloud formation processes and their influence on radiative transfer. In 2003, the ARM Program became a national scientific user facility, known as the ARM Climate Research Facility. This scientific infrastructure provides fixed sites, mobile facilities, an aerial facility, and a data archive available for use by scientists worldwide. The ARM Climate Research Facility currently operates more than 300 instrument systems that provide ground-based observations of the atmospheric column. To keep ARM at the forefront of climate observations, the ARM infrastructure depends heavily on instrument scientists and engineers, also known as lead mentors. Lead mentors must have an excellent understanding of in situ and remote-sensing instrumentation theory and operation, and comprehensive knowledge of critical scale-dependent atmospheric processes. They must also possess the technical and analytical skills to develop new data retrievals that provide innovative approaches for creating research-quality data sets. The ARM Climate Research Facility seeks the best overall qualified candidates who can fulfill the lead mentor requirements in a timely manner.

  11. Learning and Selection Processes

    Directory of Open Access Journals (Sweden)

    Marc Artiga

    2010-06-01

    Full Text Available In this paper I defend a teleological explanation of normativity, i.e., I argue that what an organism (or device) is supposed to do is determined by its etiological function. In particular, I present a teleological account of the normativity that arises in learning processes, and I defend it from some objections.

  12. Sexual selection: Another Darwinian process.

    Science.gov (United States)

    Gayon, Jean

    2010-02-01

    the Darwin-Wallace controversy was that most Darwinian biologists avoided the subject of sexual selection until at least the 1950s, Ronald Fisher being a major exception. This controversy still deserves attention from modern evolutionary biologists, because the modern approach inherits from both Darwin and Wallace. The modern approach tends to present sexual selection as a special aspect of the theory of natural selection, although it also recognizes the major difficulties resulting from the inevitable interaction between these two natural processes of selection. And contra Wallace, it considers mate choice as a major process that deserves a proper evolutionary treatment. The paper's conclusion explains why sexual selection can be taken as a test case for a proper assessment of "Darwinism" as a scientific tradition. Darwin's and Wallace's attitudes towards sexual selection reveal two different interpretations of the principle of natural selection: Wallace had an environmentalist conception of natural selection, whereas Darwin was primarily sensitive to the element of competition involved in the intimate mechanism of any natural process of selection. Sexual selection, which can lack adaptive significance, reveals this exemplarily.

  13. Library Materials: Selection and Processing.

    Science.gov (United States)

    Freeman, Michael; And Others

    This script of a slide-tape presentation, which describes the selection and processing of materials for a university library, includes commentary with indicators for specific slide placement. Distinction is made between books and serial publications and the materials are followed from the ordering decision through processing. The role of the…

  14. Process Selection and Operation of 400 kt/a Urea Unit

    Institute of Scientific and Technical Information of China (English)

    张同福; 宋子兴; 窦义兵

    2012-01-01

    A comparison is made of two urea units based on the carbon dioxide stripping and aqueous solution total recycle processes with respect to their main parameters, operation data and production cost. The results of the comparison show that the urea unit based on the carbon dioxide stripping process is better than the one based on aqueous solution total recycle in terms of operation stability and consumption figures per ton of urea, although the investment for the former is higher.

  15. Gould talking past Dawkins on the unit of selection issue.

    Science.gov (United States)

    Istvan, M A

    2013-09-01

    My general aim is to clarify the foundational difference between Stephen Jay Gould and Richard Dawkins concerning what biological entities are the units of selection in the process of evolution by natural selection. First, I recapitulate Gould's central objection to Dawkins's view that genes are the exclusive units of selection. According to Gould, it is absurd for Dawkins to think that genes are the exclusive units of selection when, after all, genes are not the exclusive interactors: those agents directly engaged with, directly impacted by, environmental pressures. Second, I argue that Gould's objection still goes through even when we take into consideration Sterelny and Kitcher's defense of gene selectionism in their admirable paper "The Return of the Gene." Third, I propose a strategy for defending Dawkins that I believe obviates Gould's objection. Drawing upon Elisabeth Lloyd's careful taxonomy of the various understandings of the unit of selection at play in the philosophy of biology literature, my proposal involves realizing that Dawkins endorses a different understanding of the unit of selection than Gould holds him to, an understanding that does not require genes to be the exclusive interactors.

  16. Frequency Selective Surfaces with Nanoparticles Unit Cell

    Directory of Open Access Journals (Sweden)

    Nga Hung Poon

    2015-09-01

    Full Text Available The frequency selective surface (FSS) is a periodic structure with filtering performance for optical and microwave signals. General periodic arrays made with patterned metallic elements can act as an aperture or patch on a substrate. In this work, two kinds of materials were used to produce unit cells with various patterns. Gold nanoparticles of 25 nm diameter were used to form periodic monolayer arrays by a confined photocatalytic oxidation-based surface modification method. As the other material, silver gel was used to create multiple layers of silver. Due to the ultra-thin nature of the self-assembled gold nanoparticle monolayer, terahertz radiation penetrates it very easily. However, the isolated silver islands made from silver gel form thicker multiple layers and contribute to much higher reflectance. This work demonstrated that multiple silver layers are more suitable than gold nanoparticles for the fabrication of FSS structures.

  17. ON DEVELOPING CLEANER ORGANIC UNIT PROCESSES

    Science.gov (United States)

    Organic waste products, potentially harmful to the human health and the environment, are primarily produced in the synthesis stage of manufacturing processes. Many such synthetic unit processes, such as halogenation, oxidation, alkylation, nitration, and sulfonation are common to...

  18. 15 CFR 2301.18 - Selection process.

    Science.gov (United States)

    2010-01-01

    Title 15 Commerce and Foreign Trade, Regulations Relating to Telecommunications and Information, NATIONAL... PROGRAM, Evaluation and Selection Process. § 2301.18 Selection process. (a) The PTFP Director will...

  19. Selecting Trade Books for Elementary Science Units.

    Science.gov (United States)

    Rop, Charles J.; Rop, Sheri K.

    2001-01-01

    Explains the importance of using well-chosen trade books for stimulating student interest and motivation in the natural world. Discusses how to assess and select trade books. Lists selected trade books on the life cycles of plants. (YDS)

  20. Innovation During the Supplier Selection Process

    DEFF Research Database (Denmark)

    Pilkington, Alan; Pedraza, Isabel

    2014-01-01

    Established ideas on supplier selection have not moved much from the original premise of how to choose between bidders. Whilst we have added many different tools and refinements to choose between alternative suppliers, the nature of the selection process has not evolved. We move the original selection process approach...... observed through an ethnographic embedded-researcher study has refined the selection process into two selection stages: one for first supply, covering tool/process development, and another later for resupply of mature parts. We report the details of the process, those involved, the criteria employed...... and identify benefits and weaknesses of this enhanced selection process....

  1. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occur within the porous adsorbent...

  2. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the most recently identified isoform of the PDE family, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently has a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data on PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  3. Trainable unit selection speech synthesis under statistical framework

    Institute of Scientific and Technical Information of China (English)

    WANG RenHua; DAI LiRong; LING ZhenHua; HU Yu

    2009-01-01

    This paper proposes a trainable unit selection speech synthesis method based on a statistical modeling framework. At the training stage, acoustic features are extracted from the training database and statistical models are estimated for each feature. During synthesis, the optimal candidate unit sequence is searched out from the database following the maximum likelihood criterion derived from the trained models. Finally, the waveforms of the optimal candidate units are concatenated to produce synthetic speech. Experimental results show that this method improves the automation of system construction and the naturalness of synthetic speech compared with the conventional unit selection synthesis method. Furthermore, this paper presents a minimum unit selection error criterion for model training, tailored to the characteristics of unit selection speech synthesis, and adopts discriminative training for model parameter estimation. This criterion achieves fully automatic system construction and further improves the naturalness of synthetic speech.
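The search over candidate units can be sketched as a Viterbi-style dynamic program; minimizing summed costs is equivalent to maximizing likelihood when each cost is a negative log-probability under the trained models. The cost functions below are placeholders, not the paper's models:

```python
# Sketch of the unit-selection search: dynamic programming over one candidate
# list per target position, minimizing total target cost + join cost.
def select_units(candidates, target_cost, join_cost):
    """candidates: list (one list of candidate units per target position)."""
    # best[i][j] = (cumulative cost, backpointer into position i-1)
    best = [[(target_cost(0, c), None) for c in candidates[0]]]
    for i in range(1, len(candidates)):
        row = []
        for c in candidates[i]:
            cost, back = min(
                (best[i - 1][j][0] + join_cost(p, c) + target_cost(i, c), j)
                for j, p in enumerate(candidates[i - 1]))
            row.append((cost, back))
        best.append(row)
    # backtrack from the cheapest final candidate
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(len(candidates) - 1, -1, -1):
        path.append(candidates[i][j])
        j = best[i][j][1]
    return path[::-1]

candidates = [[1, 5], [2, 9]]          # toy "units" represented as numbers
path = select_units(candidates,
                    target_cost=lambda i, c: 0.0,
                    join_cost=lambda a, b: abs(a - b))
```

With these toy costs, the search picks the pair of units that join most smoothly (smallest numeric gap).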

  4. Information Selection in Intelligence Processing

    Science.gov (United States)

    2011-12-01

    problem of overload.” As another example, Whaley (Whaley, 1974) argues that one of the causes for the Pearl Harbor and Barbarossa strategic surprises is...which becomes more and more important as the Internet evolves. The IR problem and the information selection problem share some similar...all the algorithms tend more towards exploration: the temperature parameter in Softmax is higher (0.12 instead of 0.08), the delta for the VDBE

  5. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. The algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
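For reference, a sequential least-significant-digit radix sort looks like the sketch below; GPU implementations of the kind evaluated in such studies parallelize the per-digit counting and scatter phases, which this sketch only hints at:

```python
# Sequential LSD radix sort for non-negative integers, shown for illustration.
# On a GPU, the per-digit histogram and stable scatter are done in parallel.
def radix_sort(values, base=16):
    if not values:
        return []
    out = list(values)
    digit = 1
    while digit <= max(out):
        buckets = [[] for _ in range(base)]
        for v in out:                     # stable distribution by current digit
            buckets[(v // digit) % base].append(v)
        out = [v for b in buckets for v in b]
        digit *= base
    return out
```

Stability of each distribution pass is what makes the digit-by-digit ordering correct overall.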

  6. Selecting a plutonium vitrification process

    Energy Technology Data Exchange (ETDEWEB)

    Jouan, A. [Centre d'Etudes de la Vallee du Rhone, Bagnols sur Ceze (France)]

    1996-05-01

    Vitrification of plutonium is one means of mitigating its potential danger. This option is technically feasible, even if it is not the solution advocated in France. Two situations are possible, depending on whether or not the glass matrix also contains fission products; concentrations of up to 15% should be achievable for plutonium alone, whereas the upper limit is 3% in the presence of fission products. The French continuous vitrification process appears to be particularly suitable for plutonium vitrification: its capacity is compatible with the required throughput, and the compact dimensions of the process equipment prevent a criticality hazard. Preprocessing of plutonium metal, to convert it to PuO2 or to a nitric acid solution, may prove advantageous or even necessary depending on whether a dry or wet process is adopted. The process may involve a single step (vitrification of Pu or PuO2 mixed with glass frit) or may include a prior calcination step - notably if the plutonium is to be incorporated into a fission product glass. It is important to weigh the advantages and drawbacks of all the possible options in terms of feasibility, safety and cost-effectiveness.

  7. Analysis and Optimization of Central Processing Unit Process Parameters

    Science.gov (United States)

    Kaja Bantha Navas, R.; Venkata Chaitana Vignan, Budi; Durganadh, Margani; Rama Krishna, Chunduri

    2017-05-01

    The rapid growth of computing has made it possible to process more data, which increases heat dissipation. Hence the CPU in the system unit must be cooled to keep it within its operating temperature. This paper presents a novel approach to the optimization of operating parameters of a Central Processing Unit with a single response, based on the response graph method. The proposed approach consists of a series of steps capable of decreasing the uncertainty caused by engineering judgment in the Taguchi method. Orthogonal array values were taken from an ANSYS report. The method shows good convergence between the experimental and the optimum process parameters.
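A core step in Taguchi-style analysis (shown here generically; the responses and level names are invented for the example) is computing a signal-to-noise ratio per parameter level. For CPU cooling, temperature is a "smaller-the-better" response:

```python
import math

# Taguchi "smaller-the-better" signal-to-noise ratio:
#   S/N = -10 * log10( mean of squared responses )
def sn_smaller_the_better(replicates):
    return -10.0 * math.log10(sum(y * y for y in replicates) / len(replicates))

def best_level(levels):
    """levels: {level_name: [replicate responses]}; higher S/N is better."""
    return max(levels, key=lambda k: sn_smaller_the_better(levels[k]))

# Invented example: two fan-speed levels, two temperature replicates each.
winner = best_level({"level_1": [50.0, 52.0], "level_2": [40.0, 41.0]})
```

The level with the lower, more consistent temperatures gets the higher S/N ratio and is selected.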

  8. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including quantum Fourier transformation, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  9. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  10. Graphics processing unit-assisted lossless decompression

    Science.gov (United States)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
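For orientation, Rice coding with parameter k writes each value as a unary quotient followed by k remainder bits. The sketch below decodes one such bitstream sequentially, using one common unary convention (a run of 1s terminated by a 0); the patented GPU approach instead decodes many compressed packets in parallel:

```python
# Illustrative Rice decoder (parameter k): value = quotient * 2**k + remainder,
# where the quotient is unary-coded (1-run ended by 0) and the remainder is a
# fixed k-bit field. Conventions vary between implementations.
def rice_decode(bits, k, count):
    """bits: string of '0'/'1'; returns `count` decoded non-negative ints."""
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == '1':             # unary quotient
            q += 1
            i += 1
        i += 1                            # skip the terminating 0
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        values.append((q << k) | r)
    return values
```

For example, with k = 2 the value 9 encodes as quotient 2 and remainder 1, i.e. the bits "110" + "01".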

  12. ARM Lead Mentor Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, DL

    2013-03-13

    The ARM Climate Research Facility currently operates more than 300 instrument systems that provide ground-based observations of the atmospheric column. To keep ARM at the forefront of climate observations, the ARM infrastructure depends heavily on instrument scientists and engineers, also known as Instrument Mentors. Instrument Mentors must have an excellent understanding of in situ and remote-sensing instrumentation theory and operation and have comprehensive knowledge of critical scale-dependent atmospheric processes. They also possess the technical and analytical skills to develop new data retrievals that provide innovative approaches for creating research-quality data sets.

  13. 44 CFR 150.7 - Selection process.

    Science.gov (United States)

    2010-10-01

    Title 44 Emergency Management and Assistance, FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF... § 150.7 Selection process. (a) President's Award. Nominations for the President's Award shall be reviewed,...

  14. 7 CFR 3570.68 - Selection process.

    Science.gov (United States)

    2010-01-01

    Title 7 Agriculture, Regulations of the Department of Agriculture (Continued), RURAL HOUSING SERVICE, DEPARTMENT OF AGRICULTURE, COMMUNITY PROGRAMS, Community Facilities Grant Program. § 3570.68 Selection process. Each...

  15. Roughness parameter selection for novel manufacturing processes.

    Science.gov (United States)

    Ham, M; Powers, B M

    2014-01-01

    This work proposes a method of roughness parameter (RP) selection for novel manufacturing processes or processes where little knowledge exists about which RPs are important. The method selects a single parameter to represent a group of highly correlated parameters. Single point incremental forming (SPIF) is used as the case study for the manufacturing process. This methodology was successful in reducing the number of RPs investigated from 18 to 8 in the case study. © Wiley Periodicals, Inc.
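The grouping idea can be sketched as a greedy filter: keep a roughness parameter only if it is not strongly correlated with any already-kept parameter. The threshold and example data below are illustrative, not the paper's:

```python
# Greedy correlation-based reduction: one representative per correlated group.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_representatives(params, threshold=0.9):
    """params: {parameter name: [measurements across samples]}.
    Keeps a parameter only if |r| < threshold against every kept one."""
    selected = []
    for name, series in params.items():
        if all(abs(pearson(series, params[s])) < threshold for s in selected):
            selected.append(name)
    return selected

# Invented data: Rq is perfectly correlated with Ra, so it is dropped.
params = {"Ra": [1, 2, 3, 4], "Rq": [2, 4, 6, 8], "Rz": [4, 1, 3, 2]}
kept = select_representatives(params)
```

The paper's reduction from 18 to 8 parameters follows the same logic, with correlations estimated from the SPIF surface measurements.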

  16. The effects of selective decontamination in Dutch Intensive Care Units

    NARCIS (Netherlands)

    Oostdijk, E.A.N.

    2013-01-01

    Infections are an important complication in the treatment of critically ill patients in Intensive Care Units (ICUs) and are associated with increased mortality, morbidity and health care costs. Selective Decontamination of the Digestive Tract (SDD) and Selective Oropharyngeal Decontamination (SOD) are

  17. Numerical Integration with Graphical Processing Unit for QKD Simulation

    Science.gov (United States)

    2014-03-27

    existing and proposed Quantum Key Distribution (QKD) systems. This research investigates using graphical processing unit (GPU) technology to more... Programming with a graphical processing unit (GPU) requires a different...

  18. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

In today's highly competitive manufacturing environment, the supplier selection process becomes one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a tradeoff between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytic network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.
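As a rough illustration of the tangible/intangible tradeoff, a weighted-sum ranking can stand in for the ANP weighting stage only (the CBR and LP stages are omitted; the criteria, weights, and scores below are invented):

```python
def score_suppliers(suppliers, weights):
    """Weighted-sum ranking: each criterion is scored in [0, 1]
    (higher is better) and combined with normalized criterion weights."""
    total_w = sum(weights.values())
    ranked = [(name, round(sum(weights[c] * scores[c] for c in weights) / total_w, 3))
              for name, scores in suppliers.items()]
    return sorted(ranked, key=lambda t: t[1], reverse=True)

suppliers = {
    "A": {"price": 0.9, "quality": 0.6, "service": 0.5},  # strong on tangibles
    "B": {"price": 0.5, "quality": 0.9, "service": 0.8},  # strong on intangibles
}
weights = {"price": 0.5, "quality": 0.3, "service": 0.2}
print(score_suppliers(suppliers, weights))  # [('A', 0.73), ('B', 0.68)]
```

Shifting weight from price to quality would reverse the ranking, which is exactly the tradeoff the record highlights.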

  19. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2002-01-01

    that designers often limit their selection of materials and processes to a few well-known ones. Designers need to expand the solution space by considering more materials and processes. But they have to be convinced that the materials and processes are likely candidates that are worth investing time in exploring...... a search engine, and through hyperlinks can relevant materials and processes be explored. Realising that designers are very sensitive to user interfaces do all descriptions of materials, processes and products include graphical descriptions, i.e. pictures or computer graphics.......The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that is considered. Product examples can...

  20. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2001-01-01

    that designers often limit their selection of materials and processes to a few well-known ones. Designers need to expand the solution space by considering more materials and processes. But they have to be convinced that the materials and processes are likely candidates that are worth investing time in exploring...... a search engine, and through hyperlinks can relevant materials and processes be explored. Realising that designers are very sensitive to user interfaces do all descriptions of materials, processes and products include graphical descriptions, i.e. pictures or computer graphics.......The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that is considered. Product examples can...

  1. Selective hydrogenation processes in steam cracking

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.; Schroeter, M.K.; Hinrichs, M.; Makarczyk, P. [BASF SE, Ludwigshafen (Germany)

    2010-12-30

Hydrogen is the key elixir used to trim the quality of olefinic and aromatic product slates from steam crackers. Being co-produced in excess amounts in the thermal cracking process, a small part of the hydrogen is consumed in the ''cold part'' of a steam cracker to selectively hydrogenate unwanted, unsaturated hydrocarbons. The compositions of the various steam cracker product streams are adjusted by these processes to the outlet specifications. This presentation gives an overview of the state-of-the-art selective hydrogenation technologies available from BASF for these processes. (Published in summary form only) (orig.)

  2. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

Heat is inevitably generated in semiconductors during operation. Cooling in a computer, and in its main part, the Central Processing Unit (CPU), is crucial, allowing proper functioning without overheating, malfunctioning, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. A similarity with an electrical circuit and certain methods from electrical circuit analysis are discussed.
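The "similarity with the electrical circuit" in this record is the usual lumped RC analogy: thermal resistance R and heat capacity C give C·dT/dt = P − (T − T_amb)/R. A forward-Euler sketch of that model (all parameter values below are invented for illustration):

```python
def cpu_temperature(power, r_th, c_th, t_amb=25.0, dt=0.1, t_end=600.0):
    """Integrate C * dT/dt = P - (T - T_amb) / R with forward Euler.
    power in W, r_th in K/W, c_th in J/K, temperatures in deg C."""
    temps, temp = [], t_amb
    for _ in range(int(t_end / dt)):
        temp += dt * (power - (temp - t_amb) / r_th) / c_th
        temps.append(temp)
    return temps

# 50 W into a 0.5 K/W, 50 J/K package: time constant R*C = 25 s,
# steady state T_amb + P*R = 25 + 25 = 50 deg C.
history = cpu_temperature(power=50.0, r_th=0.5, c_th=50.0)
print(round(history[-1], 2))  # 50.0 after ~24 time constants
```

Just as the capacitor voltage in an RC circuit settles at the source value, T approaches T_amb + P·R with time constant R·C.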

  3. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2001-01-01

    The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that is considered. Product examples can...... communicate information about materials and processes in a very concentrated and effective way. The product examples represent desired material properties but also includes information that can not be associated directly to the material, e.g. functional or perceived attributes. Previous studies suggest...... that designers often limit their selection of materials and processes to a few well-known ones. Designers need to expand the solution space by considering more materials and processes. But they have to be convinced that the materials and processes are likely candidates that are worth investing time in exploring...

  4. Graphics Processing Unit Assisted Thermographic Compositing

    Science.gov (United States)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  5. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase of computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third order Runge-Kutta scheme was used for integration in the t...

  6. Neural inhibition enables selection during language processing.

    Science.gov (United States)

    Snyder, Hannah R; Hutchison, Natalie; Nyhus, Erika; Curran, Tim; Banich, Marie T; O'Reilly, Randall C; Munakata, Yuko

    2010-09-21

    Whether grocery shopping or choosing words to express a thought, selecting between options can be challenging, especially for people with anxiety. We investigate the neural mechanisms supporting selection during language processing and its breakdown in anxiety. Our neural network simulations demonstrate a critical role for competitive, inhibitory dynamics supported by GABAergic interneurons. As predicted by our model, we find that anxiety (associated with reduced neural inhibition) impairs selection among options and associated prefrontal cortical activity, even in a simple, nonaffective verb-generation task, and the GABA agonist midazolam (which increases neural inhibition) improves selection, whereas retrieval from semantic memory is unaffected when selection demands are low. Neural inhibition is key to choosing our words.

  7. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue of the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the number of ancestral particles is obtained, by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to the stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.
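The birth-and-death structure described here (quadratic death from coalescence, linear birth from selective branching at strength σ) can be simulated directly. A seeded sketch of the embedded jump chain of the particle count, using the standard Krone-Neuhauser rates; the starting size and σ are illustrative:

```python
import random

def asg_particle_path(n0, sigma, rng):
    """Jump chain of the ancestral particle count: from n particles,
    coalescence (n -> n-1) occurs at rate n(n-1)/2 and branching
    (n -> n+1) at rate sigma*n/2; stop at the ultimate ancestor n = 1."""
    n, path = n0, [n0]
    while n > 1:
        death = n * (n - 1) / 2.0
        birth = sigma * n / 2.0
        n += 1 if rng.random() < birth / (birth + death) else -1
        path.append(n)
    return path

path = asg_particle_path(5, sigma=1.0, rng=random.Random(42))
print(path[0], path[-1])  # starts at 5, ends at the single ultimate ancestor
```

Because the death rate is quadratic while the birth rate is only linear, the chain reaches the ultimate ancestor with probability one.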

  8. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude as compared to implementations for traditional central processing units are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.

  9. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respec...

  10. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  11. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution and by assigning calculations for equally-spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
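The per-node work this record assigns to the GPU's scalar processors is a Gaussian kernel sum; a serial reference version makes that structure explicit (isotropic kernel and bandwidth are illustrative assumptions, not the paper's setup):

```python
from math import exp, pi

def kde_grid(points, nodes, h):
    """2-D Gaussian kernel density estimate at each node; on a GPU each
    node's sum would map to one thread, here it is a plain loop."""
    norm = 1.0 / (len(points) * 2.0 * pi * h * h)
    out = []
    for nx, ny in nodes:
        s = sum(exp(-((px - nx) ** 2 + (py - ny) ** 2) / (2.0 * h * h))
                for px, py in points)
        out.append(norm * s)
    return out

# A single sample evaluated at its own location: density 1/(2*pi*h^2).
print(round(kde_grid([(0.0, 0.0)], [(0.0, 0.0)], 1.0)[0], 4))  # 0.1592
```

Each node's sum is independent of every other node's, which is why the workload maps so cleanly onto GPU threads.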

  12. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for a synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  13. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers' needs.

  14. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers' needs.

  15. Energy Efficient Iris Recognition With Graphics Processing Units

    National Research Council Canada - National Science Library

    Rakvic, Ryan; Broussard, Randy; Ngo, Hau

    2016-01-01

    .... In the past few years, however, this growth has slowed for central processing units (CPUs). Instead, there has been a shift to multicore computing, specifically with the general purpose graphic processing units (GPUs...

  16. Strategies for Stabilizing Nitrogenous Compounds in ECLSS Wastewater: Top-Down System Design and Unit Operation Selection with Focus on Bio-Regenerative Processes for Short and Long Term Scenarios

    Science.gov (United States)

    Lunn, Griffin M.

    2011-01-01

Water recycling and eventual nutrient recovery is crucial for surviving in or past low Earth orbit. New approaches and system architecture considerations need to be addressed to meet current and future system requirements. This paper proposes a flexible system architecture that breaks down pretreatment steps into discrete areas where multiple unit operations can be considered. An overview focusing on the urea and ammonia conversion steps allows an analysis of each process's strengths and weaknesses and synergy with upstream and downstream processing. Process technologies to be covered include chemical pretreatment, biological urea hydrolysis, chemical urea hydrolysis, combined nitrification-denitrification, nitrate nitrification, anammox denitrification, and regenerative ammonia absorption through struvite formation. Biological processes are considered mainly for their ability both to maximize water recovery and to produce nutrients for future plant systems. Unit operations can be considered for traditional equivalent system mass requirements in the near term or for what they can provide downstream in the form of usable chemicals or nutrients for the long-term closed-loop ecological control and life support system. Optimally, this would allow a system to meet the former while supporting the latter without major modification.

  17. Selective effects of nicotine on attentional processes.

    Science.gov (United States)

    Mancuso, G; Warburton, D M; Mélen, M; Sherwood, N; Tirelli, E

    1999-09-01

    It is now well established from electrophysiological and behavioural evidence that nicotine has effects on information processing. The results are usually explained either by a primary effect of nicotine or by a reversal effect of a nicotine-induced, abstinence deficit. In addition, there is dispute about the cognitive processes underlying the changes in performance. This study has approached the first question by using the nicotine patch, in order to administer nicotine chronically. In addition, we examined the effects of nicotine on attention with a selection of tests which assessed the intensity and selectivity features of attention, using the Random Letter Generation test, the Flexibility of Attention test and the Stroop test. Nicotine enhanced the speed of number generation and the speed of processing in both the control and interference conditions of the Stroop test. There were no effects on attentional switching of the Flexibility of Attention test. The results are consistent with the hypothesis that nicotine mainly improves the intensity feature of attention, rather than the selectivity feature.

  18. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Romy Cahyadi; K. Jo Min; Chung-Hsiao Wang; Nick Abi-Samra [College of Engineering, Ames, IA (USA)

    2003-11-01

The USA's electric power industry is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate the financial risk into the capital investment decision process of generation planning, this paper develops and analyses a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, the authors show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process will help utilities in the area of effective and efficient generation planning when financial risks are considered. 20 refs., 14 tabs.
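The CAPM screening step can be sketched directly: the required return is r_f + β(E[R_m] − r_f), and a unit is attractive when its expected return exceeds that hurdle. The betas, expected returns, and rates below are invented for illustration, not the paper's data:

```python
def capm_required_return(risk_free, market_return, beta):
    """CAPM hurdle rate: E[R] = r_f + beta * (E[R_m] - r_f)."""
    return risk_free + beta * (market_return - risk_free)

def rank_units(units, risk_free, market_return):
    """Rank candidate generation units by expected return minus the
    CAPM-required return (positive spread = attractive investment)."""
    ranked = [(name, round(exp_ret - capm_required_return(risk_free, market_return, beta), 4))
              for name, (exp_ret, beta) in units.items()]
    return sorted(ranked, key=lambda t: t[1], reverse=True)

# (expected return, beta) -- wind assumed least correlated with the market.
units = {"gas": (0.12, 1.2), "coal": (0.10, 0.9), "wind": (0.09, 0.6)}
print(rank_units(units, risk_free=0.04, market_return=0.10))
# [('wind', 0.014), ('gas', 0.008), ('coal', 0.006)]
```

Note how the lowest-beta unit can rank first despite the lowest raw expected return, which is the point of risk-adjusting the selection.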

  19. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Cahyadi, Romy; Jo Min, K. [College of Engineering, Ames, IA (United States); Chunghsiao Wang [LG and E Energy Corp., Louisville, KY (United States); Abi-Samra, Nick [Electric Power Research Inst., Palo Alto, CA (United States)

    2003-07-01

The electric power industry in many parts of the U.S.A. is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate the financial risk into the capital investment decision process of generation planning, in this paper we develop and analyse a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, we show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process developed in this paper will help utilities in the area of effective and efficient generation planning when financial risks are considered. (Author)

  20. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

Modern experiments search for extremely rare processes hidden in much larger background levels. As experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  1. Processing, testing and selecting blood components.

    Science.gov (United States)

    Jones, Alister; Heyes, Jennifer

    Transfusion of blood components can be an essential and lifesaving treatment for many patients. However, components must comply with a number of national requirements to ensure they are safe and fit for use. Transfusion of incorrect blood components can lead to mortality and morbidity in patients, which is why patient testing and blood selection are important. This second article in our five-part series on blood transfusion outlines the requirements for different blood components, the importance of the ABO and RhD blood group systems and the processes that ensure the correct blood component is issued to each patient.

  2. Selected papers on noise and stochastic processes

    CERN Document Server

    Wax, Nelson

    1954-01-01

Six classic papers on stochastic processes, selected to meet the needs of physicists, applied mathematicians, and engineers. Contents: 1. Chandrasekhar, S.: Stochastic Problems in Physics and Astronomy. 2. Uhlenbeck, G. E. and Ornstein, L. S.: On the Theory of the Brownian Motion. 3. Ming Chen Wang and Uhlenbeck, G. E.: On the Theory of the Brownian Motion II. 4. Rice, S. O.: Mathematical Analysis of Random Noise. 5. Kac, Mark: Random Walk and the Theory of Brownian Motion. 6. Doob, J. L.: The Brownian Movement and Stochastic Equations. Unabridged republication of the Dover reprint (1954). Pre

  3. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010 in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved to make the HDU project successful, the HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis.
Decision processes on integration and use of all new subsystems will be defined early in the project to

  4. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.
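For intuition, an inhomogeneous Poisson process with a bounded intensity, like those whose intensities are estimated here, can be simulated by Lewis-Shedler thinning. A minimal seeded sketch (the intensity function and the bound are illustrative choices, unrelated to the paper):

```python
import random

def thinned_poisson(intensity, lam_max, t_end, rng):
    """Lewis-Shedler thinning: draw candidate events from a homogeneous
    Poisson process of rate lam_max and keep each candidate at time t
    with probability intensity(t) / lam_max (requires intensity <= lam_max)."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

events = thinned_poisson(lambda t: 1.0 + 0.5 * t, lam_max=4.0, t_end=5.0,
                         rng=random.Random(7))
print(all(0.0 < t <= 5.0 for t in events))  # True
```

The accepted event times concentrate where the intensity is high, which is the empirical signal the estimator in the paper works from.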

  5. 3-D analysis of grain selection process

    Science.gov (United States)

    Arao, Tomoka; Esaka, Hisao; Shinozuka, Kei

    2012-07-01

It is known that grain selection plays an important role in the manufacturing process for turbine blades. There are some analytical and numerical models that treat grain selection; however, the detailed mechanism of grain selection in 3-D is still uncertain. Therefore, an experimental study using an Al-Cu alloy has been carried out in order to understand grain selection in 3-D. A mold made of Al2O3 was heated to 600 °C (the liquidus temperature of the alloy) and was set on a water-cooled copper chill plate. Molten Al-20 wt%Cu alloy was cast into the mold and a unidirectionally solidified ingot was prepared. The size of the ingot was approximately φ25 mm × 65 mm (height). To obtain the thermal history, 4 thermocouples were placed in the mold. It was confirmed that the alloy solidified unidirectionally from bottom to top. The solidified structure on a longitudinal cross section was observed and unidirectional solidification up to 40 mm was ensured. EBSD analysis was performed on horizontal cross sections at intervals of ca. 200 μm. These observations were carried out 7-5 mm from the bottom surface. The crystallographic orientation of the primary Al phase and the size of the solidified grains were characterized. A large solidified grain, whose crystallographic orientation is approximately along the heat flow direction, is observed near the lowest cross section. The areas of some grains decreased as solidification proceeded; on the other hand, the areas of others increased.

  6. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    of auditory neurons in the laminar nucleus of the torus semicircularis (TS) of X. laevis specializes in encoding vocalization click rates. We recorded single TS units while pure tones, natural calls, and synthetic clicks were presented directly to the tympanum via a vibration-stimulation probe. Synthesized...... click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies...

  7. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
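The interpolations benchmarked in this record are, in their simplest form, linear ("cloud-in-cell") stencils. A 1-D serial sketch of both directions, the memory-bandwidth-bound loops the paper accelerates, is given below; real codes tensor-product these weights in 2-D/3-D:

```python
def p2m_deposit(particles, n_cells, dx):
    """Particle-to-mesh: split each particle's weight linearly between
    its two neighbouring mesh nodes (cloud-in-cell deposit)."""
    mesh = [0.0] * (n_cells + 1)
    for x, w in particles:
        i = int(x / dx)      # left node index
        f = x / dx - i       # fractional position within the cell
        mesh[i] += w * (1.0 - f)
        mesh[i + 1] += w * f
    return mesh

def m2p_sample(mesh, x, dx):
    """Mesh-to-particle: linear interpolation of a mesh field at x."""
    i = int(x / dx)
    f = x / dx - i
    return mesh[i] * (1.0 - f) + mesh[i + 1] * f

mesh = p2m_deposit([(0.25, 1.0)], n_cells=2, dx=0.5)
print(mesh)  # [0.5, 0.5, 0.0] -- the particle's weight is conserved
print(m2p_sample([0.0, 1.0, 0.0], 0.25, 0.5))  # 0.5
```

Each pass streams the particle array against the mesh with almost no arithmetic per byte, which is why the paper finds memory bandwidth, not compute, to be the bottleneck.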

  8. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  9. Selective detachment process in column flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Dept. of Mining Engineering

    2006-05-15

    The selectivity in flotation columns involving the separation of particles of varying degrees of floatability is based on differential flotation rates in the collection zone, reflux action between the froth and collection zones, and differential detachment rates in the froth zone. Using well-known theoretical models describing the separation process and experimental data, froth zone and overall flotation recovery values were quantified for particles in an anthracite coal that have a wide range of floatability potential. For highly floatable particles, froth recovery had a very minimal impact on overall recovery while the recovery of weakly floatable material was decreased substantially by reductions in froth recovery values. In addition, under carrying-capacity limiting conditions, selectivity was enhanced by the preferential detachment of the weakly floatable material. Based on this concept, highly floatable material was added directly into the froth zone when treating the anthracite coal. The enriched froth phase reduced the product ash content of the anthracite product by five absolute percentage points while maintaining a constant recovery value.
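
    The differing sensitivity to froth recovery described above can be reproduced with the standard two-zone flotation model, in which overall recovery is R = Rc*Rf / (1 - Rc*(1 - Rf)) for collection-zone recovery Rc and froth recovery Rf. The sketch below uses illustrative parameter values, not the paper's anthracite data.

```python
# Two-zone flotation model (textbook form): overall recovery as a function of
# collection-zone recovery rc and froth-zone recovery rf. The parameter values
# are invented to illustrate the trend reported in the abstract.

def overall_recovery(rc, rf):
    """Overall recovery with reflux between froth and collection zones."""
    return rc * rf / (1.0 - rc * (1.0 - rf))

# Halving froth recovery barely affects strongly floatable particles...
strong = overall_recovery(0.95, 1.0), overall_recovery(0.95, 0.5)
# ...but sharply cuts the recovery of weakly floatable ones.
weak = overall_recovery(0.30, 1.0), overall_recovery(0.30, 0.5)
print(strong)   # approximately (0.95, 0.905)
print(weak)     # approximately (0.30, 0.176)
```

    This asymmetry is exactly the selectivity lever the authors exploit: lowering froth recovery rejects weakly floatable (high-ash) particles at little cost to the strongly floatable product.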

  10. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  11. Processing metallic glasses by selective laser melting

    Directory of Open Access Journals (Sweden)

    Simon Pauly

    2013-01-01

    Metallic glasses and their descendants, the so-called bulk metallic glasses (BMGs), can be regarded as frozen liquids with a high resistance to crystallization. The lack of a conventional structure turns them into a material exhibiting near-theoretical strength, low Young's modulus and large elasticity. These unique mechanical properties can only be obtained when the metallic melts are rapidly cooled to bypass the nucleation and growth of crystals. Most of the commonly known and used processing routes, such as casting, melt spinning or gas atomization, have intrinsic limitations regarding the complexity and dimensions of the geometries. Here, it is shown that selective laser melting (SLM), which is usually used to process conventional metallic alloys and polymers, can be applied to implement complex geometries and components from an Fe-based metallic glass. This approach is in principle viable for a large variety of metallic alloys and paves the way for the novel synthesis of materials and the development of parts with advanced functional and structural properties without limitations in size and intricacy.

  12. Preimplantation genetic diagnosis for gender selection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Colls, P.; Silver, L.; Olivera, G.; Weier, J.; Escudero, T.; Goodall, N.; Tomkin, G.; Munne, S.

    2009-08-20

    Preimplantation genetic diagnosis (PGD) for gender selection for non-medical reasons has been considered unethical by several authors and agencies in Western society, on the grounds that it disrupts the sex ratio, discriminates against women, and leads to the disposal of normal embryos of the non-desired gender. In this study, the analysis of a large series of PGD procedures for gender selection from a wide geographical area in the United States shows that, in general, there is no deviation in preference towards any specific gender, except for a preference for males in some ethnic populations of Chinese, Indian and Middle Eastern origin that represent a small percentage of the US population. In cases where only normal embryos of the non-desired gender are available, 45.5% of couples elect to cancel the transfer, while 54.5% are open to having embryos of the non-desired gender transferred, a fact strongly linked to the cultural and ethnic background of the parents. In addition, this study adds some evidence to the proposition that in couples with previous children of a given gender there is no biological predisposition towards producing embryos of that same gender. Based on these facts, it seems that objections to gender selection formulated by ethics committees and scientific societies are not well-founded.

  13. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  14. Selective visual attention in object detection processes

    Science.gov (United States)

    Paletta, Lucas; Goyal, Anurag; Greindl, Christian

    2003-03-01

    Object detection is an enabling technology that plays a key role in many application areas, such as content-based media retrieval. Attentive cognitive vision systems are proposed here, in which the focus of attention is directed towards the most relevant target. The most promising information is interpreted in a sequential process that dynamically makes use of knowledge and that enables spatial reasoning on the local object information. The presented work proposes an innovative application of attention mechanisms for object detection which is most general in its understanding of information and action selection. The attentive detection system uses a cascade of increasingly complex classifiers for the stepwise identification of regions of interest (ROIs) and recursively refined object hypotheses. While the coarsest classifiers are used to determine first approximations of a region of interest in the input image, more complex classifiers are used on more refined ROIs to give more confident estimates. Objects are modeled by local appearance-based representations and in terms of posterior distributions of the object samples in eigenspace. The discrimination function used to discern between objects is modeled by a radial basis function (RBF) network that has been compared with alternative networks and proved consistent and superior to other artificial neural networks for appearance-based object recognition. The experiments were conducted on the automatic detection of brand objects in Formula One broadcasts within the European Commission's cognitive vision project DETECT.

  15. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan Hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on the operation of the FCC unit was studied to identify the cause of the deteriorated product distribution associated with the increasingly heavy feedstock of the ARGG unit. To maximize the economic benefits of the ARGG unit, a series of measures, including modification of the catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature, once-through operation, and optimization of the catalyst regeneration technique, were adopted to adapt the ARGG unit to processing heavy feedstock with a carbon residue averaging 7%. The heavy oil processing technology has brought about apparent economic benefits.

  16. Criteria of selection of basic linguacultural units within the context of cross-cultural communication

    Directory of Open Access Journals (Sweden)

    Khalupo Olga Ivanovna

    2016-03-01

    The article considers the problems associated with the analysis and selection of the basic linguacultural units that are necessary for more effective cross-cultural communication. To make the dialogue between different cultures and languages more appropriate and productive, one must possess certain knowledge and skills, which are acquired in the process of learning. Important in this area is mastery of the means that prepare a person to communicate in a different communicative space. Such means, in our view, are the basic linguacultural units, which are considered carriers of information and expressions of cultural identity. An informed choice of these units contributes to forming a worldview and to understanding the linguacultural picture of the world community. The selection of linguistic units rests on the following criteria: information content, functionality, sufficiency, cultural identity, realism, pivotal importance to the basic sense, and social and cultural significance. Applying the proposed criteria makes it possible to select the material and the integral linguacultural components of cross-cultural interaction more appropriately, providing the material necessary for an adequate understanding of the processes occurring in a different communicative space.

  17. Process for selecting engineering tools : applied to selecting a SysML tool.

    Energy Technology Data Exchange (ETDEWEB)

    De Spain, Mark J.; Post, Debra S. (Sandia National Laboratories, Livermore, CA); Taylor, Jeffrey L.; De Jong, Kent

    2011-02-01

    Process for Selecting Engineering Tools outlines the process and tools used to select a SysML (Systems Modeling Language) tool. The process is general in nature and users could use the process to select most engineering tools and software applications.


  19. Selection of Technical Routes for Resid Processing

    Institute of Scientific and Technical Information of China (English)

    Hu Weiqing

    2006-01-01

    With the increasing trend of heavy crudes supply with deteriorated quality and demand for clean fuels, deep processing of residuum, in particular the processing of low-grade resid, has become the main source for enhancing economic benefits of oil refiners. This article has discussed the technology for processing of different resids and the advantages and disadvantages of the combination processes for resid processing, while pinpointing the directions for development and application of technologies for resid processing in China.

  20. Theory of Selection Operators on Hyperspaces and Multivalued Stochastic Processes

    Institute of Scientific and Technical Information of China (English)

    高勇; 张文修

    1994-01-01

    In this paper, a new concept of selection operators on hyperspaces (subsets spaces) is introduced, and the existence theorems for several kinds of selection operators are proved. Using the methods of selection operators, we give a selection characterization of identically distributed multivalued random variables and completely solve the vector-valued selection problem for sequences of multivalued random variables converging in distribution. The regular selections and Markov selections for multivalued stochastic processes are studied, and a discretization theorem for multivalued Markov processes is established. A theorem on the asymptotic martingale selections for compact and convex multivalued asymptotic martingale is proved.

  1. Parallelization of heterogeneous reactor calculations on a graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Malofeev, V. M., E-mail: vm-malofeev@mail.ru; Pal’shin, V. A. [National Research Center Kurchatov Institute (Russian Federation)

    2016-12-15

    Parallelization is applied to the neutron calculations performed by the heterogeneous method on a graphics processing unit. The parallel algorithm of the modified TREC code is described. The efficiency of the parallel algorithm is evaluated.

  2. Diffusion tensor fiber tracking on graphics processing units.

    Science.gov (United States)

    Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo

    2008-10-01

    Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.
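
    The abstract does not spell out how the variation index is defined, so the sketch below is a hypothetical version: a normalized mean point-to-point discrepancy between fiber trajectories computed on the CPU and on the GPU. The function name, arguments, and the normalization convention are all assumptions for illustration.

```python
# Hypothetical "variation index": mean Euclidean distance between corresponding
# points of two trajectories, divided by a reference scale (e.g. voxel size).
# Illustrative only; the paper's actual index may be defined differently.

def variation_index(cpu_pts, gpu_pts, scale):
    """Normalized mean discrepancy between two equal-length 3-D point lists."""
    assert len(cpu_pts) == len(gpu_pts)
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(cpu_pts, gpu_pts):
        total += ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
    return total / (len(cpu_pts) * scale)

cpu = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
gpu = [(0.0, 0.0, 0.0), (1.0, 0.001, 0.0)]   # tiny single-precision drift
print(variation_index(cpu, gpu, scale=1.0))  # small residual per point
```

    A small index would indicate, as in the paper, that GPU single-precision results remain acceptably close to the CPU reference.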

  3. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    Shumm, D.; Turetken, O.; Kokash, N.; Elgammal, A.; Leymann, F.; Heuvel, J. van den

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as the Sarbanes-Oxley Act or ISO 17799.

  4. 7 CFR 1469.6 - Enrollment criteria and selection process.

    Science.gov (United States)

    2010-01-01

    7 CFR 1469.6, General Provisions: Enrollment criteria and selection process. (a) Selection and funding of... land to degradation; (iv) State or national conservation and environmental issues, e.g., location of...

  5. Creativity: Intuitive processing outperforms deliberative processing in creative idea selection

    NARCIS (Netherlands)

    Zhu, Y.; Ritter, S.M.; Müller, B.C.N.; Dijksterhuis, A.J.

    2017-01-01

    Creative ideas are highly valued, and various techniques have been designed to maximize the generation of creative ideas. However, for actual implementation of creative ideas, the most creative ideas must be recognized and selected from a pool of ideas. Although idea generation and idea selection ...


  7. Behavioral Objectives for Selected Units in Business Education.

    Science.gov (United States)

    Hill, Richard K., Ed.; Schmidt, B. June, Ed.

    This is a catalog of behavioral objectives organized by units. Each unit contains an outline of the content, a goal statement, and general and specific objectives. The catalog contains a total of 48 units on: business behavior and psychology; business law; business math; business principles and organization; business terminology; communication and…

  8. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  9. What Is the Unit of Visual Attention? Object for Selection, but Boolean Map for Access

    Science.gov (United States)

    Huang, Liqiang

    2010-01-01

    In the past 20 years, numerous theories and findings have suggested that the unit of visual attention is the object. In this study, I first clarify 2 different meanings of unit of visual attention, namely the unit of access in the sense of measurement and the unit of selection in the sense of division. In accordance with this distinction, I argue…

  10. Adaptive-optics Optical Coherence Tomography Processing Using a Graphics Processing Unit*

    Science.gov (United States)

    Shafer, Brandon A.; Kriske, Jeffery E.; Kocaoglu, Omer P.; Turner, Timothy L.; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T.

    2015-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability. PMID:25570838

  11. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    Science.gov (United States)

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability.

  12. Subcontractor Selection Using Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Vesile Sinem Arikan Kargi

    2012-07-01

    Turkish textile firms operate in a heavily price-competitive environment due to globalization. Firms have to take several criteria into consideration, such as cost, quality and on-time delivery, in order to survive global market conditions and maintain profitability. To meet these criteria, contractor companies have to select the best subcontractor, so subcontractor selection is a key problem for the contractor company. The aim of this study is to solve the problem of Yeşim Textile, a contractor company, in selecting the best subcontractor for its customer Nike. To solve the problem, the main criteria and relevant sub-criteria of importance to Yeşim and Nike were first defined. Then, authorities from the firms were interviewed in order to formulate pairwise comparison matrices using Saaty's importance scale; in a sense, these matrices are the model of this study. The model, based on the AHP, was analyzed using the Expert Choice software, and the best subcontractors for Yeşim were determined from the model results. In addition, these results were analyzed by the firm's decision makers.
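
    The AHP computation referred to above can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to a priority vector (here via the common geometric-mean approximation rather than Expert Choice's eigenvector routine) and checked for consistency. The criteria and judgment values below are invented for illustration and are not Yeşim's data.

```python
import math

def ahp_priorities(m):
    """Geometric-mean approximation of the AHP principal eigenvector."""
    n = len(m)
    gm = [math.prod(row) ** (1.0 / n) for row in m]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(m, w):
    """Saaty's CR = CI / RI; judgments with CR < 0.1 are usually acceptable."""
    n = len(m)
    lam = sum(sum(m[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]               # random-index table (excerpt)
    return ci / ri

# Invented 3-criteria judgments: cost vs. quality vs. on-time delivery.
m = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
w = ahp_priorities(m)
print([round(x, 3) for x in w])       # → [0.637, 0.258, 0.105]
print(consistency_ratio(m, w) < 0.1)  # → True
```

    Subcontractor scores under each criterion are then aggregated with these weights; the alternative with the highest weighted score is selected.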

  13. Debris Control at Hydraulic Structures in Selected Areas of the United States and Europe

    Science.gov (United States)

    2007-11-02

    Debris Control at Hydraulic Structures in Selected Areas of the United States and Europe, by N. Wallerstein and C. R. Thorne (University of Nottingham) and S. R. Abt (Colorado State University); approved for public release, December 1997.

  14. Unit Operations for the Food Industry: Equilibrium Processes & Mechanical Operations

    OpenAIRE

    Guiné, Raquel

    2013-01-01

    Unit operations are an area of engineering that is at once fascinating and essential for industry in general and the food industry in particular. This book was prepared to serve academic and practical perspectives simultaneously. It is organized into two parts: unit operations based on equilibrium processes, and mechanical operations. Each topic starts with a presentation of the fundamental concepts and principles, followed by a discussion of ...

  15. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    The paper investigates the mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. The peculiarities of the construction are analyzed, and the method of finding the power of a linear logical transformation is applied to removing the characteristic words of a dictionary entry. An analysis of the results of the study and their perspectives are provided.

  16. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  17. Folklore and the College Selection Process Revisited

    Science.gov (United States)

    Caruso, Pete

    2012-01-01

    This paper is a response to Clinton F. Conrad's article, "Beyond the Folklore." Conrad's strategy for assessing undergraduate quality echoes the sentiments espoused by many admission and college counseling professionals over the years at various workshops for students and families that focus on navigating the process. As transcendent as the…

  18. Titanium processing using selective laser sintering

    Science.gov (United States)

    Harlan, Nicole Renee

    1999-11-01

    A materials development workstation specifically designed to test high temperature metal and metal-matrix composites for direct selective laser sintering (SLS) was constructed. Using the workstation, a titanium-aluminum alloy was sintered into single layer coupons to demonstrate the feasibility of producing titanium components using direct SLS. A combination of low temperature indirect SLS and colloidal infiltration was used to create "partially-stabilized" zirconia molds for titanium casting. The base material, stabilized zirconia mixed with a copolymer, was laser sintered into the desired mold geometry. The copolymer was pyrolyzed and replaced by a zirconia precursor. The flexural strength and surface roughness of the SLS-produced casting molds were sufficient for titanium casting trials. A laser-scanned human femur was used as the basis for a mold design and technology demonstration. Titanium castings produced from SLS molds exhibited typical as-cast microstructures and an average surface roughness (Ra) of 8 μm.

  19. Using Card Games to Simulate the Process of Natural Selection

    Science.gov (United States)

    Grilliot, Matthew E.; Harden, Siegfried

    2014-01-01

    In 1858, Darwin published "On the Origin of Species by Means of Natural Selection." His explanation of evolution by natural selection has become the unifying theme of biology. We have found that many students do not fully comprehend the process of evolution by natural selection. We discuss a few simple games that incorporate hands-on…
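
    A card-game simulation of natural selection along the lines described above can be sketched in a few lines: each card carries a trait value that sets its chance of surviving a round, and survivors reproduce to refill the deck. All rules and numbers here are invented for illustration, not taken from the article's games.

```python
# Toy selection game: cards with a higher "survival trait" are more likely to
# stay in play each round; the deck is refilled by copying survivors, so the
# fitter variant spreads through the population over repeated rounds.
import random

def play_round(deck, rng):
    """Cull the deck by fitness, then refill it by copying survivors."""
    survivors = [trait for trait in deck if rng.random() < trait]
    while len(survivors) < len(deck):
        survivors.append(rng.choice(survivors))
    return survivors

rng = random.Random(42)                 # fixed seed for a repeatable classroom run
deck = [0.2] * 50 + [0.8] * 50          # two trait variants, equally common
for _ in range(20):
    deck = play_round(deck, rng)

freq = deck.count(0.8) / len(deck)
print(freq)   # the fitter variant comes to dominate the deck
```

    Plotting freq after each round shows the characteristic S-shaped rise in the frequency of the favored variant, which is the point such hands-on games are designed to make visible.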

  20. Solvent selection methodology for pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Kumar Tula, Anjan; Gani, Rafiqul

    2016-01-01

    A method for the selection of appropriate solvents for the solvent swap task in pharmaceutical processes has been developed. This solvent swap method is based on the solvent selection method of Gani et al. (2006) and considers additional selection criteria such as boiling point difference, volati...

  1. Intermediate product selection and blending in the food processing industry

    DEFF Research Database (Denmark)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates...

  2. Process for selective grinding of coal

    Science.gov (United States)

    Venkatachari, Mukund K.; Benz, August D.; Huettenhain, Horst

    1991-01-01

    A process for preparing coal for use as a fuel: forming a coal-water slurry having solid coal particles with a particle size not exceeding about 80 microns, transferring the coal-water slurry to a solid bowl centrifuge, and operating it to classify the ground coal-water slurry to provide a centrate containing solid particles with a particle size distribution of from about 5 microns to about 20 microns and a centrifuge cake of solids having a particle size distribution of from about 10 microns to about 80 microns. The classifier cake is reground and mixed with fresh feed to the solid bowl centrifuge for additional classification.

  3. Impact of selection of cord blood units from the United States and Swiss registries on the cost of banking operations.

    Science.gov (United States)

    Bart, Thomas; Boo, Michael; Balabanova, Snejana; Fischer, Yvonne; Nicoloso, Grazia; Foeken, Lydia; Oudshoorn, Machteld; Passweg, Jakob; Tichelli, Andre; Kindler, Vincent; Kurtzberg, Joanne; Price, Thomas; Regan, Donna; Shpall, Elizabeth J; Schwabe, Rudolf

    2013-02-01

    Over the last 2 decades, cord blood (CB) has become an important source of blood stem cells. Clinical experience has shown that CB is a viable source for blood stem cells in the field of unrelated hematopoietic blood stem cell transplantation. Studies of CB units (CBUs) stored and ordered from the US (National Marrow Donor Program, NMDP) and Swiss (Swiss Blood Stem Cells, SBSQ) CB registries were conducted to assess whether these CBUs met the needs of transplantation patients, as evidenced by units being selected for transplantation. These data were compared to international banking and selection data (Bone Marrow Donors Worldwide, BMDW; World Marrow Donor Association, WMDA). Further analysis was conducted on whether current CB banking practices were economically viable given the units being selected from the registries for transplant. It should be mentioned that our analysis focused on usage, deliberately omitting any information about clinical outcomes of CB transplantation. A disproportionate number of units with high total nucleated cell (TNC) counts are selected, compared to the distribution of units by TNC available. Therefore, the decision to use a low threshold for banking purposes cannot be supported by economic analysis and may limit the economic viability of future public CB banking. We suggest significantly raising the TNC level used to determine a bankable unit. A level of 125 × 10^7 TNCs, maybe even 150 × 10^7 TNCs, might be a viable banking threshold. This would improve the return on inventory investments while meeting transplantation needs based on current selection criteria.

  4. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...
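
    Cost curves of the kind compiled in the paper are commonly fitted as power laws of a design parameter such as flow capacity. The sketch below uses invented placeholder coefficients, not values from the paper; it only illustrates the economies-of-scale behavior such curves encode.

```python
# Hypothetical capital-cost curve for one treatment unit process:
# cost = a * capacity**b, with b < 1 expressing economies of scale.
# Coefficients a and b are placeholders, not fitted values from the paper.

def unit_cost(capacity_mgd, a=1.5e5, b=0.7):
    """Illustrative power-law cost (USD) as a function of design capacity."""
    return a * capacity_mgd ** b

# Economies of scale: doubling capacity less than doubles cost.
print(unit_cost(2.0) / unit_cost(1.0))   # ratio is 2**0.7, about 1.62
```

    Summing such curves over the unit processes in a plant gives the kind of category-level cost estimate the paper's compiled models support.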

  5. Determinants of profitability of smallholder palm oil processing units ...

    African Journals Online (AJOL)

    ... of profitability of smallholder palm oil processing units in Ogun state, Nigeria. ... as well as their geographical spread covering the entire land space of the state. ... The F-ratio value is statistically significant (P<0.01) implying that the model is ...

  6. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the performance of an implementation and demonstrate that while there are some implementational pitfalls, a careful implementation can result in impressive improvements.

  7. Utilizing Graphics Processing Units for Network Anomaly Detection

    Science.gov (United States)

    2012-09-13

    matching system using deterministic finite automata and extended finite automata, resulting in a speedup of 9x over the CPU implementation [SGO09]. Kovach ... pages 14–18, 2009. [Kov10] Nicholas S. Kovach. Accelerating malware detection via a graphics processing unit, 2010. http://www.dtic.mil/dtic/tr

  8. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2010-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the Graphics Processing Unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan
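The Fourier cosine (COS) expansion referred to above can be sketched in pure Python for the Black-Scholes case. This is an illustrative reimplementation from the published COS formulas, not the authors' GPU code; the market parameters are arbitrary:

```python
import cmath
import math

def cos_call_price(S0, K, T, r, sigma, N=256, L=10.0):
    """European call via the Fourier cosine (COS) expansion under Black-Scholes."""
    x = math.log(S0 / K)                      # log-moneyness
    c1 = x + (r - 0.5 * sigma ** 2) * T       # 1st cumulant of ln(S_T / K)
    c2 = sigma ** 2 * T                       # 2nd cumulant
    a, b = c1 - L * math.sqrt(c2), c1 + L * math.sqrt(c2)  # truncation range

    total = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        # Cosine coefficients of the call payoff on [0, b]:
        chi = (math.cos(u * (b - a)) * math.exp(b) - math.cos(-u * a)
               + u * math.sin(u * (b - a)) * math.exp(b)
               - u * math.sin(-u * a)) / (1.0 + u * u)
        psi = b if k == 0 else (math.sin(u * (b - a)) - math.sin(-u * a)) / u
        Vk = 2.0 / (b - a) * K * (chi - psi)
        # Black-Scholes characteristic function of ln(S_T / S0):
        phi = cmath.exp(1j * u * (r - 0.5 * sigma ** 2) * T
                        - 0.5 * sigma ** 2 * u * u * T)
        term = (phi * cmath.exp(1j * u * (x - a))).real * Vk
        total += 0.5 * term if k == 0 else term   # first term gets weight 1/2
    return math.exp(-r * T) * total

price = cos_call_price(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

The GPU implementation parallelizes this sum over series terms and over the multiple strikes mentioned in the abstract.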

  9. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2014-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the graphics processing unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan

  10. The Added Value of the Project Selection Process

    Directory of Open Access Journals (Sweden)

    Adel Oueslati

    2016-06-01

    Full Text Available The project selection process comes in the first stage of the overall project management life cycle. It has a very important impact on organization success. The present paper provides definitions of the basic concepts and tools related to the project selection process. It aims to stress the added value of this process for the entire organization's success. Mastery of the project selection process is the right way for any organization to ensure that it will do the right project with the right resources at the right time and within the right priorities.

  11. Selective decontamination of the digestive tract and selective oropharyngeal decontamination in intensive care unit patients : a cost-effectiveness analysis

    NARCIS (Netherlands)

    Oostdijk, Evelien A. N.; de Wit, G. A.; Bakker, Marina; de Smet, Anne-Marie; Bonten, M. J. M.

    2013-01-01

    Objective: To determine costs and effects of selective digestive tract decontamination (SDD) and selective oropharyngeal decontamination (SOD) as compared with standard care (ie, no SDD/SOD (SC)) from a healthcare perspective in Dutch Intensive Care Units (ICUs). Design: A post hoc analysis of a pre

  12. Fast Pyrolysis Process Development Unit for Validating Bench Scale Data

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.; Jones, Samuel T. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.

    2010-03-31

    The purpose of this project was to prepare and operate a fast pyrolysis process development unit (PDU) that can validate experimental data generated at the bench scale. In order to do this, a biomass preparation system, a modular fast pyrolysis fluidized bed reactor, modular gas clean-up systems, and modular bio-oil recovery systems were designed and constructed. Instrumentation for centralized data collection and process control was integrated. The bio-oil analysis laboratory was upgraded with the addition of analytical equipment needed to measure C, H, O, N, S, P, K, and Cl. To provide a consistent material for processing through the fluidized bed fast pyrolysis reactor, the existing biomass preparation capabilities of the ISU facility needed to be upgraded. A stationary grinder was installed to reduce biomass from bale form to 5-10 cm lengths. A 25 kg/hr rotary kiln drier was installed. It has the ability to lower moisture content to the desired level of less than 20 wt%. An existing forage chopper was upgraded with new screens. It is used to reduce biomass to the desired particle size of 2-25 mm fiber length. To complete the material handling between these pieces of equipment, a bucket elevator and two belt conveyors must be installed. The bucket elevator has been installed. The conveyors are being procured using other funding sources. Fast pyrolysis bio-oil, char and non-condensable gases were produced from an 8 kg/hr fluidized bed reactor. The bio-oil was collected in a fractionating bio-oil collection system that produced multiple fractions of bio-oil. This bio-oil was fractionated through two separate, but equally important, mechanisms within the collection system. The aerosols and vapors were selectively collected by utilizing laminar flow conditions to prevent aerosol collection and electrostatic precipitators to collect the aerosols. The vapors were successfully collected through a selective condensation process. The combination of these two mechanisms ...

  13. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While ... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
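The Monte Carlo uncertainty-analysis step can be sketched generically. The melt-depth model, calibration constant, input uncertainties, and acceptance band below are hypothetical stand-ins for the paper's calibrated 3D thermal model:

```python
import random

def melt_depth(P, v, C=0.2):
    """Toy melt-depth model d = C * P / v (hypothetical calibration constant)."""
    return C * P / v

def process_reliability(n=20000, seed=1):
    """Sample uncertain inputs, propagate them, and report the in-spec fraction."""
    rng = random.Random(seed)
    depths = []
    ok = 0
    for _ in range(n):
        P = rng.gauss(200.0, 5.0)      # laser power [W] with uncertainty
        v = rng.gauss(1000.0, 30.0)    # scan speed [mm/s] with uncertainty
        d = melt_depth(P, v)
        depths.append(d)
        if 0.035 <= d <= 0.045:        # hypothetical acceptance band [mm]
            ok += 1
    # Reliability estimate plus the predicted output range.
    return ok / n, min(depths), max(depths)

rel, d_min, d_max = process_reliability()
```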

  14. Selecting public relations personnel of hospitals by analytic network process.

    Science.gov (United States)

    Liao, Sen-Kuei; Chang, Kuei-Lun

    2009-01-01

    This study describes the use of the analytic network process (ANP) in the Taiwanese hospital public relations personnel selection process. Starting with interviews of 48 practitioners and executives in north Taiwan, we collected selection criteria. We then retained the 12 critical criteria that were mentioned more than 40 times by these respondents: interpersonal skill, experience, negotiation, language, ability to follow orders, cognitive ability, adaptation to environment, adaptation to company, emotion, loyalty, attitude, and response. Finally, we discussed with the 20 executives how to group these criteria into three perspectives to structure the hierarchy for hospital public relations personnel selection. After discussing with practitioners and executives, we find that the selection criteria are interrelated. The ANP, which incorporates interdependence relationships, is a new approach for multi-criteria decision-making. Thus, we apply ANP to select the optimal public relations personnel of hospitals. An empirical study of public relations personnel selection problems in Taiwan hospitals is conducted to illustrate how the selection procedure works.
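The screening step, retaining only criteria mentioned more than 40 times across the interviews, is straightforward to express; the mention counts below are hypothetical:

```python
from collections import Counter

# Hypothetical tallies of how often each candidate criterion was mentioned
# across the interviews (the paper reports only which criteria survived).
mention_counts = Counter({
    "interpersonal skill": 47, "experience": 45, "negotiation": 44,
    "language": 43, "appearance": 31, "typing speed": 12,
})

# Keep only criteria mentioned more than 40 times.
retained = [c for c, n in mention_counts.items() if n > 40]
```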

  15. Halfphones: a backoff mechanism for Diphone Unit Selection Synthesis

    CSIR Research Space (South Africa)

    Louw, JA

    2006-11-01

    Full Text Available This lack of naturalness can be attributed, at least in part, to the limited set of units from which speech is chosen, coupled with the need to prosodically modify the speech signal of each diphone. Backoff mechanisms in text-to-speech provide a means...

  16. Point process models for household distributions within small areal units

    Directory of Open Access Journals (Sweden)

    Zack W. Almquist

    2012-06-01

    Full Text Available Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
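A minimal baseline of the idea is a homogeneous Poisson point process on a rectangular areal unit (the models in the paper are richer, allowing clustering and covariate effects); the dimensions and intensity here are arbitrary:

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's method for a Poisson draw (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def simulate_households(intensity, width, height, seed=42):
    """Homogeneous Poisson point process on a width x height areal unit:
    draw a Poisson count, then place that many households uniformly."""
    rng = random.Random(seed)
    n = _poisson(rng, intensity * width * height)
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

pts = simulate_households(intensity=5.0, width=4.0, height=3.0)  # ~60 expected
```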

  17. Risk calculations in the manufacturing technology selection process

    DEFF Research Database (Denmark)

    Farooq, S.; O'Brien, C.

    2010-01-01

    Purpose - The purpose of this paper is to present results obtained from a developed technology selection framework and provide a detailed insight into the risk calculations and their implications in the manufacturing technology selection process. Design/methodology/approach - The results illustrated in the paper are the outcome of an action research study that was conducted in an aerospace company. Findings - The paper highlights the role of risk calculations in the manufacturing technology selection process by elaborating the contribution of risk associated with manufacturing technology alternatives... Originality/value - The paper explains the process of risk calculation in manufacturing technology selection by dividing the decision-making environment into manufacturing... and supports an industrial manager in achieving objective and comprehensive decisions regarding selection of a manufacturing technology.

  18. Natural Selection Is a Sorting Process: What Does that Mean?

    Science.gov (United States)

    Price, Rebecca M.

    2013-01-01

    To learn why natural selection acts only on existing variation, students categorize processes as either creative or sorting. This activity helps students confront the misconception that adaptations evolve because species need them.

  19. Process for selected gas oxide removal by radiofrequency catalysts

    Science.gov (United States)

    Cha, Chang Y.

    1993-01-01

    This process removes gas oxides from flue gas by adsorption on a char bed, followed by radiofrequency catalysis that enhances removal through selected reactions. Common gas oxides include SO₂ and NOₓ.

  20. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using the ground radar is used to test the performance of the proposed algorithm. The performance of the proposed algorithm is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is over 100 times less than that using the conventional central processing unit (CPU).
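A one-dimensional sketch of the hybrid scheme, with hypothetical dynamics and noise levels standing in for orbit propagation and radar measurements: propagate a Monte Carlo particle cloud while no observation is available, then moment-match it to a Gaussian and apply a Kalman-style update when one arrives:

```python
import math
import random
import statistics

def propagate(particles, rng):
    # Hypothetical nonlinear dynamics with process noise, standing in for
    # numerically propagating each Monte Carlo sample along an orbit.
    return [x + 0.1 * math.sin(x) + rng.gauss(0.0, 0.05) for x in particles]

def gaussian_update(particles, z, r_var):
    # Moment-match the particle cloud to a Gaussian, then apply a scalar
    # Kalman update (measurement model H = 1, measurement variance r_var).
    m = statistics.fmean(particles)
    p = statistics.variance(particles)
    gain = p / (p + r_var)
    return m + gain * (z - m), (1.0 - gain) * p

rng = random.Random(0)
cloud = [rng.gauss(1.0, 0.3) for _ in range(2000)]
for _ in range(5):                 # five propagation steps with no observation
    cloud = propagate(cloud, rng)
mean, var = gaussian_update(cloud, z=1.6, r_var=0.01)
```

The GPU enters in the propagation step, where each particle can be advanced by an independent thread.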

  1. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods to a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
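The abstraction is the Ising objective below; for small instances, a brute-force enumeration of the kind used as a benchmarking baseline can check a device's answers (the fields and couplings here are a toy ferromagnetic chain):

```python
from itertools import product

def ising_energy(spins, h, J):
    """Ising objective: sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j, s_i in {-1,+1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def brute_force_ground_state(n, h, J):
    """Exhaustive baseline over all 2^n spin states (only viable for small n)."""
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(s, h, J))

h = [0.0, 0.0, 0.0]                 # no local fields
J = {(0, 1): -1.0, (1, 2): -1.0}    # ferromagnetic chain: aligned spins win
best = brute_force_ground_state(3, h, J)
```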

  2. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme of a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  3. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    The PE (Portable Executable) format is an updated version of the Common Object File Format (COFF) [Mic06].

  4. An Architecture of Deterministic Quantum Central Processing Unit

    OpenAIRE

    Xue, Fei; Chen, Zeng-Bing; Shi, Mingjun; Zhou, Xianyi; Du, Jiangfeng; Han, Rongdian

    2002-01-01

    We present an architecture of a QCPU (Quantum Central Processing Unit), based on the discrete quantum gate set, that can be programmed to approximate any n-qubit computation in a deterministic fashion. It can be built efficiently to implement computations with any required accuracy. The QCPU makes it possible to implement universal quantum computation with fixed, general-purpose hardware. Thus the complexity of the quantum computation can be put into the software rather than the hardware.

  5. BitTorrent Processing Unit: The Outlook for BPU Development

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic, graphics, and input/output processing were all handled by the CPU (Central Processing Unit). As processing became more specialized, however, NVIDIA was the first to split graphics processing off, proposing the concept of the GPU (Graphics Processing Unit) in 1999. Eight years on, the GPU has become the mainstay of graphics processing, familiar to every enthusiast. Recently, two Taiwanese companies have put forward the concept of the BPU (BitTorrent Processing Unit). Below, let us take a look at this brand-new product concept.

  6. Evolvability Is an Evolved Ability: The Coding Concept as the Arch-Unit of Natural Selection.

    Science.gov (United States)

    Janković, Srdja; Ćirković, Milan M

    2016-03-01

    Physical processes that characterize living matter are qualitatively distinct in that they involve encoding and transfer of specific types of information. Such information plays an active part in the control of events that are ultimately linked to the capacity of the system to persist and multiply. This algorithmicity of life is a key prerequisite for its Darwinian evolution, driven by natural selection acting upon stochastically arising variations of the encoded information. The concept of evolvability attempts to define the total capacity of a system to evolve new encoded traits under appropriate conditions, i.e., the accessible section of total morphological space. Since this is dependent on previously evolved regulatory networks that govern information flow in the system, evolvability itself may be regarded as an evolved ability. The way information is physically written, read and modified in living cells (the "coding concept") has not changed substantially during the whole history of the Earth's biosphere. This biosphere, be it alone or one of many, is, accordingly, itself a product of natural selection, since the overall evolvability conferred by its coding concept (nucleic acids as information carriers with the "rulebook of meanings" provided by codons, as well as all the subsystems that regulate various conditional information-reading modes) certainly played a key role in enabling this biosphere to survive up to the present, through alterations of planetary conditions, including at least five catastrophic events linked to major mass extinctions. We submit that, whatever the actual prebiotic physical and chemical processes may have been on our home planet, or may, in principle, occur at some time and place in the Universe, a particular coding concept, with its respective potential to give rise to a biosphere, or class of biospheres, of a certain evolvability, may itself be regarded as a unit (indeed the arch-unit) of natural selection.

  8. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data transfer time and parallel computing time. Further, according to the different features of different memories, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
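As a reference for the per-pixel work that the CUDA kernels parallelize, here is a pure-Python version of the sharpening step with the standard 4-neighbour kernel (one interior pixel per would-be GPU thread); this is not the paper's code:

```python
# Identity plus negative Laplacian: the classic 4-neighbour sharpening kernel.
KERNEL = [[0, -1, 0],
          [-1, 5, -1],
          [0, -1, 0]]

def sharpen(img):
    """Laplacian-sharpen an 8-bit grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += KERNEL[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))   # clamp to the 8-bit range
    return out
```

On the GPU, each output pixel becomes one thread, and the speedup of the improved scheme comes from staging the image tile in shared memory so neighbouring threads reuse each other's loads.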

  9. A review of channel selection algorithms for EEG signal processing

    Science.gov (United States)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed, with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.
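A minimal filtering-style selector can be sketched by ranking channels on a single feature, here signal variance (practical methods use richer features such as band power or mutual information); the channel names and signals below are synthetic:

```python
import statistics

def select_channels(signals, k):
    """signals: {channel_name: list of samples}; keep the k highest-variance channels."""
    ranked = sorted(signals,
                    key=lambda ch: statistics.pvariance(signals[ch]),
                    reverse=True)
    return ranked[:k]

signals = {
    "Fz": [0.0, 0.1, -0.1, 0.0],       # nearly flat channel
    "Cz": [5.0, -5.0, 5.0, -5.0],      # high-variance channel
    "Pz": [1.0, -1.0, 1.0, -1.0],
}
picked = select_channels(signals, 2)
```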

  10. Selection of Fuel by Using Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Asilata M. Damle,

    2015-04-01

    Full Text Available Selection of fuel is a very important and critical decision one has to make. Various criteria are to be considered while selecting a fuel. Some of the important criteria are fuel economy, availability of fuel, pollution from the vehicle, and maintenance of the vehicle. Selection of the best fuel is a complex situation. It needs a multi-criteria analysis. Earlier, solutions to the problem were found by applying classical numerical methods which took into account only the technical and economic merits of the various alternatives. By applying multi-criteria tools, it is possible to obtain more realistic results. This paper gives a systematic analysis for selection of fuel by using the Analytical Hierarchy Process (AHP). This is a multi-criteria decision-making process. By using AHP we can select the fuel by comparing various factors in a mathematical model. This is a scientific method to find out the best fuel by making pairwise comparisons.
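The AHP computation behind such a study can be sketched with the row geometric-mean approximation to the principal-eigenvector priorities; the criteria weights used to build the (consistent-by-construction) pairwise matrix are hypothetical:

```python
import math

def ahp_priorities(matrix):
    """Priority vector via the row geometric-mean approximation of AHP."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical true weights for (economy, availability, pollution, maintenance);
# a_ij = w_i / w_j gives a perfectly consistent pairwise-comparison matrix.
w = [0.4, 0.3, 0.2, 0.1]
matrix = [[wi / wj for wj in w] for wi in w]
priorities = ahp_priorities(matrix)
```

Because the matrix is consistent, the recovered priorities equal the original weights; with real judgments, the matrix is inconsistent and the geometric mean gives an approximation.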

  11. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly...

  12. Selection process for trade study: Graphite Composite Primary Structure (GCPS)

    Science.gov (United States)

    Greenberg, H. S.

    1994-01-01

    This TA 2 document describes the selection process that will be used to identify the most suitable structural configuration for an SSTO winged vehicle capable of delivering 25,000 lbs to a 220 nm circular orbit at 51.6 degree inclination. The most suitable unpressurized graphite composite structures and material selections are within this configuration and will be the prototype design for subsequent design and analysis, and the basis for the design and fabrication of payload bay, wing, and thrust structure full-scale test articles representing segments of the prototype structures. The selection process for this TA 2 trade study is the same as that for the TA 1 trade study. As the trade study progresses, additional insight may result in modifications to the selection criteria within this process. Such modifications will result in an update of this document as appropriate.

  13. Comparison of selection methods to deduce natural background levels for groundwater units

    NARCIS (Netherlands)

    Griffioen, J.; Passier, H.F.; Klein, J.

    2008-01-01

    Establishment of natural background levels (NBL) for groundwater is commonly performed to serve as reference when assessing the contamination status of groundwater units. We compare various selection methods to establish NBLs using groundwater quality data for four hydrogeologically different areas

  14. Decontamination of cephalosporin-resistant Enterobacteriaceae during selective digestive tract decontamination in intensive care units

    NARCIS (Netherlands)

    Oostdijk, E.A.; Smet, A.M. de; Kesecioglu, J.; Bonten, M.J.; Hoeven, J.G. van der; Pickkers, P.; Sturm, P.D.; Voss, A.

    2012-01-01

    OBJECTIVES: Prevalences of cephalosporin-resistant Enterobacteriaceae are increasing globally, especially in intensive care units (ICUs). The effect of selective digestive tract decontamination (SDD) on the eradication of cephalosporin-resistant Enterobacteriaceae from the intestinal tract is unknow

  15. Decontamination of cephalosporin-resistant Enterobacteriaceae during selective digestive tract decontamination in intensive care units

    NARCIS (Netherlands)

    Oostdijk, Evelien A. N.; de Smet, Anne Marie G. A.; Kesecioglu, Jozef; Bonten, Marc J. M.

    2012-01-01

    Prevalences of cephalosporin-resistant Enterobacteriaceae are increasing globally, especially in intensive care units (ICUs). The effect of selective digestive tract decontamination (SDD) on the eradication of cephalosporin-resistant Enterobacteriaceae from the intestinal tract is unknown. We quanti

  17. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th

  18. Concurrent materials and process selection in conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Kleban, Stephen D.; Knorovsky, Gerald A.

    2000-08-16

    A method for concurrent selection of materials and a joining process based on product requirements using a knowledge-based, constraint satisfaction approach facilitates the product design and manufacturing process. Using a Windows-based computer video display and a database of materials and their properties, the designer can ascertain the preferred composition of two parts based on various operating/environmental constraints such as load, temperature, lifetime, etc. Optimum joining of the two parts may simultaneously be determined using a joining-process database, based upon the selected composition of the components as well as the operating/environmental constraints.
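The constraint-satisfaction idea can be sketched as two filtering passes: first materials against operating requirements, then joining processes against the surviving materials. The property table and compatibility sets below are hypothetical:

```python
# Hypothetical materials database with simplified properties.
materials = {
    "Al 6061":   {"max_temp_C": 200, "strength_MPa": 310},
    "Ti-6Al-4V": {"max_temp_C": 400, "strength_MPa": 950},
    "PEEK":      {"max_temp_C": 250, "strength_MPa": 100},
}
# Hypothetical joining-process database: which materials each process can join.
joining = {
    "TIG welding":      {"Al 6061", "Ti-6Al-4V"},
    "adhesive bonding": {"Al 6061", "Ti-6Al-4V", "PEEK"},
}

def select(min_temp_C, min_strength_MPa):
    """Filter materials by operating constraints, then keep joining processes
    compatible with every surviving material."""
    mats = {m for m, p in materials.items()
            if p["max_temp_C"] >= min_temp_C
            and p["strength_MPa"] >= min_strength_MPa}
    procs = {j for j, compat in joining.items() if mats and mats <= compat}
    return mats, procs

mats, procs = select(min_temp_C=300, min_strength_MPa=500)
```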

  19. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute $u\\overline{u}\\to n\\gamma$ ($n=2$ to 8) processes in $pp$ collisions at $\\sqrt{s} = 14$ TeV by transferring the MadGraph generated HELAS amplitudes (FORTRAN) into newly developed HEGET ({\\bf H}ELAS {\\bf E}valuation with {\\bf G}PU {\\bf E}nhanced {\\bf T}echnology) codes written in CUDA, a C-platform developed by NVIDIA for general purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  20. Redox processes and water quality of selected principal aquifer systems

    Science.gov (United States)

    McMahon, P.B.; Chapelle, F.H.

    2008-01-01

    Reduction/oxidation (redox) conditions in 15 principal aquifer (PA) systems of the United States, and their impact on several water quality issues, were assessed from a large data base collected by the National Water-Quality Assessment Program of the USGS. The logic of these assessments was based on the observed ecological succession of electron acceptors such as dissolved oxygen, nitrate, and sulfate and threshold concentrations of these substrates needed to support active microbial metabolism. Similarly, the utilization of solid-phase electron acceptors such as Mn(IV) and Fe(III) is indicated by the production of dissolved manganese and iron. An internally consistent set of threshold concentration criteria was developed and applied to a large data set of 1692 water samples from the PAs to assess ambient redox conditions. The indicated redox conditions then were related to the occurrence of selected natural (arsenic) and anthropogenic (nitrate and volatile organic compounds) contaminants in ground water. For the natural and anthropogenic contaminants assessed in this study, considering redox conditions as defined by this framework of redox indicator species and threshold concentrations explained many water quality trends observed at a regional scale. An important finding of this study was that samples indicating mixed redox processes provide information on redox heterogeneity that is useful for assessing common water quality issues. Given the interpretive power of the redox framework and given that it is relatively inexpensive and easy to measure the chemical parameters included in the framework, those parameters should be included in routine water quality monitoring programs whenever possible.
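The threshold logic of such a framework can be sketched as a simple classifier; the threshold values below are illustrative placeholders, not the criteria used in the study:

```python
def redox_category(o2, no3, mn, fe):
    """Assign a dominant redox process from electron-acceptor concentrations
    (all in mg/L). Thresholds here are illustrative placeholders; samples
    meeting several criteria at once would indicate mixed redox conditions."""
    if o2 >= 0.5:
        return "oxic"
    if no3 >= 0.5:
        return "nitrate-reducing"
    if mn >= 0.05:
        return "manganese-reducing"
    if fe >= 0.1:
        return "iron-reducing"
    return "sulfate-reducing or methanogenic"

category = redox_category(o2=3.2, no3=1.0, mn=0.0, fe=0.0)
```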

  1. Introduction to gas lasers with emphasis on selective excitation processes

    CERN Document Server

    Willett, Colin S

    1974-01-01

    Introduction to Gas Lasers: Population Inversion Mechanisms focuses on important processes in gas discharge lasers and basic atomic collision processes that operate in a gas laser. Organized into six chapters, this book first discusses the historical development and basic principles of gas lasers. Subsequent chapters describe the selective excitation processes in gas discharges and the specific neutral, ionized and molecular laser systems. This book will be a valuable reference on the behavior of gas-discharge lasers to anyone already in the field.

  2. Economic Comparison of Selected Processing Alternatives for Alfalfa

    OpenAIRE

    Bates, Dan J.

    1992-01-01

    Processing alfalfa for export is of significant interest to areas like Millard County, the largest hay-producing county in Utah. In the past year there have been significant reductions in the price of hay as a result of increased supplies in the central and western United States. This thesis analyzes the benefits and costs of processing alfalfa into cubes and recompressed bales in order to enter the export market. Costs of production were estimated through the use of enterprise budgets ...

  3. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting-Selection Guidelines.

    Science.gov (United States)

    Gokuldoss, Prashanth Konda; Kolla, Sri; Eckert, Jürgen

    2017-06-19

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most research on additive manufacturing techniques is focused on the development of materials/process parameters/product design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. Yet we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Since no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  4. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting—Selection Guidelines

    Directory of Open Access Journals (Sweden)

    Prashanth Konda Gokuldoss

    2017-06-01

    Full Text Available Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most research on additive manufacturing techniques is focused on the development of materials/process parameters/product design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. Yet we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Since no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  5. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting—Selection Guidelines

    Science.gov (United States)

    Konda Gokuldoss, Prashanth; Kolla, Sri; Eckert, Jürgen

    2017-01-01

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most research on additive manufacturing techniques is focused on the development of materials/process parameters/product design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. Yet we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Since no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties. PMID:28773031

  6. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research.

  7. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
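The solver structure described here, conjugate gradient with a Jacobi (diagonal) preconditioner, is built entirely from matrix-vector products, dot products, and vector updates. A minimal CPU sketch in NumPy illustrates the iteration; a GPGPU implementation offloads exactly these linear-algebra kernels:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=200):
    """Conjugate gradient with a Jacobi preconditioner for a symmetric
    positive-definite A.  Every step is a mat-vec, dot product, or vector
    update -- the operations a GPGPU implementation would perform on-device."""
    x = np.zeros_like(b)
    Minv = 1.0 / np.diag(A)            # Jacobi preconditioner M^-1
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:    # convergence check on the residual
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The more elaborate preconditioners named in the abstract (incomplete LU, polynomial) slot into the same loop in place of the `Minv * r` step.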

  8. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  9. Supplier Selection in Dynamic Environment using Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prince Agarwal

    2014-08-01

    Full Text Available In today’s highly competitive business environment, with rapidly changing customer demands and the advent of enterprise-wide information systems, managers are bound to think beyond the conventional business processes and devise new ways to squeeze out costs and improve performance without compromising on quality. Supplier evaluation and selection is one such area, and it determines the success of any manufacturing firm. Supplier selection is the problem wherein the company decides which vendor to select in order to gain the strategic and operational advantage of meeting the customers’ varying demands and fighting the fierce competition. This paper presents a simple model based on the Analytic Hierarchy Process (AHP) to help decision makers in supplier evaluation and selection, taking into account the firm’s requirements. The article is intended to help new scholars and researchers understand the AHP model and see its different facets at first sight.
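The AHP step at the heart of such a model can be sketched with the common geometric-mean approximation of the priority vector; the criteria and pairwise comparison values below are invented for illustration:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean method,
    plus the consistency index.  `pairwise` is a reciprocal comparison
    matrix (a_ij = relative importance of criterion i over criterion j)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    gm = A.prod(axis=1) ** (1.0 / n)   # row geometric means
    w = gm / gm.sum()                  # normalised priority vector
    lam_max = (A @ w / w).mean()       # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)       # consistency index
    return w, ci

# Hypothetical supplier criteria: price vs. quality vs. delivery
M = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, ci = ahp_weights(M)
```

A low consistency index (conventionally CI/RI below 0.1) indicates the decision maker's judgements are acceptably consistent.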

  10. An International Perspective on Pharmacy Student Selection Policies and Processes.

    Science.gov (United States)

    Shaw, John; Kennedy, Julia; Jensen, Maree; Sheridan, Janie

    2015-10-25

    Objective. To reflect on selection policies and procedures for programs at pharmacy schools that are members of an international alliance of universities (Universitas 21). Methods. A questionnaire on selection policies and procedures was distributed to admissions directors at participating schools. Results. Completed questionnaires were received from 7 schools in 6 countries. Although marked differences were noted in the programs in different countries, there were commonalities in the selection processes. There was an emphasis on previous academic performance, especially in science subjects. With one exception, all schools had some form of interview, with several having moved to multiple mini-interviews in recent years. Conclusion. The majority of pharmacy schools in this survey relied on traditional selection processes. While there was increasing use of multiple mini-interviews, the authors suggest that additional new approaches may be required in light of the changing nature of the profession.

  11. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can thus gauge the congestion level among the ONUs awaiting registration, and this information can be exploited to adjust the size of the quiet window and so decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
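The effect being exploited, a larger quiet window lowering the collision probability among registering ONUs, can be illustrated with a toy Monte-Carlo model. This is a simplification for intuition only; the paper's CET estimates congestion from observed collisions rather than simulating them:

```python
import random

def registration_success_ratio(n_onus, window_slots, trials=10000):
    """Monte-Carlo estimate of the ONU registration success ratio under a
    toy model: each ONU responds in a uniformly random slot of the quiet
    window, and an ONU registers only if its slot is unshared."""
    ok = 0
    for _ in range(trials):
        slots = [random.randrange(window_slots) for _ in range(n_onus)]
        ok += sum(slots.count(s) == 1 for s in slots)  # uncollided ONUs
    return ok / (trials * n_onus)
```

Under this model, widening the quiet window from a few slots to hundreds raises the per-attempt success ratio sharply, which is the trade-off the OLT's congestion estimate is meant to manage.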

  12. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  14. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary: Program title: GPU4RE; Catalogue identifier: ADZY_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 62 776; No. of bytes in distributed program, including test data, etc.: 1 513 247; Distribution format: tar.gz; Programming language: C++; Computer: x86 PC; Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C

  15. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.

  16. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  17. Selection of process alternatives for lignocellulosic bioethanol production using a MILP approach.

    Science.gov (United States)

    Scott, Felipe; Venturini, Fabrizio; Aroca, Germán; Conejeros, Raúl

    2013-11-01

    This work proposes a decision-making framework for the selection of processes and unit operations for lignocellulosic bioethanol production. Process alternatives are described by their capital and operating expenditures, their contribution to process yield, and technological availability information. A case study in second generation ethanol production using Eucalyptus globulus as raw material is presented to test the developed process synthesis tool. Results indicate that production cost does not necessarily decrease when yield increases. Hence, optimal processes can be found at the inflexion point of total costs and yield. The developed process synthesis tool provides results with an affordable computational cost, existing optimization tools and an easy-to-upgrade description of the process alternatives. These features make this tool suitable for process screening when incomplete information regarding process alternatives is available.
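A brute-force stand-in for the MILP formulation illustrates the cost/yield trade-off the abstract describes: pick one technology per unit operation so as to minimize cost per unit of overall yield. All alternative names, costs, and yield factors below are invented:

```python
from itertools import product

def select_process(stages, min_yield=0.0):
    """Exhaustively choose one alternative (name, cost, yield_factor) per
    unit operation, minimizing total cost divided by overall yield (the
    product of stage yields).  A brute-force analogue of the MILP; fine
    for a handful of stages, where a real flowsheet needs the MILP."""
    best = None
    for combo in product(*stages):
        cost = sum(a[1] for a in combo)
        y = 1.0
        for a in combo:
            y *= a[2]
        if y < min_yield:
            continue
        score = cost / y               # cost per unit of yield
        if best is None or score < best[0]:
            best = (score, combo, y)
    return best

stages = [
    [("dilute-acid pretreat", 10.0, 0.80), ("steam explosion", 14.0, 0.90)],
    [("SHF", 8.0, 0.85), ("SSF", 9.5, 0.92)],
]
score, combo, y = select_process(stages)
```

With these invented numbers the cheaper, lower-yield route wins narrowly, mirroring the paper's observation that production cost does not necessarily fall as yield rises.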

  18. Active microchannel fluid processing unit and method of making

    Science.gov (United States)

    Bennett, Wendy D [Kennewick, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA; Roberts, Gary L [West Richland, WA; Stewart, Donald C [Richland, WA; Tonkovich, Annalee Y [Pasco, WA; Zilka, Jennifer L [Pasco, WA; Schmitt, Stephen C [Dublin, OH; Werner, Timothy M [Columbus, OH

    2001-01-01

    The present invention is an active microchannel fluid processing unit and method of making, both relying on having (a) at least one inner thin sheet; (b) at least one outer thin sheet; (c) defining at least one first sub-assembly for performing at least one first unit operation by stacking a first of the at least one inner thin sheet in alternating contact with a first of the at least one outer thin sheet into a first stack and placing an end block on the at least one inner thin sheet, the at least one first sub-assembly having at least a first inlet and a first outlet; and (d) defining at least one second sub-assembly for performing at least one second unit operation either as a second flow path within the first stack or by stacking a second of the at least one inner thin sheet in alternating contact with second of the at least one outer thin sheet as a second stack, the at least one second sub-assembly having at least a second inlet and a second outlet.

  19. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward a new material system (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable for different engineering applications, and few of those are available in a not yet polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario and a special mathematical tool would be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today’s market. An illustrative example—resin selection
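A weighted-sum attribute score is one simple way to realize such a decision-making tool. This is a generic sketch, not the authors' method, and the resin names, attribute values, and weights are invented:

```python
def rank_resins(alternatives, weights, benefit):
    """Rank alternatives by a weighted sum of min-max normalised attributes.
    `alternatives` is a list of (name, attribute_tuple); `benefit[i]` marks
    whether a higher value of attribute i is better."""
    cols = list(zip(*(vals for _, vals in alternatives)))
    scores = {}
    for name, vals in alternatives:
        s = 0.0
        for i, w in enumerate(weights):
            lo, hi = min(cols[i]), max(cols[i])
            x = 0.5 if hi == lo else (vals[i] - lo) / (hi - lo)
            if not benefit[i]:         # cost-type attribute: lower is better
                x = 1.0 - x
            s += w * x
        scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

# Invented attributes: (melt viscosity, toughness index, processing temp C)
resins = [
    ("CBT/pCBT", (1, 7, 190)),
    ("anionic PA-6", (5, 8, 160)),
    ("PMMA", (300, 5, 220)),
]
ranking = rank_resins(resins, weights=(0.5, 0.3, 0.2),
                      benefit=(False, True, False))
```

Viscosity is weighted most heavily here because infusion requires a very low-viscosity, not-yet-polymerised resin, which is the constraint the abstract highlights.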

  20. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
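The X-engine computation, accumulating the correlation of every input pair over time, reduces to a single matrix product, sketched here in NumPy. The GPU implementation parallelises exactly this across baselines and frequency channels:

```python
import numpy as np

def x_engine(samples):
    """Accumulate the visibility matrix V[i, j] = sum_t x_i(t) * conj(x_j(t))
    for all antenna pairs.  `samples` is complex with shape (time, n_inputs).
    This one matrix product is the kernel a GPU X-engine tiles and pipelines."""
    return samples.T @ samples.conj()
```

The result is Hermitian, so a production correlator computes only the lower (or upper) triangle of baselines.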

  1. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    Purpose/Objective For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method … respiration phases in a free breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results On an Intel Core 2 CPU at 2.4GHz each registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark …

  2. Fast free-form deformation using graphics processing units.

    Science.gov (United States)

    Modat, Marc; Ridgway, Gerard R; Taylor, Zeike A; Lehmann, Manja; Barnes, Josephine; Hawkes, David J; Fox, Nick C; Ourselin, Sébastien

    2010-06-01

    A large number of algorithms have been developed to perform non-rigid registration and it is a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  3. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  4. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
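The all-pairs distance computation used as the article's running example can be written as a data-parallel broadcast, the same pattern a CUDA kernel exploits with one thread per pair:

```python
import numpy as np

def all_pairs_dist(X):
    """Euclidean distance between every pair of rows of X (shape (n, d)),
    computed via broadcasting: one (n, n, d) difference tensor replaces
    the nested loops a serial CPU version would use."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```

The output is a symmetric (n, n) matrix with a zero diagonal; each entry is independent of the others, which is why the problem maps so cleanly onto GPU threads.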

  5. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  6. Application of concept selection methodology in IC process design

    Science.gov (United States)

    Kim, Myung-Kul

    1993-01-01

    A search for an effective methodology that is practical in IC manufacturing process development led to a trial of a quantitative 'concept selection' methodology for selecting the 'best' alternative for interlevel dielectric (ILD) processes. A cross-functional team selected multiple criteria, with scoring guidelines, to be used in the definition of the 'best'. The project was targeted at the 3 level metal backend process for a sub-micron gate array product. The outcome of the project showed that the maturity of the alternatives has a strong influence on the scores, because scores on the adopted criteria such as yield, reliability and maturity will depend on the maturity of a particular process. At the same time, the project took longer than expected since it required data for the multiple criteria. These observations suggest that adopting a simpler procedure that can analyze the total inherent controllability of a process would be more effective. The methodology of the DFS (design for simplicity) tools used in analyzing the manufacturability of such electronics products as computers, phones and other consumer electronics products could be used as an 'analogy' in constructing an evaluation method for IC processes that produce devices used in those electronics products. This could be done by focusing on the basic process operation elements rather than the layers that are being built.

  7. Implementation of Phonetic Context Variable Length Unit Selection Module for Malay Text to Speech

    Directory of Open Access Journals (Sweden)

    Tian-Swee Tan

    2008-01-01

    Full Text Available Problem statement: The main problem with the current Malay Text-To-Speech (MTTS) synthesis system is the poor quality of the generated speech, due to the inability of traditional TTS systems to provide multiple candidate units for generating more accurate synthesized speech. Approach: This study proposes a phonetic context variable length unit selection MTTS system capable of more natural and accurate unit selection for synthesized speech, implementing a phonetic context algorithm for unit selection. A unit selection method without phonetic context may select speech units from different sources, which degrades the quality of concatenation. This study proposes the design of the speech corpus and the unit selection method according to phonetic context, so that a string of continuous phonemes can be selected from the same source instead of individual phonemes from different sources. This further reduces the number of concatenation points and increases the quality of concatenation. The speech corpus was transcribed according to phonetic context to preserve the phonetic information. The method uses word-based concatenation: the system first searches the speech corpus for the target word and, if the word is found, uses it for concatenation; if the word does not exist, the word is constructed from a phoneme sequence. Results: The system was tested with 40 participants in a Mean Opinion Score (MOS) listening test, with average scores for naturalness, pronunciation and intelligibility of 3.9, 4.1 and 3.9. Conclusion/Recommendation: Through this study, a first version of a corpus-based MTTS has been designed that improves the naturalness, pronunciation and intelligibility of the synthetic speech. It still lacks some components, however, such as a prosody module to support phrasing analysis and intonation of the input text to match the waveform modifier.

  8. Integrated watershed-scale response to climate change for selected basins across the United States

    Science.gov (United States)

    Markstrom, Steven L.; Hay, Lauren E.; Ward-Garrison, D. Christian; Risley, John C.; Battaglin, William A.; Bjerklie, David M.; Chase, Katherine J.; Christiansen, Daniel E.; Dudley, Robert W.; Hunt, Randall J.; Koczot, Kathryn M.; Mastin, Mark C.; Regan, R. Steven; Viger, Roland J.; Vining, Kevin C.; Walker, John F.

    2012-01-01

    A study by the U.S. Geological Survey (USGS) evaluated the hydrologic response to different projected carbon emission scenarios of the 21st century using a hydrologic simulation model. This study involved five major steps: (1) set up, calibrate, and evaluate the Precipitation-Runoff Modeling System (PRMS) model in 14 basins across the United States, carried out by local USGS personnel; (2) acquire selected simulated carbon emission scenarios from the World Climate Research Programme's Coupled Model Intercomparison Project; (3) statistically downscale these scenarios to create PRMS input files that reflect the future climatic conditions of these scenarios; (4) generate PRMS projections for the carbon emission scenarios for the 14 basins; and (5) analyze the modeled hydrologic response. This report presents an overview of the study, details of the methodology, results from the 14 basin simulations, and interpretation of these results. A key finding is that the hydrologic response of different geographical regions of the United States to potential climate change may differ, depending on the dominant physical processes of each particular region. Also considered is the tremendous amount of uncertainty present in the carbon emission scenarios and how this uncertainty propagates through the hydrologic simulations.

  9. Fundamental Aspects of Selective Melting Additive Manufacturing Processes

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miller, James E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    Certain details of the additive manufacturing process known as selective laser melting (SLM) affect the performance of the final metal part. To unleash the full potential of SLM it is crucial that the process engineer in the field receives guidance about how to select values for a multitude of process variables employed in the building process. These include, for example, the type of powder (e.g., size distribution, shape, type of alloy), the orientation of the build axis, the beam scan rate, the beam power density, and the scan pattern. The science-based selection of these settings constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy, and reactive, dynamic wetting followed by re-solidification. In addition, inherent to the process is its considerable variability, which stems from the powder packing. Each time a limited number of powder particles are placed, the stacking is intrinsically different from the previous one, possessing a different geometry and a different set of contact areas with the surrounding particles. As a result, even if all other process parameters (scan rate, etc.) are exactly the same, the shape, contact geometry, and area of the final melt pool will be unique to that particular configuration. This report identifies the most important issues facing SLM, discusses the fundamental physics associated with it, and points out how modeling can support additive manufacturing efforts.

  10. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time with various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (General-Purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.
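
    The frequency-domain kernel at the heart of such a processing chain is matched filtering (pulse compression), the role cuFFT plays on the GPU. Below is a minimal pure-Python sketch, not the paper's code: a naive DFT stands in for cuFFT, and the eight-sample waveform is invented for illustration.

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) DFT, standing in for cuFFT on tiny inputs.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def pulse_compress(echo, waveform):
    # Matched filter via the frequency domain: multiply the echo spectrum
    # by the conjugate waveform spectrum, then inverse-transform.
    E, W = dft(echo), dft(waveform)
    return dft([e * w.conjugate() for e, w in zip(E, W)], inverse=True)

wf = [1, 1, -1, 0, 0, 0, 0, 0]    # transmitted waveform (made up)
echo = [0, 0, 1, 1, -1, 0, 0, 0]  # the same waveform delayed by 2 samples
mag = [abs(v) for v in pulse_compress(echo, wf)]
peak = max(range(len(mag)), key=lambda i: mag[i])  # index of the delay
```

    The compressed output peaks at lag 2, recovering the delay; on a GPU the same multiply-and-conjugate structure is applied across whole data cubes at once.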

  11. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from the norm when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
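
    The computational core being accelerated is a truncated SVD of the term-document matrix. As a hedged, pure-Python illustration (power iteration on AᵀA, not the CUDA implementation), the leading singular value and concept vector of a tiny invented term-document matrix can be computed like this:

```python
import math
import random

def leading_singular(A, iters=200, seed=0):
    # Power iteration on A^T A: a minimal stand-in for the truncated SVD
    # that LSA uses to project documents onto latent "concept" axes.
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    sigma = math.sqrt(sum(x * x for x in Av))
    return sigma, v

A = [[1, 1, 0],   # toy term-document counts (rows: terms, cols: documents)
     [1, 1, 0],
     [0, 0, 1]]
sigma, concept = leading_singular(A)
```

    Documents 0 and 1 load on the same concept axis while document 2 does not; the GPU version performs the equivalent dense linear algebra through CUBLAS-style routines.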

  12. Intermediate product selection and blending in the food processing industry

    DEFF Research Database (Denmark)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates are blended into end products to minimise the total operational costs under production and storage capacity limitations. A comprehensive mixed-integer linear model is developed for the problem. The model is applied on a data set collected from a real-life case. The trade-offs between capacity limitations and operational costs are analysed, and the effects of different types of cost parameters and capacity limitations on the selection of intermediates and end-product recipes are investigated.
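
    A toy version of the selection-and-blending trade-off can be sketched by brute force. All numbers below (setup costs, recipes, blend costs) are invented for illustration; the paper formulates a full mixed-integer linear model with production and storage capacities rather than this enumeration:

```python
from itertools import combinations

# Hypothetical instance: each intermediate has a setup cost, and each
# end product can be made from alternative recipes over the selected set.
setup_cost = {"I1": 4.0, "I2": 3.0, "I3": 5.0}
# recipes[product] = list of (required intermediates, blending cost)
recipes = {
    "P1": [({"I1"}, 2.0), ({"I2", "I3"}, 1.0)],
    "P2": [({"I2"}, 2.5), ({"I1", "I3"}, 1.5)],
}

def best_selection():
    # Enumerate subsets of intermediates; fine at toy scale, whereas the
    # paper's MILP handles realistic instances.
    best = (float("inf"), None)
    items = list(setup_cost)
    for r in range(1, len(items) + 1):
        for subset in combinations(items, r):
            chosen = set(subset)
            total = sum(setup_cost[i] for i in chosen)
            feasible = True
            for product, alts in recipes.items():
                usable = [c for need, c in alts if need <= chosen]
                if not usable:
                    feasible = False
                    break
                total += min(usable)  # cheapest recipe available
            if feasible and total < best[0]:
                best = (total, chosen)
    return best

cost, selection = best_selection()
```

    Here the cheapest feasible plan costs 11.5 using intermediates I1 and I2; adding I3 enables cheaper blends but its setup cost is not recouped, which is exactly the trade-off the model explores.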

  13. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ·cm² for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.
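
    The role of the cited parameters (membrane resistivity, synaptic conductance) and the finding that synapses do not sum linearly can be illustrated with a deliberately simplified single-compartment sketch. Units and magnitudes here are schematic assumptions, not the authors' full 3-D compartmental model:

```python
C_M = 1.0        # membrane capacitance (schematic units)
R_M = 5.0        # membrane resistance (cf. the 5.0 kOhm*cm^2 resistivity)
E_REST = -70.0   # resting potential, mV
E_SYN = 0.0      # synaptic reversal potential, mV

def steady_depolarization(g_syn, t_end=50.0, dt=0.01):
    # Forward-Euler integration of C dV/dt = (E_REST - V)/R_M + g_syn*(E_SYN - V).
    v = E_REST
    for _ in range(int(t_end / dt)):
        v += dt * ((E_REST - v) / R_M + g_syn * (E_SYN - v)) / C_M
    return v - E_REST  # depolarization relative to rest

depol_one = steady_depolarization(g_syn=1.0)  # one synapse active
depol_two = steady_depolarization(g_syn=2.0)  # two identical synapses active
```

    Because every synaptic conductance pulls the membrane toward the same reversal potential, the second synapse adds less than the first (depol_two < 2 × depol_one), mirroring conclusion (4) that synapses do not sum linearly.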

  14. Process for selecting polyhydroxyalkanoate (PHA) producing micro-organisms

    NARCIS (Netherlands)

    Van Loosdrecht, M.C.M.; Kleerebezem, R.; Jian, Y.; Johnson, K.

    2009-01-01

    The invention relates to a process for selecting a polyhydroxyalkanoate (PHA) producing micro-organism from a natural source comprising a variety of micro-organisms, comprising steps of preparing a fermentation broth comprising the natural source and nutrients in water; creating and maintaining

  16. Efficiency and Effectiveness of a Resident Assistant Selection Process.

    Science.gov (United States)

    Broitman, Thomas

    The American phenomenon of "more is better" extends a value-loaded concept implicit in budget preparation. At any university, the scope, magnitude and cost of a residence hall assistant program selection process is a metaphor to illustrate efficiency and effectiveness of human resources. In order to discover a more efficient and…

  17. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. Accuracy evaluation by comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement depending on the graphics card used, the problem size, and the precision when compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.
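
    The single- versus double-precision comparison can be mimicked on a toy stencil. The sketch below is an assumed CPU-only analogue, not GPU-MHD itself: it advects a pulse with a first-order upwind scheme (a stand-in for an MHD update), emulates float32 by rounding every intermediate result, and measures the drift from the float64 run:

```python
import struct

def to_f32(x):
    # Round a Python float (double) to the nearest IEEE-754 single.
    return struct.unpack('f', struct.pack('f', x))[0]

def advect(u, c=0.5, steps=100, single=False):
    # First-order upwind advection on a periodic grid (u[-1] wraps around).
    rnd = to_f32 if single else (lambda x: x)
    u = [rnd(v) for v in u]
    for _ in range(steps):
        u = [rnd(u[i] - c * (u[i] - u[i - 1])) for i in range(len(u))]
    return u

init = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]
u64 = advect(init)                 # double precision
u32 = advect(init, single=True)    # emulated single precision
max_diff = max(abs(a - b) for a, b in zip(u64, u32))
```

    The scheme conserves the total of u exactly, while the emulated single-precision run accumulates small rounding differences, the kind of discrepancy the authors quantify between GPU precision modes.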

  18. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU with excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations and would seem a natural fit for the processing power of the GPU. Our work is on a GPU accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low level linear algebra routines.
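
    The workhorse kernel in such solvers is the sparse matrix-vector product over a compressed sparse row (CSR) matrix. A minimal CPU sketch of the layout (GPU implementations assign rows, or chunks of rows, to threads):

```python
def csr_matvec(data, indices, indptr, x):
    # y = A x for A stored in CSR form: data holds nonzeros row by row,
    # indices their column positions, indptr the start of each row.
    y = []
    for row in range(len(indptr) - 1):
        acc = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            acc += data[k] * x[indices[k]]
        y.append(acc)
    return y

# 3x3 example:  [[2, 0, 1],
#                [0, 3, 0],
#                [4, 0, 5]]
data = [2.0, 1.0, 3.0, 4.0, 5.0]
indices = [0, 2, 1, 0, 2]
indptr = [0, 2, 3, 5]
y = csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0])
```

    Each row's dot product is independent, which is why this kernel maps well to the GPU, while the irregular `indices` lookups are what make it harder to optimize than dense routines.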

  19. GENETIC ALGORITHM ON GENERAL PURPOSE GRAPHICS PROCESSING UNIT: PARALLELISM REVIEW

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2013-01-01

    Full Text Available Genetic Algorithm (GA) is an effective and robust method for solving many optimization problems. However, it may take many runs (iterations) and considerable time to reach an optimal solution. The execution time needed to find the optimal solution also depends upon the niching technique applied to the evolving population. This paper surveys how various authors, researchers, and scientists have implemented GA on GPGPUs (general-purpose graphics processing units), with and without parallelism. Many problems have been solved on GPGPUs using GA. GA is easy to parallelize because of its SIMD nature and can therefore be implemented well on a GPGPU. Thus, speedup can definitely be achieved if the bottlenecks in GAs are identified and implemented effectively on a GPGPU. The paper reviews various applications solved using GAs on GPGPUs, with future scope in the area of optimization.
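
    To make the SIMD-friendly structure concrete, here is a toy generational GA on the OneMax problem (maximize the number of 1 bits); all parameters are invented. The per-individual fitness evaluation inside each generation is the embarrassingly parallel step that maps onto a GPGPU:

```python
import random

def ga_onemax(bits=20, pop_size=40, gens=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        # Fitness evaluation: independent per individual, hence GPU-friendly.
        scored = sorted(pop, key=sum, reverse=True)
        parents = scored[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # occasional bit-flip mutation
                child[rng.randrange(bits)] ^= 1
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)

best = ga_onemax()  # approaches the optimum of 20 within 80 generations
```

    On a GPU, the fitness loop becomes one thread per individual (or per gene), while selection and crossover are the steps whose data movement limits the achievable speedup.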

  20. Centralization of Intensive Care Units: Process Reengineering in a Hospital

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    2010-03-01

    Full Text Available Centralization of intensive care units (ICUs) is a concept that has been around for several decades, and the OECD countries have led the way in adopting it in their operations. Singapore Hospital was built in 1981, before the concept of centralization of ICUs took off. The hospital's ICUs were never centralized and are spread out across eight different blocks according to the specialization they are associated with. Recognizing the new concept of centralization and its benefits, the hospital acknowledges the importance of having a centralized ICU to better handle major disasters. Using simulation models, this paper attempts to study the feasibility of centralizing the ICUs in Singapore Hospital, subject to space constraints. The results will prove helpful to those who consider reengineering the intensive care process in hospitals.

  1. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
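
    For reference, the sequential building block underneath such codes is the Metropolis single-spin-flip update, shown here for the 2D Ising model with assumed parameters; the paper's GPU algorithms parallelize it over non-interacting (checkerboard) sublattices:

```python
import math
import random

def metropolis_ising(L=8, T=1.0, sweeps=200, seed=2):
    # Single-spin-flip Metropolis for the 2D Ising model (J = 1, periodic
    # boundaries), starting from the fully ordered state.
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return abs(sum(sum(row) for row in s)) / (L * L)  # magnetization per spin

m = metropolis_ising()  # T = 1.0 is well below T_c, so order persists
```

    Because a flip only touches four neighbours, all spins of one checkerboard colour can be updated simultaneously, which is the independence the GPU kernels exploit.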

  2. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the results observed agree perfectly with computations on the CPU. Therefore, our single GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters and they can also be used as a basis to develop parallel GPU programs to further spee...
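
    The integrator such codes parallelize over particles is typically velocity Verlet. A one-particle harmonic-oscillator sketch (assumed parameters, not the authors' macromolecular code) shows the structure and its hallmark near-conservation of energy:

```python
def velocity_verlet(x0=1.0, v0=0.0, k=1.0, m=1.0, dt=0.01, steps=1000):
    # Velocity Verlet for a 1D harmonic oscillator; GPU MD codes apply the
    # same drift / force-evaluation / kick structure to every particle.
    x, v = x0, v0
    a = -k * x / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt      # drift
        a_new = -k * x / m                   # force evaluation
        v += 0.5 * (a + a_new) * dt          # average kick
        a = a_new
    return x, v

x, v = velocity_verlet()
energy = 0.5 * v * v + 0.5 * x * x  # stays near the initial value of 0.5
```

    The force evaluation dominates the cost in real MD and is the step offloaded to GPU threads; the integrator itself is embarrassingly parallel across particles.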

  3. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)
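
    The reported uniformity of spin-vector dot products has a simple null model: for two independent directions drawn uniformly on the sphere, the dot product is uniform on [-1, 1] with mean zero. A small Monte Carlo sketch of that baseline (not the post-Newtonian evolution itself):

```python
import math
import random

def random_unit_vector(rng):
    # Rejection sampling: uniform direction on the unit sphere.
    while True:
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        r2 = x * x + y * y + z * z
        if 0 < r2 <= 1:
            r = math.sqrt(r2)
            return (x / r, y / r, z / r)

def spin_dot_samples(n=20000, seed=3):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a = random_unit_vector(rng)
        b = random_unit_vector(rng)
        out.append(sum(p * q for p, q in zip(a, b)))
    return out

dots = spin_dot_samples()
mean = sum(dots) / len(dots)  # near zero for uncorrelated spins
```

    A distribution of merger-time dot products matching this baseline indicates no preferred alignment, which is the comparison behind the paper's uniformity statement.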

  4. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general purpose computations. Here we show for the first time a possible application of GPU for environmental studies serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and the transformation of the radionuclides from a single point source during an accidental release. Our results show that parallel implementation achieves typical acceleration values in the order of 80-120 times compared to CPU using a single-threaded implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...

  5. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous components into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  6. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) makes them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires...... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  7. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environmentally friendly.

  8. Graphics Processing Units and High-Dimensional Optimization.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A

    2010-08-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100 fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on-board.
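
    Of the demonstrated applications, nonnegative matrix factorization illustrates the parameter-separated updates the authors favour: each entry's multiplicative update is independent, which is what makes the GPU mapping effective. A pure-Python sketch of Lee-Seung updates on an invented rank-2 matrix (not the authors' GPU code):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf_error(V, rank=2, iters=500, seed=4):
    # Lee-Seung multiplicative updates for V ~ W H; every update is dense
    # matrix arithmetic plus elementwise work, both GPU-friendly.
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    eps = 1e-9
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2 for i in range(m) for j in range(n))

# An exactly rank-2 nonnegative matrix should be fit almost perfectly.
V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 3.0, 5.0]]
err = nmf_error(V)
```

    The updates never mix parameters across entries, the separation property the paper highlights for EM/MM-style algorithms on GPUs.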

  9. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.
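
    The O(n²) neighbourhood pass that dominates the cost can be sketched as below. This toy version implements only a cohesion rule with velocity damping on four points; the document-clustering variant additionally weights neighbours by document similarity, so that similar documents flock together:

```python
def flock_step(pos, vel, r=2.0, cohesion=0.05, damping=0.9, dt=1.0):
    # One O(n^2) pass: every boid steers toward the centroid of its
    # neighbours within radius r. This all-pairs scan is the kernel the
    # GPU parallelizes, one thread per boid.
    new_pos, new_vel = [], []
    for i in range(len(pos)):
        nx = ny = cnt = 0
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            if dx * dx + dy * dy < r * r:
                nx += dx; ny += dy; cnt += 1
        vx, vy = vel[i]
        if cnt:
            vx += cohesion * nx / cnt
            vy += cohesion * ny / cnt
        vx *= damping; vy *= damping
        new_vel.append((vx, vy))
        new_pos.append((pos[i][0] + vx * dt, pos[i][1] + vy * dt))
    return new_pos, new_vel

def spread(ps):
    cx = sum(p[0] for p in ps) / len(ps)
    cy = sum(p[1] for p in ps) / len(ps)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in ps)

pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vel = [(0.0, 0.0)] * 4
spread_before = spread(pos)
for _ in range(10):
    pos, vel = flock_step(pos, vel)
spread_after = spread(pos)  # mutual neighbours contract into a cluster
```

    Because each boid's scan is independent, the pass parallelizes cleanly; the quadratic work per step is exactly what made the GPU port worthwhile.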

  10. Implementing wide baseline matching algorithms on a graphics processing unit.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
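
    The difference-of-Gaussians (DoG) filter itself is simple to state; the GPU win comes from evaluating it over every pixel in parallel. A 1-D pure-Python sketch (assumed scales σ and 1.6σ, not the CUDA implementation) responding to a step edge:

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    # Direct convolution with clamped (replicated) borders.
    r, n = len(kernel) // 2, len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma=1.0, k=1.6):
    # Difference of Gaussians: blur at two nearby scales and subtract,
    # giving a band-pass response that is largest near intensity edges.
    g1 = convolve(signal, gaussian_kernel(sigma, 4))
    g2 = convolve(signal, gaussian_kernel(k * sigma, 6))
    return [a - b for a, b in zip(g1, g2)]

step = [0.0] * 16 + [1.0] * 16   # a step edge between indices 15 and 16
response = dog(step)
extremum = max(range(len(response)), key=lambda i: abs(response[i]))
```

    The response is exactly zero in flat regions and peaks on either side of the edge; SIFT-style extractors locate extrema of such responses across positions and scales, one GPU thread per output sample.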

  11. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts of adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy

  12. Selective CO Methanation Catalysts for Fuel Processing Applications

    Energy Technology Data Exchange (ETDEWEB)

    Dagle, Robert A.; Wang, Yong; Xia, Guanguang G.; Strohm, James J.; Holladay, Jamie D.; Palo, Daniel R.

    2007-07-15

    Selective CO methanation as a strategy for CO removal in micro fuel processing applications was investigated over Ru-based catalysts. Ru loading, pretreatment and reduction conditions, and choice of support were shown to affect catalyst activity, selectivity, and stability. Even operating at a gas hourly space velocity as high as 13,500 h-1, a 3% Ru/Al2O3 catalyst was able to lower CO in a reformate to less than 100 ppm over a wide temperature range from 240 °C to 285 °C, while keeping hydrogen consumption below 10%.

  13. The perfect photo book: hints for the image selection process

    Science.gov (United States)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2007-01-01

    An ever-increasing number of digital images is being captured. This increase has several causes. People are afraid of not "capturing the moment", and pressing the shutter is not directly linked to costs, as was the case with silver halide photography. This behaviour seems convenient but can result in a dilemma for the consumer. This paper presents tools designed to help the consumer overcome the time-consuming image selection process, turning the chore of selecting images for prints or placing them automatically into a photo book into a fun experience.

  14. Amorphous solid dispersions: Rational selection of a manufacturing process.

    Science.gov (United States)

    Vasconcelos, Teófilo; Marques, Sara; das Neves, José; Sarmento, Bruno

    2016-05-01

    Amorphous products, and particularly amorphous solid dispersions, are currently one of the most exciting areas in the pharmaceutical field. This approach presents huge potential and advantageous features concerning the overall improvement of drug bioavailability. Currently, different manufacturing processes are being developed to produce amorphous solid dispersions with suitable robustness and reproducibility, ranging from solvent evaporation to melting processes. In the present paper, laboratory- and industrial-scale processes are reviewed, and guidelines for a rational selection of manufacturing processes are proposed. This would ensure adequate development (laboratory scale) and production according to good manufacturing practices (GMP) (industrial scale) of amorphous solid dispersions, with further implications for process validation and the drug development pipeline. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Evaluation of the selective detachment process in flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V. [University of Kentucky, Lexington, KY (United States). Dept. for Mining Engineering

    2003-10-01

    The improved selectivity between particles of varying degrees of hydrophobicity in flotation froths has been well documented in the literature, especially in the deep froths utilized in flotation columns. The phenomenon is believed to be due to the selective detachment process whereby the least hydrophobic particles are released from the bubble surface upon bubble coalescence. To quantify the selective detachment process, column flotation experiments were performed under various operating conditions that provided varying amounts of reflux between the froth and collection zones. The flotation column incorporated the ability to provide instantaneous stoppage of the process streams and separation between the collection and froth zones after ensuring steady-state operation of the column. The samples collected from the two zones and process streams were evaluated to quantify the flotation rate distribution of the particles comprising each sample. The flotation rate was used as an indicator of the degree of hydrophobicity and thus a relative measure of the binding force between the particle and bubble in the froth zone. The flotation rate data were used as input into well-known flotation models to obtain the froth zone recovery rate and the quantity of material that refluxes between the collection and froth zones.

  16. Selective CO methanation catalysts for fuel processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Dagle, Robert A.; Wang, Yong; Xia, Guan-Guang; Strohm, James J.; Holladay, Jamelyn [Pacific Northwest National Laboratory, 902 Battle Boulevard, P.O. Box 999, Richland, WA 99352 (United States); Palo, Daniel R. [Pacific Northwest National Laboratory, 902 Battle Boulevard, P.O. Box 999, Richland, WA 99352 (United States); Microproducts Breakthrough Institute, P.O. Box 2330, Corvallis, OR 97339 (United States)

    2007-07-15

    Selective CO methanation as a strategy for CO removal in fuel processing applications was investigated over Ru-based catalysts. Ru metal loading and crystallite size were shown to affect catalyst activity and selectivity. Even operating at a gas hourly space velocity as high as 13,500 h-1, a 3% Ru/Al2O3 catalyst with a 34.2 nm crystallite was shown to be capable of reducing CO in a reformate to less than 100 ppm over a wide temperature range from 240 to 280 °C, while keeping hydrogen consumption below 10%. We present the effects of metal loading, preparation method, and crystallite size on performance for Ru-based catalysts in the selective methanation of CO in the presence of H2 and CO2. (author)

  17. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00384270; The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction before the start of the High Level Trigger. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card; the second is the Second Stage board. The associative memories perform pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  18. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction before the start of the High Level Trigger. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card; the second is the Second Stage board. The associative memories perform pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  19. Critical-like self-organization and natural selection: two facets of a single evolutionary process?

    Science.gov (United States)

    Halley, Julianne D; Winkler, David A

    2008-05-01

    We argue that critical-like dynamics self-organize relatively easily in non-equilibrium systems, and that in biological systems such dynamics serve as templates upon which natural selection builds further elaborations. These critical-like states can be modified by natural selection in two fundamental ways, reflecting the selective advantage (if any) of heritable variations either among avalanche participants or among whole systems. First, reproducing (avalanching) units can differentiate, as units adopt systematic behavioural variations. Second, whole systems that are exposed to natural selection can become increasingly or decreasingly critical. We suggest that these interactions between SOC-like dynamics and natural selection have profound consequences for biological systems because they could have facilitated the evolution of division of labour, compartmentalization and computation, key features of biological systems. The logical conclusion of these ideas is that the fractal geometry of nature is anything but coincidental, and that natural selection is itself a fractal process, occurring on many temporal and spatial scales.

  20. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). This paper has several goals regarding distributed processing.

  1. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performances, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massive parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  2. Decision making software for effective selection of treatment train alternative for wastewater using analytical hierarchy process.

    Science.gov (United States)

    Prasad, A D; Tembhurkar, A R

    2013-10-01

    Proper selection of a treatment process and synthesis of a treatment train is a complex engineering activity that requires crucial decision making during the planning and design of any Wastewater Treatment Plant (WWTP). Earlier studies on process selection mainly considered cost as the most important selection criterion, and a number of studies focused on cost optimization models using dynamic programming, geometric programming and nonlinear programming. However, it has been noticed that traditional cost analysis alone cannot be applied to evaluate Treatment Train (TT) alternatives, as a number of important intangible factors cannot easily be expressed in monetary units. Recent research has focused on the use of multi-criteria techniques for the selection of treatment processes. AHP provides a powerful tool for multi-hierarchy and multi-variable systems, overcoming the limitations of traditional techniques. An AHP model designed to facilitate proper decision making and reduce the margin of error arising from the number of parameters in the hierarchy levels was used in this study. About 14 important factors and 13 sub-factors were identified for the selection of treatment alternatives for the wastewater and sludge streams, although cost remains one of the most important selection criteria. The present paper provides details of the development of a software tool called "ProSelArt", based on this AHP model, to aid decision making.
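
    The core AHP computation underlying such a tool can be sketched briefly: the priority vector is the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the coherence of the judgments. The criteria and matrix values below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical 3x3 pairwise comparison matrix for three treatment-train
    # criteria (e.g. cost, reliability, land requirement) on Saaty's 1-9 scale.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 3.0],
        [1/5, 1/3, 1.0],
    ])

    # Priority vector: principal eigenvector of A, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()

    # Consistency ratio CR = CI / RI with CI = (lambda_max - n) / (n - 1);
    # RI = 0.58 is Saaty's random index for n = 3. CR < 0.1 is acceptable.
    n = A.shape[0]
    lambda_max = eigvals.real[k]
    cr = ((lambda_max - n) / (n - 1)) / 0.58

    print(w, cr)
    ```

    For this matrix the first criterion dominates (weight about 0.63) and the judgments are consistent (CR well below 0.1), so the ranking can be trusted.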

  3. Selection processes in a citrus hybrid population using RAPD markers

    Directory of Open Access Journals (Sweden)

    Oliveira Roberto Pedroso de

    2003-01-01

    The objective of this work was to evaluate the processes of selection in a citrus hybrid population using segregation analysis of RAPD markers. The segregation of 123 RAPD markers between 'Cravo' mandarin (Citrus reticulata Blanco) and 'Pêra' sweet orange (C. sinensis (L.) Osbeck) was analysed in an F1 progeny of 94 hybrids. Genetic composition, diversity, heterozygosity, differences in chromosomal structure and the presence of deleterious recessive genes are discussed based on the segregation ratios obtained. A high percentage of markers deviated from the expected 1:1 segregation ratio in the F1 population. Many markers showed a 3:1 segregation ratio in both varieties and 1:3 in 'Pêra' sweet orange, probably due to directional selection processes. The distribution analysis of the frequencies of the segregant markers in a hybrid population is a simple method that allows a better understanding of the genetics of the citrus group.
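
    Deviation from an expected segregation ratio of the kind described above is conventionally assessed with a chi-square goodness-of-fit test. The sketch below uses hypothetical marker counts, not data from this study.

    ```python
    def chi_square_1df(observed_a, observed_b, ratio=(1, 1)):
        """Chi-square statistic (1 d.f.) for a two-class segregation ratio
        such as 1:1 or 3:1 (present : absent)."""
        total = observed_a + observed_b
        exp_a = total * ratio[0] / sum(ratio)
        exp_b = total * ratio[1] / sum(ratio)
        return (observed_a - exp_a) ** 2 / exp_a + (observed_b - exp_b) ** 2 / exp_b

    # Hypothetical counts for one RAPD marker in 94 hybrids: 68 present, 26 absent.
    chi_1to1 = chi_square_1df(68, 26, (1, 1))  # test against 1:1
    chi_3to1 = chi_square_1df(68, 26, (3, 1))  # test against 3:1
    # Critical value at alpha = 0.05 with 1 d.f. is 3.841: here the 1:1
    # hypothesis is rejected while 3:1 is not, i.e. a skewed marker.
    print(chi_1to1, chi_3to1)
    ```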

  4. Laser Process for Selective Emitter Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    G. Poulain

    2012-01-01

    Selective emitter solar cells can provide a significant increase in conversion efficiency. However, current approaches need many technological steps and alignment procedures. This paper reports on a preliminary attempt to reduce the number of processing steps and therefore the cost of selective emitter cells. In the developed procedure, a phosphorous glass covered with silicon nitride acts as the doping source. A laser is used to locally open the antireflection coating and at the same time achieve local phosphorus diffusion. In this process the standard chemical etching of the phosphorous glass is avoided. Sheet resistance variation from 100 Ω/sq to 40 Ω/sq is demonstrated with a nanosecond UV laser. Numerical simulation of the laser-matter interaction is discussed to understand the dopant diffusion efficiency. Preliminary solar cell results show a 0.5% improvement compared with a homogeneous emitter structure.

  5. Effects of Knowledge, Attitudes, and Practices of Primary Care Providers on Antibiotic Selection, United States

    OpenAIRE

    Sanchez, Guillermo V.; Roberts, Rebecca M.; Albert, Alison P.; Darcia D. Johnson; Hicks, Lauri A.

    2014-01-01

    Appropriate selection of antibiotic drugs is critical to optimize treatment of infections and limit the spread of antibiotic resistance. To better inform public health efforts to improve prescribing of antibiotic drugs, we conducted in-depth interviews with 36 primary care providers in the United States (physicians, nurse practitioners, and physician assistants) to explore knowledge, attitudes, and self-reported practices regarding antibiotic drug resistance and antibiotic drug selection for ...

  6. Ancilla-less selective and efficient quantum process tomography

    CERN Document Server

    Schmiegelow, Christian Tomás; Larotonda, Miguel Antonio; Paz, Juan Pablo

    2011-01-01

    Several methods, known as Quantum Process Tomography, are available to characterize the evolution of quantum systems, a task of crucial importance. However, their complexity dramatically increases with the size of the system. Here we present the theory describing a new type of method for quantum process tomography. We describe an algorithm that can be used to selectively estimate any parameter characterizing a quantum process. Unlike any of its predecessors this new quantum tomographer combines two main virtues: it requires investing a number of physical resources scaling polynomially with the number of qubits and at the same time it does not require any ancillary resources. We present the results of the first photonic implementation of this quantum device, characterizing quantum processes affecting two qubits encoded in heralded single photons. Even for this small system our method displays clear advantages over the other existing ones.

  7. SELECTION AND PROMOTION PROCESS TO SUPERVISORY POSITIONS IN MEXICO, 2015

    Directory of Open Access Journals (Sweden)

    José Guadalupe Hernández López

    2015-12-01

    Mexico is starting a process of selection and promotion of teachers to supervisory positions through what have been called competitive examinations. This competition, derived from the Education Reform of 2013, is justified as a means of finding the best teachers to fill these positions. As a "new" process in the Mexican education system, it has led to a series of disputes, since the examination was confined to the application and resolution of a standardized test consisting of multiple-choice questions, applied in a single eight-hour session, which determines whether a teacher is qualified or not qualified for the job.

  8. Selective blockade of microRNA processing by Lin-28

    OpenAIRE

    Viswanathan, Srinivas R.; Daley, George Q.; Gregory, Richard I.

    2008-01-01

    MicroRNAs (miRNAs) play critical roles in development, and dysregulation of miRNA expression has been observed in human malignancies. Recent evidence suggests that the processing of several primary miRNA transcripts (pri-miRNAs) is blocked post-transcriptionally in embryonic stem (ES) cells, embryonal carcinoma (EC) cells, and primary tumors. Here we show that Lin-28, a developmentally regulated RNA-binding protein, selectively blocks the processing of pri-let-7 miRNAs in embryonic cells. Usi...

  9. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria is proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
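
    Information-criterion model selection of the kind proposed here can be illustrated with the Akaike Information Criterion for least-squares fits: the model with the lower AIC is preferred, penalizing extra coefficients. The data and candidate models below are hypothetical, not from the paper.

    ```python
    import numpy as np

    def aic_ols(X, y):
        """AIC for an ordinary least-squares fit under a Gaussian likelihood.
        k counts the regression coefficients plus the error variance."""
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        k = p + 1
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(0)
    # Hypothetical mixture data: the response depends on x1 and x2 only.
    x1, x2, x3 = rng.random((3, 40))
    y = 2 * x1 + 3 * x2 + 0.05 * rng.standard_normal(40)

    X_small = np.column_stack([x1, x2])          # true model
    X_full = np.column_stack([x1, x2, x3])       # adds an irrelevant covariate
    print(aic_ols(X_small, y), aic_ols(X_full, y))
    ```

    The AIC of the two-term model is far below that of a model using the irrelevant covariate alone, so the criterion recovers the correct structure without relying on coefficient significance tests.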

  10. Thermochemical Process Development Unit: Researching Fuels from Biomass, Bioenergy Technologies (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2009-01-01

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a unique facility dedicated to researching thermochemical processes to produce fuels from biomass.

  11. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    National Research Council Canada - National Science Library

    Sungki Kim; Wonil Ko; Sungsig Bang

    2015-01-01

    ...) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method...

  12. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
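
    The central idea, recasting a quantity as a high-dimensional integral estimated by random sampling, with a statistical uncertainty that shrinks as 1/√N, can be sketched generically. This is a toy illustration of Monte Carlo integration, not the MC-MP2 algorithm itself.

    ```python
    import math
    import random

    def mc_integrate(f, dim, n_samples, seed=0):
        """Plain Monte Carlo estimate of the integral of f over the unit
        hypercube [0,1]^dim, with a standard-error estimate."""
        rng = random.Random(seed)
        total = total_sq = 0.0
        for _ in range(n_samples):
            x = [rng.random() for _ in range(dim)]
            fx = f(x)
            total += fx
            total_sq += fx * fx
        mean = total / n_samples
        var = total_sq / n_samples - mean * mean
        return mean, (var / n_samples) ** 0.5

    # Integral of x1*x2*...*x6 over [0,1]^6 is (1/2)^6 = 0.015625 exactly.
    est, err = mc_integrate(lambda x: math.prod(x), 6, 20000)
    print(est, err)
    ```

    Halving the standard error requires four times as many samples, which is why near-independent integrations distribute so naturally over many GPUs.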

  13. Clinical teaching of student nurses by unit managers of selected hospitals in Limpopo Province

    Directory of Open Access Journals (Sweden)

    LA Murathi

    2005-09-01

    The comprehensive nature of nurse training requires the involvement of almost all health team personnel, including unit managers, if students are to gain practical experience and learn to correlate theory and practice. The overall aim of the study was to explore and describe the experiences of unit managers regarding the teaching of student nurses in the clinical area, and to develop recommendations that will enhance clinical teaching for the production of competent future nurse practitioners who will render quality care to patients. A qualitative design, explorative, descriptive and contextual in nature, was employed, utilizing a phenomenological approach to capture the experiences of unit managers regarding the teaching of student nurses at selected hospitals where students are allocated for their clinical exposure. Ethical measures as well as measures to ensure trustworthiness were adhered to. In-depth phenomenological interviews were conducted with unit managers, who shared their experiences regarding the clinical teaching of student nurses. Data analysis was done according to Tesch's (1990) open coding method. One major theme emerged, namely that unit managers experienced problems with the clinical teaching of student nurses. Based on the findings, the following recommendations were made: colleges should open two-way communication with unit managers; unit managers should be involved in college activities such as courses, seminars and workshops on clinical teaching; learning contracts should be developed for students and issues of clinical learning addressed; and unit managers should be included in both summative and formative evaluations.

  14. Telecommunications Research in the United States and Selected Foreign Countries: A Preliminary Survey. Volume I, Summary.

    Science.gov (United States)

    National Academy of Engineering, Washington, DC. Committee on Telecommunications.

    At the request of the National Science Foundation, the Panel on Telecommunications Research of the Committee on Telecommunications of the National Academy of Engineering has made a preliminary survey of the status and trends of telecommunications research in the United States and selected foreign countries. The status and trends were identified by…

  15. Advanced Beef Unit for Advanced Livestock Production Curriculum. Selected Readings. AGDEX 420/00.

    Science.gov (United States)

    Sparks, Jim; Stewart, Bob R.

    These selected readings are designed to supplement James Gillespie's "Modern Livestock and Poultry Production" (2nd edition) as the student reference for the advanced beef unit. The 15 lessons build on Agricultural Science I and II competencies. Topics of the 15 lessons are: importance of the beef enterprise; cost of beef production;…

  16. Advanced Dairy Unit for Advanced Livestock Production Curriculum. Selected Readings. AGDEX 410/00.

    Science.gov (United States)

    Coday, Stan; Stewart, Bob R.

    These selected readings are designed to supplement James Gillespie's "Modern Livestock and Poultry Production" (2nd edition) as the student reference for the advanced dairy unit. Readings are provided for 18 lessons. Topics include profitability of the dairy enterprise; production costs for dairy; comparative advantages of dairy; milk…

  17. Scoring system for the selection of high-risk patients in the intensive care unit

    NARCIS (Netherlands)

    Iapichino, G; Mistraletti, G; Corbella, D; Bassi, G; Borotto, E; Miranda, DR; Morabito, A

    2006-01-01

    Objective. Patients admitted to the intensive care unit greatly differ in severity and intensity of care. We devised a system for selecting high-risk patients that reduces bias by excluding low-risk patients and patients with an early death irrespective of the treatment. Design: A posteriori analysi

  18. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPUs) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
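
    The Elias gamma code mentioned at the end stores a positive integer n as ⌊log2 n⌋ zeros followed by n's binary digits; since sorted feature indices in a lossless fingerprint have small gaps, the code is very compact. A minimal encoder/decoder sketch (illustrative, not the paper's implementation):

    ```python
    def elias_gamma_encode(n: int) -> str:
        """Elias gamma code of a positive integer: (bit-length - 1) zeros,
        then the binary representation of n."""
        if n < 1:
            raise ValueError("Elias gamma encodes positive integers only")
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def elias_gamma_decode(bits: str):
        """Decode a concatenation of Elias gamma codes back into integers."""
        out, i = [], 0
        while i < len(bits):
            zeros = 0
            while bits[i] == "0":   # count leading zeros = extra bit length
                zeros += 1
                i += 1
            out.append(int(bits[i:i + zeros + 1], 2))
            i += zeros + 1
        return out

    # Encode the gaps between sorted feature indices, e.g. [1, 3, 8, 2].
    stream = "".join(elias_gamma_encode(g) for g in [1, 3, 8, 2])
    assert elias_gamma_decode(stream) == [1, 3, 8, 2]
    ```

    Decoding is branch-heavy but embarrassingly parallel across fingerprints, which is what makes it a good fit for a GPU.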

  19. Massively Parallel Latent Semantic Analyzes using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speed of the CPU version did not vary appreciably when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
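
    The LSA step being accelerated reduces a term-document matrix to a low-rank latent space via truncated SVD; documents are then compared by cosine similarity in that space. A small CPU-side sketch with made-up counts (not the paper's code):

    ```python
    import numpy as np

    # Hypothetical 5-term x 4-document count matrix.
    A = np.array([
        [2, 0, 1, 0],
        [1, 1, 0, 0],
        [0, 2, 0, 1],
        [0, 0, 3, 1],
        [1, 0, 1, 2],
    ], dtype=float)

    # Rank-2 LSA: keep only the two largest singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in the latent space

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    sim = cosine(doc_vecs[0], doc_vecs[2])  # similarity of documents 0 and 2
    print(doc_vecs.shape, sim)
    ```

    The SVD and the dense matrix products around it are exactly the BLAS-style operations that map well onto CUBLAS on a GPU.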

  20. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this allows a threefold reduction in computing time for the UnifiedMetrics procedure.
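    The Pearson-correlation step can be illustrated with two tiny, entirely hypothetical lexicons (the words and scores below are invented for the sketch and are not from the USL dataset):

    ```python
    import numpy as np

    # Two hypothetical sentiment lexicons scoring the same lexical entries.
    lex_a = {"good": 0.9, "bad": -0.8, "okay": 0.1, "awful": -0.9}
    lex_b = {"good": 0.7, "bad": -0.6, "okay": 0.0, "awful": -0.8}

    shared = sorted(set(lex_a) & set(lex_b))
    x = np.array([lex_a[w] for w in shared])
    y = np.array([lex_b[w] for w in shared])

    # r = +1: perfectly correlated, 0: uncorrelated, -1: inversely correlated.
    r = np.corrcoef(x, y)[0, 1]
    ```

    A high r indicates the two lexicons agree on polarity and their scores can be unified; on the GPU, each core would compute r for its own subset of entries.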

  1. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
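    The exact-matching step at the heart of tools like MUMmer can be shown with a naive Python seed finder (a toy standing in for the suffix-tree kernel; the sequences and the `min_len` parameter are invented for the example, and real tools report only maximal matches):

    ```python
    def exact_seeds(reference, query, min_len=4):
        """All (ref_pos, query_pos) pairs where query matches reference for min_len bases."""
        hits = []
        for q in range(len(query) - min_len + 1):
            seed = query[q:q + min_len]
            start = reference.find(seed)
            while start != -1:           # record every occurrence of this seed
                hits.append((start, q))
                start = reference.find(seed, start + 1)
        return hits

    hits = exact_seeds("ACGTACGTGACC", "TACGTG")
    ```

    In MUMmerGPU, each query is matched against the shared suffix tree by an independent GPU thread, which is what makes the problem data-parallel.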

  2. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
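    The "nested grids" brute-force idea mentioned above can be sketched on a toy arctan rotation-curve model (our own example, not the paper's galaxy model; the parameter ranges and grid sizes are made up). Each level scans a coarse grid, then zooms in around the best cell:

    ```python
    import numpy as np

    def model(v0, rt, r):
        """Toy rotation curve: v(r) = v0 * (2/pi) * arctan(r / rt)."""
        return v0 * (2 / np.pi) * np.arctan(r / rt)

    def nested_grid_fit(r, v_obs, v0_rng=(50, 400), rt_rng=(0.5, 10), levels=4, n=15):
        for _ in range(levels):
            v0s = np.linspace(*v0_rng, n)
            rts = np.linspace(*rt_rng, n)
            chi2 = [((model(a, b, r) - v_obs) ** 2).sum() for a in v0s for b in rts]
            i, j = divmod(int(np.argmin(chi2)), n)
            best = (v0s[i], rts[j])
            # Shrink the search window around the best grid cell and repeat.
            dv, dt = (v0_rng[1] - v0_rng[0]) / n, (rt_rng[1] - rt_rng[0]) / n
            v0_rng = (best[0] - dv, best[0] + dv)
            rt_rng = (best[1] - dt, best[1] + dt)
        return best

    r = np.linspace(0.5, 20, 40)
    v_obs = model(200.0, 2.5, r)          # noiseless synthetic data
    best = nested_grid_fit(r, v_obs)      # converges near (200, 2.5)
    ```

    Every chi-squared evaluation on a grid level is independent, which is exactly the kind of embarrassingly parallel workload a GPU accelerates well.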

  3. Graphics processing unit-accelerated quantitative trait Loci detection.

    Science.gov (United States)

    Chapuis, Guillaume; Filangi, Olivier; Elsen, Jean-Michel; Lavenier, Dominique; Le Roy, Pascale

    2013-09-01

    Mapping quantitative trait loci (QTL) using genetic marker information is a time-consuming analysis that has interested the mapping community in recent decades. The increasing amount of genetic marker data allows one to consider ever more precise QTL analyses while increasing the demand for computation. Part of the difficulty of detecting QTLs resides in finding appropriate critical values or threshold values, above which a QTL effect is considered significant. Different approaches exist to determine these thresholds, using either empirical methods or algebraic approximations. In this article, we present a new implementation of existing software, QTLMap, which takes advantage of the data-parallel nature of the problem by offloading heavy computations to a graphics processing unit (GPU). Developments on the GPU were implemented using CUDA technology. This new implementation performs up to 75 times faster than the previous multicore implementation, while maintaining the same results and level of precision (Double Precision) and computing both QTL values and thresholds. This speedup allows one to perform more complex analyses, such as linkage disequilibrium linkage analyses (LDLA) and multiQTL analyses, in a reasonable time frame.
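    One common empirical way to obtain such a significance threshold is a permutation test, sketched here on synthetic data (our own toy illustration, not QTLMap's statistic; the marker counts, sample sizes, and the use of a simple correlation scan are all invented for the example):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_ind, n_markers = 200, 50
    genotypes = rng.integers(0, 2, size=(n_ind, n_markers)).astype(float)
    phenotype = rng.normal(size=n_ind)

    def scan(geno, pheno):
        """Max absolute correlation between the phenotype and any marker."""
        g = (geno - geno.mean(0)) / geno.std(0)
        p = (pheno - pheno.mean()) / pheno.std()
        return np.abs(g.T @ p / len(p)).max()

    # Null distribution of the scan maximum: shuffle phenotypes, rescan.
    null_max = [scan(genotypes, rng.permutation(phenotype)) for _ in range(200)]
    threshold = np.quantile(null_max, 0.95)   # empirical 5% genome-wide threshold
    ```

    Each permutation is independent of the others, which is why this kind of threshold computation parallelizes so naturally onto GPU threads.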

  4. Accelerating VASP electronic structure calculations using graphics processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  5. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ˜80× faster than serial implementations, and ˜5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
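    The checkerboard idea can be shown in miniature on an Ising-like lattice rather than a full Cellular Potts Hamiltonian (a deliberately simplified analogy, not the paper's algorithm; the lattice size and coupling are arbitrary). Sites of one checkerboard parity never neighbour each other, so they can all be updated at once without conflicting writes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, beta = 32, 0.6
    spins = rng.choice([-1, 1], size=(L, L))

    def sweep(spins, parity):
        # Sum of the four nearest neighbours (periodic boundaries).
        nbr = sum(np.roll(spins, s, axis=a) for s in (1, -1) for a in (0, 1))
        dE = 2 * spins * nbr                        # energy cost of flipping each site
        accept = rng.random((L, L)) < np.exp(-beta * np.clip(dE, 0, None))
        mask = (np.indices((L, L)).sum(0) % 2) == parity
        spins[mask & accept] *= -1                  # flip one parity in parallel
        return spins

    for _ in range(50):
        spins = sweep(spins, 0)
        spins = sweep(spins, 1)
    ```

    Here the parallel update is vectorised in NumPy; on a GPU, one thread per site plays the same role, with atomics guarding the cell-volume bookkeeping the abstract mentions.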

  6. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  7. Efficient graphics processing unit-based voxel carving for surveillance

    Science.gov (United States)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.

  8. Sequentially solution-processed, nanostructured polymer photovoltaics using selective solvents

    KAUST Repository

    Kim, Do Hwan

    2014-01-01

    We demonstrate high-performance sequentially solution-processed organic photovoltaics (OPVs) with a power conversion efficiency (PCE) of 5% for blend films using a donor polymer based on the isoindigo-bithiophene repeat unit (PII2T-C10C8) and a fullerene derivative [6,6]-phenyl-C[71]-butyric acid methyl ester (PC71BM). This has been accomplished by systematically controlling the swelling and intermixing processes of the layer with various processing solvents during deposition of the fullerene. We find that among the solvents used for fullerene deposition that primarily swell but do not re-dissolve the polymer underlayer, there were significant microstructural differences between chlorobenzene and o-dichlorobenzene solvents (CB and ODCB, respectively). Specifically, we show that the polymer crystallite orientation distribution in films where ODCB was used to cast the fullerene is broad. This indicates that out-of-plane charge transport through a tortuous transport network is relatively efficient due to a large density of inter-grain connections. In contrast, using CB results in primarily edge-on oriented polymer crystallites, which leads to diminished out-of-plane charge transport. We correlate these microstructural differences with photocurrent measurements, which clearly show that casting the fullerene out of ODCB leads to significantly enhanced power conversion efficiencies. Thus, we believe that tuning the processing solvents used to cast the electron acceptor in sequentially-processed devices is a viable way to controllably tune the blend film microstructure. © 2014 The Royal Society of Chemistry.

  9. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two-step alternating procedure of active set update rules and hyperparameter optimization based upon marginal likelihood maximization. The active set update rules rely on the ability of the predictive distributions of a Gaussian process classifier to estimate the relative contribution of a datapoint when being either included or removed from the model. This means that we can use it to include points with potentially high impact on the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria; the first one prefers a model with interpretable active set parameters, whereas the second puts computational complexity first, thus a model...

  10. The signal selection and processing method for polarization measurement radar

    Institute of Scientific and Technical Information of China (English)

    CHANG YuLiang; WANG XueSong; LI YongZhen; XIAO ShunPing

    2009-01-01

    Based on the ambiguity function, a novel signal processing method for polarization measurement radar is developed. One advantage of this method is that the two orthogonally polarized signals do not have to be perpendicular to each other, which is required by traditional methods. The error due to the correlation of the two transmitting signals in the traditional method can be reduced by this new approach. A concept called the ambiguity function matrix (AFM) is introduced based on this method. AFM is a promising tool for signal selection and design in polarization scattering matrix measurement. The waveforms of polarimetric radar are categorized and analyzed based on AFM in this paper. The signal processing flow of this method is explained, and the polarization scattering matrix measurement performance is verified by simulation. Furthermore, this signal processing method can be used in the inter-pulse interval measurement technique as well as in the instantaneous measurement technique.

  11. Optimization of post combustion carbon capture process-solvent selection

    Directory of Open Access Journals (Sweden)

    Udara S. P. R. Arachchige, Muhammad Mohsin, Morten C. Melaaen

    2012-01-01

    Full Text Available The reduction of the main energy requirement in the CO2 capture process, that is, the re-boiler duty in the stripper section, is important. The present study focused on the selection of a better solvent concentration and CO2 lean loading for the CO2 capture process. Both coal- and gas-fired power plant flue gases were considered to develop the capture plant with different efficiencies. Solvent concentration was varied from 25 to 40 (w/w %) and CO2 lean loading from 0.15 to 0.30 (mol CO2/mol MEA) for 70-95 (mol %) CO2 removal efficiencies. The optimum specifications for the coal and gas processes, such as MEA concentration, CO2 lean loading, and solvent inlet flow rate, were obtained.

  12. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  14. Selection of Forklift Unit for Warehouse Operation by Applying Multi-Criteria Analysis

    Directory of Open Access Journals (Sweden)

    Predrag Atanasković

    2013-07-01

    Full Text Available This paper presents research related to the choice of criteria that can be used to perform an optimal selection of a forklift unit for warehouse operation. The analysis explores the requirements and defines the relevant criteria that matter when an investment decision is made for forklift procurement and, based on the conducted research, applies multi-criteria analysis to determine the appropriate parameters and their relative weights, which form the input data for selecting the optimal handling unit. This paper presents an example of choosing the optimal forklift based on the selected criteria for the purpose of making the relevant investment decision.
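    A minimal weighted-sum multi-criteria ranking can be sketched as follows (the alternatives, criteria, and weights below are invented for illustration; the paper's actual criteria and weights are not reproduced here):

    ```python
    # Criteria and their relative weights (weights sum to 1; all hypothetical).
    criteria = ["price", "capacity", "maintenance", "ergonomics"]
    weights = {"price": 0.4, "capacity": 0.3, "maintenance": 0.2, "ergonomics": 0.1}

    # Normalised scores in [0, 1]; higher is better (price already inverted).
    alternatives = {
        "Forklift A": {"price": 0.6, "capacity": 0.9, "maintenance": 0.7, "ergonomics": 0.5},
        "Forklift B": {"price": 0.8, "capacity": 0.6, "maintenance": 0.8, "ergonomics": 0.7},
        "Forklift C": {"price": 0.5, "capacity": 0.8, "maintenance": 0.6, "ergonomics": 0.9},
    }

    def score(alt):
        """Weighted sum of normalised criterion scores."""
        return sum(weights[c] * alt[c] for c in criteria)

    ranking = sorted(alternatives, key=lambda k: score(alternatives[k]), reverse=True)
    ```

    More elaborate methods (AHP, TOPSIS, and similar) refine how the weights and normalised scores are derived, but the final aggregation step often reduces to a ranking like this one.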

  15. Development and application of microbial selective plugging processes

    Energy Technology Data Exchange (ETDEWEB)

    Jenneman, G.E. [Phillips Petroleum Co., Bartlesville, OK (United States)]; Gevertz, D.; Davey, M.E. [Agouron Institute, La Jolla, CA (United States)]; and others

    1995-12-31

    Phillips Petroleum Company recently completed a microbial selective plugging (MSP) pilot at the North Burbank Unit (NBU), Shidler, Oklahoma. Nutrients were selected for the pilot that could stimulate indigenous microflora in the reservoir brine to grow and produce exopolymer. It was found that soluble corn starch polymers (e.g., maltodextrins) stimulated the indigenous bacteria to produce exopolymer, whereas simple sugars (e.g., glucose and sucrose), as well as complex media (e.g., molasses and Nutrient Broth), did not. Injection of maltodextrin into rock cores in the presence of indigenous NBU bacteria resulted in stable permeability reductions (> 90%) across the entire length, while injection of glucose resulted only in face plugging. In addition, it was found that organic phosphate esters (OPE) served as a preferable source of phosphorus for the indigenous bacteria, since orthophosphates and condensed phosphates precipitated in NBU brine at reservoir temperature (45 °C). Injection of maltodextrin and ethyl acid phosphate into a producing well stimulated an increase in maltodextrin utilizing bacteria (MUB) in the back-flowed, produced fluid. Additional screens of indigenous and nonindigenous bacteria yielded several nonindigenous isolates that could synthesize polymer when growing in brine containing 6% NaCl at 45 °C.

  16. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems Division (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA), and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  17. Sri Lanka's Health Unit Program: A Model of "Selective" Primary Health Care

    Directory of Open Access Journals (Sweden)

    Soma Hewa

    2011-12-01

    Full Text Available This paper argues that the health unit program developed in Sri Lanka in the early twentieth century was an earlier model of selective primary health care promoted by the Rockefeller Foundation in the 1980s in opposition to comprehensive primary health care advocated by the Alma-Ata Declaration of the World Health Organization. A key strategy of the health unit program was to identify the most common and serious infectious diseases in each health unit area and control them through improved sanitation, health education, immunization and treatment with the help of local communities. The health unit program was later introduced to other countries in South and Southeast Asia as part of the Rockefeller Foundation's global campaign to promote public health.

  18. Race, Self-Selection, and the Job Search Process

    Science.gov (United States)

    Pager, Devah; Pedulla, David S.

    2015-01-01

    While existing research has documented persistent barriers facing African American job seekers, far less research has questioned how job seekers respond to this reality. Do minorities self-select into particular segments of the labor market to avoid discrimination? Such questions have remained unanswered due to the lack of data available on the positions to which job seekers apply. Drawing on two original datasets with application-specific information, we find little evidence that blacks target or avoid particular job types. Rather, blacks cast a wider net in their search than similarly situated whites, including a greater range of occupational categories and characteristics in their pool of job applications. Finally, we show that perceptions of discrimination are associated with increased search breadth, suggesting that broad search among African Americans represents an adaptation to labor market discrimination. Together these findings provide novel evidence on the role of race and self-selection in the job search process. PMID:26046224

  19. Race, self-selection, and the job search process.

    Science.gov (United States)

    Pager, Devah; Pedulla, David S

    2015-01-01

    While existing research has documented persistent barriers facing African-American job seekers, far less research has questioned how job seekers respond to this reality. Do minorities self-select into particular segments of the labor market to avoid discrimination? Such questions have remained unanswered due to the lack of data available on the positions to which job seekers apply. Drawing on two original data sets with application-specific information, we find little evidence that blacks target or avoid particular job types. Rather, blacks cast a wider net in their search than similarly situated whites, including a greater range of occupational categories and characteristics in their pool of job applications. Additionally, we show that perceptions of discrimination are associated with increased search breadth, suggesting that broad search among African-Americans represents an adaptation to labor market discrimination. Together these findings provide novel evidence on the role of race and self-selection in the job search process.

  20. PASS-GP: Predictive active set selection for Gaussian processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2010-01-01

    We propose a new approximation method for Gaussian process (GP) learning for large data sets that combines inline active set selection with hyperparameter optimization. The predictive probability of the label is used for ranking the data points. We use the leave-one-out predictive probability available in GPs to make a common ranking for both active and inactive points, allowing points to be removed again from the active set. This is important for keeping the complexity down and at the same time focusing on points close to the decision boundary. We lend both theoretical and empirical support to the active set selection strategy and marginal likelihood optimization on the active set. We make extensive tests on the USPS and MNIST digit classification databases with and without incorporating invariances, demonstrating that we can get state-of-the-art results (e.g. 0.86% error on MNIST) with reasonable...

  1. Selective target processing: perceptual load or distractor salience?

    Science.gov (United States)

    Eltiti, Stacy; Wallace, Denise; Fox, Elaine

    2005-07-01

    Perceptual load theory (Lavie, 1995) states that participants cannot engage in focused attention when shown displays containing a low perceptual load, because attentional resources are not exhausted, whereas in high-load displays attention is always focused, because attentional resources are exhausted. An alternative "salience" hypothesis holds that the salience of distractors, and not perceptual load per se, determines selective attention. Three experiments were conducted to investigate the influence that target and distractor onsets and offsets have on selective processing in a standard interference task. Perceptual load theory predicts that, regardless of target or distractor presentation (onset or offset), interference from ignored distractors should occur in low-load displays only. In contrast, the salience hypothesis predicts that interference should occur when the distractor appears as an onset and would occur for distractor offsets only when the target was also an offset. Interference may even occur in high-load displays if the distractor is more salient. The results supported the salience hypothesis.

  2. Influence of substrate process tolerances on transmission characteristics of frequency-selective surface

    Institute of Scientific and Technical Information of China (English)

    He Zhang; Jun Lu; Guancheng Sun; Hongliang Xiao

    2008-01-01

    Frequency-selective surface (FSS) is a two-dimensional periodic structure consisting of a dielectric substrate and the metal units (or apertures) arranged periodically on it. When the substrate is manufactured, its thickness and dielectric constant are subject to process tolerances. This may cause the center frequency of the FSS to shift and consequently influence its characteristics. In this paper, a bandpass FSS structure is designed. The units are Jerusalem crosses arranged squarely. The mode-matching technique is used for simulation. The influence of the tolerances of the substrate's thickness and dielectric constant on the center frequency is analyzed. Results show that the tolerances of thickness and dielectric constant have different influences on the center frequency of the FSS. It is necessary to ensure the process tolerance of the dielectric constant in the design and manufacturing of the substrate in order to stabilize the center frequency.

  3. SELECTION AND PRELIMINARY EVALUATION OF ALTERNATIVE REDUCTANTS FOR SRAT PROCESSING

    Energy Technology Data Exchange (ETDEWEB)

    Stone, M.; Pickenheim, B.; Peeler, D.

    2009-06-30

    Defense Waste Processing Facility - Engineering (DWPF-E) has requested the Savannah River National Laboratory (SRNL) to perform scoping evaluations of alternative flowsheets with the primary focus on alternatives to formic acid during Chemical Process Cell (CPC) processing. The reductants selected for testing during the evaluation of alternative reductants for Sludge Receipt and Adjustment Tank (SRAT) processing fall into two general categories: reducing acids and non-acidic reducing agents. Reducing acids were selected as direct replacements for formic acid to reduce mercury in the SRAT, to acidify the sludge, and to balance the melter REDuction/OXidation potential (REDOX). Non-acidic reductants were selected as melter reductants and would not be able to reduce mercury in the SRAT. Sugar was not tested during this scoping evaluation, as previous work has already been conducted on the use of sugar with DWPF feeds. Based on the testing performed, the only viable short-term path to mitigating hydrogen generation in the CPC is replacement of formic acid with a mixture of glycolic and formic acids. An experiment using glycolic acid blended with formic acid on an 80:20 molar basis was able to reduce mercury while also targeting a predicted REDuction/OXidation (REDOX) of 0.2 expressed as Fe²⁺/ΣFe. Based on this result, SRNL recommends performing a complete CPC demonstration of the glycolic/formic acid flowsheet followed by a design basis development and documentation. Of the options tested recently and in the past, the nitric/glycolic/formic blended-acid flowsheet has the potential for near-term implementation in the existing CPC equipment, providing rapid throughput improvement. Use of a non-acidic reductant is recommended only if the processing constraints to remove mercury and acidify the sludge are eliminated. The non-acidic reductants (e.g. sugar) will not reduce mercury during CPC processing and sludge acidification would

  4. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity, accelerator backgrounds, and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  5. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  6. Board Directors' Selection Process Following a Gender Quota

    DEFF Research Database (Denmark)

    Sigurjonsson, Olaf; Arnardottir, Audur Arna

    The 2008 financial crisis in Iceland was triggered by poor governance of three of the country’s major banks and resulted in new corporate governance regulations including a 40% gender quota for the boards of all state-owned enterprises, publicly traded enterprises, and public and private limited......-quota selection of new board directors as well as the attitudes of board members towards the quota and perceptions of the effect of quota on processes. We incorporate a dual qualitative and quantitative methodology with in-depth interviews with 20 board directors and chairs, and a survey of 260 directors who...

  7. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-11-30

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the results, we observe that even for our most basic asynchronous relaxation scheme, despite its expected lower convergence rate compared to the Gauss-Seidel relaxation, the asynchronous iteration running on GPUs is still able to provide solution approximations of certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability and the good fit of the asynchronous schemes for the highly parallel GPU architectures. Further, enhancing the most basic asynchronous approach with hybrid schemes - using multiple iterations within the "subdomain" handled by a GPU thread block and Jacobi-like asynchronous updates across the "boundaries", subject to tuning various parameters - we manage to not only recover the loss of global convergence but often accelerate convergence by up to a factor of two (compared to the effective but difficult-to-parallelize Gauss-Seidel type of schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of the asynchronous methods not only as a stand-alone numerical solver for linear systems of equations fulfilling certain convergence conditions but, more importantly, as a smoother in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance of and the need for asynchronous methods, as the ones described in this work, is expected to grow.
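
    The contrast the abstract draws between synchronous (Jacobi) sweeps and asynchronous component-wise updates can be illustrated with a small serial sketch. This is not the authors' CUDA code; it models "asynchronous" updates by letting each component read whatever values are currently stored, in random order, and all names and the test system below are invented for illustration.

```python
import numpy as np

def jacobi_sync(A, b, iters=100):
    """Synchronous Jacobi: every component update reads the previous iterate."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D          # whole vector advanced at once
    return x

def jacobi_async(A, b, iters=100, rng=None):
    """Asynchronous relaxation, modelled serially: components are updated
    in place in random order, each using whatever values are currently
    stored -- there is no global synchronisation point per sweep."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in rng.permutation(n):
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# A strictly diagonally dominant system, for which both schemes converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(jacobi_sync(A, b), np.linalg.solve(A, b)))
print(np.allclose(jacobi_async(A, b), np.linalg.solve(A, b)))
```

    On a GPU each component update would run in its own thread, which is why the scheme's tolerance of stale neighbour values is the key property.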

  8. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Abstract - Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  9. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the predictions of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented in GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the Graphical Hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al.(2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71 93-109. [Juez et al.(2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225 166-204. [Lacasta et al.(2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes Advances in Engineering Software. 78 1-15.
[Lacasta

  10. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of the GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation is based on a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100^2 and 6000^2 elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus.
Those tests indicate that the GPU memory size
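
    The kind of time-stepping kernel being ported can be illustrated in one dimension. The sketch below is a minimal 1D acoustic (not viscoelastic) solver with second-order centred differences; the grid size, wave speed, and steps are invented for illustration, not taken from the paper.

```python
import numpy as np

# Minimal 1D acoustic wave stepper with 2nd-order centred differences in
# time and space. The paper's code is 2D viscoelastic on a staggered grid;
# this only illustrates the stencil-update structure that is ported to GPU.
nx, nt = 200, 400
dx, dt, c = 1.0, 0.005, 100.0       # grid step, time step, wave speed
assert c * dt / dx <= 1.0           # CFL stability condition

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0                    # initial pressure pulse

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]        # centred 2nd derivative
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_prev, p = p, p_next           # advance one time level

print(float(np.max(np.abs(p))))     # bounded, since the CFL condition holds
```

    Each grid point's update depends only on its immediate neighbours from the previous step, which is what lets such loops map naturally onto one GPU thread per grid point.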

  11. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET ({\\bf H}ELAS {\\bf E}valuation with {\\bf G}PU {\\bf E}nhanced {\\bf T}echnology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final state jets we can evaluate on a GPU is limited to 4 for pure gluon processes ($gg\\to 4g$), or 5 for processes with one or more quark lines such as $q\\bar{q}\\to 5g$ and $qq\\to qq+3g$. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the $gg\\to 4g$ processes for which the GPU gain over the CPU is about 20.

  12. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  13. Novel roles for selected genes in meiotic DNA processing.

    Directory of Open Access Journals (Sweden)

    Philip W Jordan

    2007-12-01

    Full Text Available High-throughput studies of the 6,200 genes of Saccharomyces cerevisiae have provided valuable data resources. However, these resources require a return to experimental analysis to test predictions. An in-silico screen, mining existing interaction, expression, localization, and phenotype datasets, was developed with the aim of selecting minimally characterized genes involved in meiotic DNA processing. Based on our selection procedure, 81 deletion mutants were constructed and tested for phenotypic abnormalities. Eleven (13.6%) genes were identified to have novel roles in meiotic DNA processes including DNA replication, recombination, and chromosome segregation. In particular, this analysis showed that Def1, a protein that facilitates ubiquitination of RNA polymerase II as a response to DNA damage, is required for efficient synapsis between homologues and normal levels of crossover recombination during meiosis. These characteristics are shared by a group of proteins required for Zip1 loading (ZMM proteins). Additionally, Soh1/Med31, a subunit of the RNA pol II mediator complex, Bre5, a ubiquitin protease cofactor and an uncharacterized protein, Rmr1/Ygl250w, are required for normal levels of gene conversion events during meiosis. We show how existing datasets may be used to define gene sets enriched for specific roles and how these can be evaluated by experimental analysis.

  14. Domino Process Achieves Site-Selective Peptide Modification with High Optical Purity. Applications to Chain Diversification and Peptide Ligation.

    Science.gov (United States)

    Romero-Estudillo, Ivan; Boto, Alicia

    2015-10-02

    The development of peptide libraries by site-selective modification of a few parent peptides would save valuable time and materials in discovery processes but still is a difficult synthetic challenge. Herein, we introduce natural hydroxyproline as a convertible unit for the production of a variety of optically pure amino acids, including expensive N-alkyl amino acids, homoserine lactones, and Agl lactams, and to achieve the mild, efficient, and site-selective modification of peptides. A domino process is used to cleave the customizable Hyp unit under mild, metal-free conditions. Both terminal and internal positions can be modified, and similar customizable units can be differentiated. The resulting products possess two reactive chains which can be manipulated independently. The versatility and scope of this process is highlighted by its application to the ligation of two peptide chains, and the generation of peptides with several chains and peptides with conformational restrictions.

  15. Layerwise Monitoring of the Selective Laser Melting Process by Thermography

    Science.gov (United States)

    Krauss, Harald; Zeugner, Thomas; Zaeh, Michael F.

    Selective Laser Melting is utilized to build parts directly from CAD data. In this study layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters including scan vector length, laser power, layer thickness and inter-part distance in the job layout. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. This enables the identification of hot spots in an early stage during the solidification process and helps to avoid process interrupts. Potential quality indicators are derived from spatially resolved measurement data and are correlated to the resulting part properties. A model of heat dissipation is presented based on the measurement of the material response for varying heat input. Current results show the feasibility of process surveillance by thermography for a limited section of the building platform in a commercial system.

  16. Nanoemulsion: process selection and application in cosmetics--a review.

    Science.gov (United States)

    Yukuyama, M N; Ghisleni, D D M; Pinto, T J A; Bou-Chacra, N A

    2016-02-01

    In recent decades, considerable and continuous growth in consumer demand in the cosmetics field has spurred the development of sophisticated formulations, aiming at high performance, attractive appearance, sensorial benefit and safety. Yet despite increasing demand from consumers, the formulator faces certain restrictions regarding the optimum equilibrium between the active compound concentration and the formulation base, taking into account the nature of the skin structure, mainly concerning the ideal penetration of the active compound, due to the natural skin barrier. An emulsion is a mixture of two immiscible phases, and interest in nanoscale emulsions has been growing considerably in recent decades due to their specific attributes such as high stability, attractive appearance and drug delivery properties; therefore, performance is expected to improve using a lipid-based nanocarrier. Nanoemulsions are generated by different approaches: the so-called high-energy and low-energy methods. A global overview of these mechanisms and the different alternatives for each method are presented in this paper, along with their benefits and drawbacks. As a cosmetics formulation is reflected in product delivery to consumers, nanoemulsion development with prospects for large-scale production is one of the key attributes in the method selection process. Thus, the aim of this review was to highlight the main high- and low-energy methods applicable in cosmetics and dermatological product development, their specificities, recent research on these methods in cosmetics, and considerations for process selection optimization. The specific processes regarding inorganic nanoparticles, polymer nanoparticles and nanocapsule formulation are not considered in this paper.

  17. Accelerating Wright-Fisher Forward Simulations on the Graphics Processing Unit.

    Science.gov (United States)

    Lawrie, David S

    2017-09-07

    Forward Wright-Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright-Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called "embarrassingly parallel," consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright-Fisher simulation, or "GO Fish" for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. Copyright © 2017 Lawrie.
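
    The serial single-locus algorithm that GO Fish parallelizes can be sketched as a per-generation selection step followed by a binomial drift step. This is an illustrative sketch, not the GO Fish code; the population size, selection coefficient, and seed are made up.

```python
import numpy as np

def wright_fisher(N, s, p0, generations, rng):
    """Single-locus forward Wright-Fisher simulation with selection:
    each generation tilts the allele's sampling probability by the
    selection coefficient s, then draws 2N new copies binomially."""
    p = p0
    freqs = [p]
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection
        p = rng.binomial(2 * N, w) / (2 * N)        # genetic drift
        freqs.append(p)
    return np.array(freqs)

rng = np.random.default_rng(42)
traj = wright_fisher(N=1000, s=0.05, p0=0.1, generations=200, rng=rng)
# Under strong positive selection the allele frequency typically rises:
print(traj[0], traj[-1])
```

    Because each replicate (and each independent locus) is a separate trajectory, vast numbers of such simulations can run concurrently, which is the concurrency the abstract exploits on the GPU.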

  18. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    Science.gov (United States)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than the problems of the processor units, because almost all processors fall outside the acceptable variation limits and can affect the mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  19. Selecting a Control Strategy for Plug and Process Loads

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Sheppy, M.; Brackney, L.; Pless, S.; Torcellini, P.

    2012-09-01

    Plug and Process Loads (PPLs) are building loads that are not related to general lighting, heating, ventilation, cooling, and water heating, and typically do not provide comfort to the building occupants. PPLs in commercial buildings account for almost 5% of U.S. primary energy consumption. On an individual building level, they account for approximately 25% of the total electrical load in a minimally code-compliant commercial building, and can exceed 50% in an ultra-high efficiency building such as the National Renewable Energy Laboratory's (NREL) Research Support Facility (RSF) (Lobato et al. 2010). Minimizing these loads is a primary challenge in the design and operation of an energy-efficient building. A complex array of technologies that measure and manage PPLs has emerged in the marketplace. Some fall short of manufacturer performance claims, however. NREL has been actively engaged in developing an evaluation and selection process for PPL control, and is using this process to evaluate a range of technologies for active PPL management that will cap RSF plug loads. Using a control strategy to match plug load use to users' required job functions represents a huge untapped potential for energy savings.

  20. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. Data comprised internal documentation of the case company, literature, and semi-structured, themed interviews of process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  1. Effects of Selection and Training on Unit-Level Performance over Time: A Latent Growth Modeling Approach

    Science.gov (United States)

    Van Iddekinge, Chad H.; Ferris, Gerald R.; Perrewe, Pamela L.; Perryman, Alexa A.; Blass, Fred R.; Heetderks, Thomas D.

    2009-01-01

    Surprisingly few data exist concerning whether and how utilization of job-related selection and training procedures affects different aspects of unit or organizational performance over time. The authors used longitudinal data from a large fast-food organization (N = 861 units) to examine how change in use of selection and training relates to…

  2. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in kinetics of different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
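
    Because both loss pathways are first-order, the reported half-lives translate directly into rate constants, and rate constants for processes acting in parallel simply add. A small sketch (the two-compound combination at the end mixes values from different drugs purely for illustration and is not a result from the study):

```python
import math

def k_from_half_life(t_half):
    """First-order rate constant from a half-life: k = ln 2 / t_1/2."""
    return math.log(2) / t_half

def combined_half_life(*half_lives):
    """Half-life when several first-order losses act in parallel:
    rate constants add, so t_1/2 = ln 2 / (k_1 + k_2 + ...)."""
    k_total = sum(k_from_half_life(t) for t in half_lives)
    return math.log(2) / k_total

# Rate constants from the half-lives reported in the abstract (hours):
k_photo_abacavir = k_from_half_life(1.6)    # fast phototransformation
k_bio_acyclovir = k_from_half_life(74.0)    # much slower biotransformation
print(round(k_photo_abacavir, 3), round(k_bio_acyclovir, 4))

# Hypothetical compound with t1/2,photo = 25 h and t1/2,bio = 120 h:
print(round(combined_half_life(25.0, 120.0), 1))  # 20.7 h
```

    The combined half-life is always shorter than the fastest single pathway's, which is why a treatment train of photic and biotic zones can outperform either alone.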

  3. ExoMars 2018 Landing Site Selection Process

    Science.gov (United States)

    Vago, Jorge L.; Kminek, Gerhard; Rodionov, Daniel

    The ExoMars 2018 mission will include two science elements: a Rover and a Surface Platform. The ExoMars Rover will carry a comprehensive suite of instruments dedicated to geology and exobiology research named after Louis Pasteur. The Rover will be able to travel several kilometres searching for signs of past and present life. It will do this by collecting and analysing samples from outcrops, and from the subsurface, down to 2-m depth. The very powerful combination of mobility with the ability to access locations where organic molecules can be well preserved is unique to this mission. After the Rover has egressed, the ExoMars Surface Platform will begin its science mission to study the surface environment at the landing location. This talk will describe the landing site selection process and introduce the scientific, planetary protection, and engineering requirements that candidate landing sites must comply with in order to be considered for the mission.

  4. Selection criteria for waste management processes in manned space missions.

    Science.gov (United States)

    Doll, S; Cothran, B; McGhee, J

    1991-10-01

    Management of waste produced during manned space exploration missions will be an important function of advanced life support systems. Waste materials can be thrown away or recovered for reuse. The first approach relies totally on external supplies to replace depleted resources while the second approach regenerates resources internally. The selection of appropriate waste management processes will be based upon criteria which include mission and hardware characteristics as well as overall system considerations. Mission characteristics discussed include destination, duration, crew size, operating environment, and transportation costs. Hardware characteristics include power, mass and volume requirements as well as suitability for a given task. Overall system considerations are essential to assure optimization for the entire mission rather than for an individual system. For example, a waste management system designed for a short trip to the moon will probably not be the best one for an extended mission to Mars. The purpose of this paper is to develop a methodology to identify and compare viable waste management options for selection of an appropriate waste management system.

  5. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash

    2014-02-05

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.
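
    The computational trade-off this article addresses can be seen in bare-bones form: conditioning a GP on m pseudo sites instead of all n points replaces an O(n^3) solve with an O(m^3) one. The sketch below uses a fixed random subset of the data as the pseudo sites; the article's contribution is to treat both the number and the locations of those sites as unknowns sampled by reversible-jump MCMC, which this sketch does not do. Kernel, lengthscale, and noise level are invented.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential (RBF) kernel for 1-D inputs."""
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, 200)                   # n = 200 training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(200)

# Subset-of-data approximation: condition on m << n pseudo sites only,
# so the dense linear solve is m x m instead of n x n.
m = 20
idx = rng.choice(len(X), m, replace=False)
Xm, ym = X[idx], y[idx]

K = rbf(Xm, Xm) + 0.1**2 * np.eye(m)         # kernel matrix + noise term
alpha = np.linalg.solve(K, ym)               # O(m^3) instead of O(n^3)

X_test = np.linspace(0, 5, 50)
mean = rbf(X_test, Xm) @ alpha               # posterior predictive mean
print(float(np.sqrt(np.mean((mean - np.sin(X_test)) ** 2))))  # RMS error
```

    Choosing the sites well (rather than at random, as here) is exactly what governs the balance between computation time and prediction accuracy that the Bayesian site selection method optimizes.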

  6. Selection of Vendor Based on Intuitionistic Fuzzy Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2014-01-01

    Full Text Available The business environment is characterized by greater domestic and international competition in the global market. Vendors play a key role in achieving corporate competitiveness. It is not easy, however, to identify good vendors because evaluation is based on multiple criteria. In practice, for the vendor selection problem (VSP), most of the input information about the criteria is not known precisely. The intuitionistic fuzzy set is an extension of classical fuzzy set theory (FST) and is a suitable way to deal with impreciseness. In other words, the application of intuitionistic fuzzy sets instead of fuzzy sets introduces another degree of freedom, called the nonmembership function, into the set description. In this paper, we propose a triangular intuitionistic fuzzy number based approach for the vendor selection problem using the analytical hierarchy process. The crisp data of the vendors are represented in the form of triangular intuitionistic fuzzy numbers. By applying AHP, which involves decomposition, pairwise comparison, and deriving priorities for the various levels of the hierarchy, an overall crisp priority is obtained for ranking the best vendor. A numerical example illustrates our method. Lastly, a sensitivity analysis is performed to find the most critical criterion on the basis of which the vendor is selected.
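    As a sketch of the AHP machinery the abstract invokes (crisp pairwise comparison and priority derivation only, not the paper's triangular intuitionistic fuzzy extension), the code below derives a priority vector with the common geometric-mean approximation of the principal eigenvector. The 3-by-3 matrix is a hypothetical example, not data from the paper.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for three vendors
    # (Saaty 1-9 scale; entry [i, j] = preference of vendor i over vendor j).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    # Priorities via the geometric-mean (row) method.
    g = A.prod(axis=1) ** (1.0 / A.shape[0])
    priorities = g / g.sum()
    print(priorities)            # vendor 1 ranks highest for this matrix

    # Consistency ratio check (random index RI = 0.58 for n = 3);
    # CR below 0.1 means the judgments are acceptably consistent.
    lam = (A @ priorities / priorities).mean()
    CI = (lam - A.shape[0]) / (A.shape[0] - 1)
    print(CI / 0.58)
    ```

    The intuitionistic fuzzy variant replaces the crisp entries with triangular intuitionistic fuzzy numbers and defuzzifies before ranking; the comparison/priority/consistency skeleton stays the same.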

  7. Supercritical boiler material selection using fuzzy analytic network process

    Directory of Open Access Journals (Sweden)

    Saikat Ranjan Maity

    2012-08-01

    Full Text Available The recent development of the world is being adversely affected by the scarcity of power and energy. To survive into the next generation, it is thus necessary to explore non-conventional energy sources and to consume the available sources efficiently. For efficient exploitation of the existing energy sources, great scope lies in the use of Rankine cycle-based thermal power plants. Today, the gross efficiency of Rankine cycle-based thermal power plants is less than 28%, which has been increased up to 40% with reheating and regenerative cycles. It can be further improved up to 47% by using supercritical power plant technology. Supercritical power plants use supercritical boilers, which are able to withstand very high temperatures (650-720˚C) and pressures (22.1 MPa) while producing superheated steam. The thermal efficiency of a supercritical boiler greatly depends on the materials of its different components. The supercritical boiler material should possess high creep rupture strength, high thermal conductivity, low thermal expansion, high specific heat, and the ability to withstand very high temperatures. This paper considers a list of seven supercritical boiler materials whose performance is evaluated based on seven pivotal criteria. Given the intricacy of this supercritical boiler material selection problem, with interactions and interdependencies between the criteria, this paper applies the fuzzy analytic network process to select the most appropriate material for a supercritical boiler. Rene 41 is found to be the best supercritical boiler material, whereas Haynes 230 is the least preferred choice.

  8. Unit Operation Experiment Linking Classroom with Industrial Processing

    Science.gov (United States)

    Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon

    2013-01-01

    An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…

  9. Effect of energetic dissipation processes on the friction unit tribological

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

    Full Text Available This article presents the influence of temperature on the rheological and friction coefficients of cast iron friction-unit elements. It was found that the surface layer formed under friction heating has good rub-off (wear) resistance, owing to structural hardening of the surface layer and its capacity for stress relaxation.

  10. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process existing with perfect monitoring is extended to the realistic case where the failure times of the units are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than both the classical failure rate calculated without any information on the units' state and the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed in detail and illustrated with a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  11. Evaluation of Select Surface Processing Techniques for In Situ Application During the Additive Manufacturing Build Process

    Science.gov (United States)

    Book, Todd A.; Sangid, Michael D.

    2016-07-01

    Although additive manufacturing offers numerous performance advantages for different applications, it is not being used for critical applications due to uncertainties in structural integrity as a result of innate process variability and defects. To minimize uncertainty, the current approach relies on the concurrent utilization of process monitoring, post-processing, and non-destructive inspection in addition to an extensive material qualification process. This paper examines an alternative approach by evaluating the application of select surface processing techniques, including sliding severe plastic deformation (SPD) and fine particle shot peening, on direct metal laser sintering-produced AlSi10Mg materials. Each surface processing technique is compared to baseline as-built and post-processed samples as a proof of concept for surface enhancement. Initial results pairing sliding SPD with the manufacturer's recommended thermal stress relief cycle demonstrated uniform recrystallization of the microstructure, resulting in a more homogeneous distribution of strain within the microstructure than in the as-built or post-processed conditions. This result demonstrates the potential for the in situ application of various surface processing techniques during the layerwise direct metal laser sintering build process.

  12. Performance-Based Technology Selection Filter application report for Teledyne Wah Chang Albany Operable Unit Number One

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, J.G.; Morrison, J.L.; Morneau, R.A.; O' Brien, M.C.; Rudin, M.J.

    1992-05-01

    This report summarizes the application of the Performance-Based Technology Selection Filter (PBTSF) developed for the Idaho National Laboratory's Buried Waste Integrated Demonstration Program as applied to remediation activities conducted at the Teledyne Wah Chang Albany (TWCA) Superfund Site, Operable Unit One. The remedial action at the TWCA Operable Unit One consisted of solidification, excavation, transportation, and monocell disposal of the contents of two sludge ponds contaminated with various inorganic and organic compounds. Inorganic compounds included low levels of uranium and radium isotopes, as well as zirconium, hafnium, chromium, mercury, and nickel. Organic compounds included methylene chloride, 1,1,1-trichloroethane, 1,1-dichloroethane, tetrachloroethane, and hexachlorobenzene. Remediation began in June 1991 and was completed in November 1991. The TWCA Operable Unit One configuration option consisted of 15 functional subelements. Data were gathered on these subelements and on end-to-end system operation to calculate numerical values for 28 system performance measures. These were then used to calculate a system performance score. An assessment was made of the availability and definitional clarity of these performance measures, the applicability of PBTSF utility functions, and the rollup methodology. The PBTSF scoring function worked well, with few problems noted in data gathering, utility function normalization, and scoring calculation. The application of this process to an actual in situ treatment and excavation technical process option clarified the specific terms and bounds of the performance score functions, and identified one problem associated with the definition of the system boundary.

  13. The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality

    Science.gov (United States)

    2007-06-01

    mecpom.army.mil. Original title on 712 A/B: The United States Military Entrance Processing Command (USMEPCOM) uses Six Sigma process to develop and improve data quality. Briefing outline: USMEPCOM overview/history; purpose; Define: what is important.

  14. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  15. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... Farm Service Agency United States Warehouse Act; Processed Agricultural Products Licensing Agreement... warehouse licenses may be issued under the United States Warehouse Act (USWA). Through this notice, FSA is... processed agricultural products that are stored in climate controlled, cooler, and freezer warehouses....

  16. Fast prediction unit selection method for HEVC intra prediction based on salient regions

    Science.gov (United States)

    Feng, Lei; Dai, Ming; Zhao, Chun-lei; Xiong, Jing-ying

    2016-07-01

    In order to reduce the computational complexity of the high efficiency video coding (HEVC) standard, a new algorithm for HEVC intra prediction, namely a fast prediction unit (PU) size selection method based on salient regions, is proposed in this paper. We first build a saliency map for each largest coding unit (LCU) to reduce its texture complexity. Secondly, the optimal PU size is determined via a scheme that implements an information entropy comparison among sub-blocks of the saliency maps. Finally, we apply the partitioning result of the saliency map to the original LCUs, obtaining the optimal partitioning result. Our algorithm can determine the PU size in advance of angular prediction in intra coding, reducing the computational complexity of HEVC. The experimental results show that our algorithm achieves a 37.9% reduction in encoding time, while producing a negligible Bjontegaard delta bit rate (BDBR) loss of 0.62%.
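    A minimal sketch of the entropy-comparison idea: a block is split into smaller PUs when its quadrants carry noticeably different information content. Assumed details (grayscale 8-bit blocks, a 16-bin histogram entropy, an ad hoc spread threshold) are illustrative; the actual algorithm operates on saliency maps of LCUs within the HEVC partitioning machinery.

    ```python
    import numpy as np

    def entropy(block, bins=16):
        """Shannon entropy (bits) of a block's intensity histogram."""
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def should_split(block, thresh=0.5):
        """Split when the four sub-blocks' entropies differ by more than thresh."""
        h, w = block.shape
        subs = [block[:h//2, :w//2], block[:h//2, w//2:],
                block[h//2:, :w//2], block[h//2:, w//2:]]
        e = [entropy(s) for s in subs]
        return max(e) - min(e) > thresh

    rng = np.random.default_rng(1)
    flat = np.full((32, 32), 128, dtype=np.uint8)      # homogeneous region
    mixed = flat.copy()
    mixed[16:, 16:] = rng.integers(0, 256, (16, 16))   # one textured quadrant
    print(should_split(flat), should_split(mixed))     # False True
    ```

    Homogeneous regions keep large PUs (no angular-prediction search over small partitions), which is where the reported encoding-time saving comes from.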

  17. Reactive-Separator Process Unit for Lunar Regolith Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's plans for a lunar habitation outpost call out for process technologies to separate hydrogen sulfide and sulfur dioxide gases from regolith product gas...

  18. Direct selective laser sintering of high performance metals: Machine design, process development and process control

    Science.gov (United States)

    Das, Suman

    1998-11-01

    This dissertation describes the development of an advanced manufacturing technology known as Direct Selective Laser Sintering (Direct SLS). Direct SLS is a laser based rapid manufacturing technology that enables production of functional, fully dense, metal and cermet components via the direct, layerwise consolidation of constituent powders. Specifically, this dissertation focuses on a new, hybrid net shape manufacturing technique known as Selective Laser Sintering/Hot Isostatic Pressing (SLS/HIP). The objective of research presented in this dissertation was to establish the fundamental machine technology and processing science to enable direct SLS fabrication of metal components composed of high performance, high temperature metals and alloys. Several processing requirements differentiate direct SLS of metals from SLS of polymers or polymer coated powders. Perhaps the most important distinguishing characteristic is the regime of high temperatures involved in direct SLS of metals. Biasing the temperature of the feedstock powder via radiant preheat prior to and during SLS processing was shown to be beneficial. Preheating the powder significantly influenced the flow and wetting characteristics of the melt. During this work, it was conclusively established that powder cleanliness is of paramount importance for successful layerwise consolidation of metal powders by direct SLS. Sequential trials were conducted to establish optimal bake-out and degas cycles under high vacuum. These cycles agreed well with established practices in the powder metallurgy industry. A study of some of the important transport mechanisms in direct SLS of metals was undertaken to obtain a fundamental understanding of the underlying process physics. This study not only provides an explanation of phenomena observed during SLS processing of a variety of metallic materials but also helps in developing selection schemes for those materials that are most amenable to direct SLS processing. The

  19. Conceptual design and selection of a biodiesel fuel processor for a vehicle fuel cell auxiliary power unit

    Science.gov (United States)

    Specchia, S.; Tillemans, F. W. A.; van den Oosterkamp, P. F.; Saracco, G.

    Within the European project BIOFEAT (biodiesel fuel processor for a fuel cell auxiliary power unit for a vehicle), a complete modular 10 kWe biodiesel fuel processor capable of feeding a PEMFC will be developed, built and tested to generate electricity for a vehicle auxiliary power unit (APU). Tail pipe emissions reduction, increased use of renewable fuels, increase of hydrogen-fuel economy and efficient supply of present and future APU for road vehicles are the main project goals. Biodiesel is the chosen feedstock because it is a completely natural and thus renewable fuel. Three fuel processing options were taken into account at a conceptual design level and compared for hydrogen production: (i) autothermal reformer (ATR) with high and low temperature shift (HTS/LTS) reactors; (ii) autothermal reformer (ATR) with a single medium temperature shift (MTS) reactor; (iii) thermal cracker (TC) with high and low temperature shift (HTS/LTS) reactors. Based on a number of simulations (with the AspenPlus® software), the best operating conditions were determined (steam-to-carbon and O2/C ratios, operating temperatures and pressures) for each process alternative. The selection of the preferential fuel processing option was consequently carried out based on a number of criteria (efficiency, complexity, compactness, safety, controllability, emissions, etc.); the ATR with both HTS and LTS reactors shows the most promising results, with a net electrical efficiency of 29% (LHV).

  20. Using Ionic Liquids in Selective Hydrocarbon Conversion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yongchun; Periana, Roy; Chen, Weiqun; van Duin, Adri; Nielsen, Robert; Shuler, Patrick; Ma, Qisheng; Blanco, Mario; Li, Zaiwei; Oxgaard, Jonas; Cheng, Jihong; Cheung, Sam; Pudar, Sanja

    2009-09-28

    This is the final report of the five-year project Using Ionic Liquids in Selective Hydrocarbon Conversion Processes (DE-FC36-04GO14276, July 1, 2004 - June 30, 2009), in which we present our major accomplishments with detailed descriptions of our experimental and theoretical efforts. In the successful conduct of this project, we followed our proposed work breakdown structure and completed most of the technical tasks. Finally, we developed and demonstrated several optimized homogeneously catalytic methane conversion systems involving applications of novel ionic liquids, which exhibit performance far superior to the Catalytica system (the best system to date) in terms of three-times-higher reaction rates, longer catalyst lifetimes, and much stronger resistance to water deactivation. We developed an in-depth mechanistic understanding of the complicated chemistry involved in homogeneously catalyzed methane oxidation, as well as unique yet effective experimental protocols (reactors, analytical tools, and screening methodologies) for achieving a highly efficient yet economically feasible and environmentally friendly catalytic methane conversion system. The most important findings have been published and patented, as well as reported to DOE in this final report and our 20 quarterly reports.

  1. Human values in the team leader selection process.

    Science.gov (United States)

    Rovira, Núria; Ozgen, Sibel; Medir, Magda; Tous, Jordi; Alabart, Joan Ramon

    2012-03-01

    The selection process of team leaders is fundamental if the effectiveness of teams is to be guaranteed. Human values have proven to be an important factor in the behaviour of individuals and leaders. The aim of this study is twofold. The first is to validate Schwartz's survey of human values. The second is to determine whether there are any relationships between the values held by individuals and their preferred roles in a team. Human values were measured by the items of the Schwartz Value Survey (SVS) and the preferred roles in a team were identified by the Belbin Self Perception Inventory (BSPI). The two questionnaires were answered by two samples of undergraduate students (183 and 177 students, respectively). As far as the first objective is concerned, Smallest Space Analysis (SSA) was performed at the outset to examine how well the two-dimensional circular structure, as postulated by Schwartz, was represented in the study population. Then, the results of this analysis were compared and contrasted with those of two other published studies; one by Schwartz (2006) and one by Ros and Grad (1991). As for the second objective, Pearson correlation coefficients were computed to assess the associations between the ratings on the SVS survey items and the ratings on the eight team roles as measured by the BSPI.

  2. The role of industrial processes in the reduction of selected greenhouse gases emission

    Directory of Open Access Journals (Sweden)

    Sadowski Maciej

    2016-12-01

    Full Text Available The paper presents an analysis of selected anthropogenic greenhouse gas (GHG) emission sources in industrial processes, as well as the mitigation policies and measures in Annex I Parties to the UN Framework Convention on Climate Change [Text of the United Nations … 1992]. The main gas in this category is carbon dioxide, but several countries have a dominant share of hydrofluorocarbons (HFCs), with a clear upward trend in their emissions. In Poland, the majority of GHG emissions from industrial processes come from three categories: refrigeration and air-conditioning (HFCs), cement production (CO2), and ammonia production (CO2). An analysis of the policies and measures implemented or planned in this group of countries shows that voluntary programs and agreements among governments and stakeholders are the most effective. A crucial element of the voluntary programs is support to assist enterprises in the transition to the best low-carbon technologies and practices.

  3. A BMP selection process based on the granulometry of runoff solids ...

    African Journals Online (AJOL)

    A BMP selection process based on the granulometry of runoff solids in a ... and flow were recorded, in addition to the pollution associated with such flows. ... best management practices using the process selection diagrams is presented.

  4. Concentration of Selected Anions in Bottled Water in the United Arab Emirates

    OpenAIRE

    Abouleish, Mohamed Yehia Z.

    2012-01-01

    Several studies have shown concern over nitrate and nitrite contamination of prepared infant formula used by infants less than six months old, as it may lead to methemoglobinemia and death. One possible source of contamination is through the use of improperly treated drinking water. Contamination of water could result from fertilizers and manure runoff, not fully treated and released human and industrial waste, or from disinfection processes. In the United Arab Emirates (UAE), bottled water i...

  5. Prospects for expanded mohair and cashmere production and processing in the United States of America.

    Science.gov (United States)

    Lupton, C J

    1996-05-01

    Mohair from Angora goats has been produced in the United States since the introduction of these animals from Turkey in 1849. Cashmere on Texas meat goats was reported in 1973, but domestic interest in commercial production did not occur until the mid-1980s. Since 1982, the average prices of U.S.-produced mohair and cashmere (de-haired) have ranged from $1.81 to $9.48/kg and approximately $55 to $200/kg, respectively. However, return to producers from mohair has been relatively constant, averaging $10.21/kg, due to the federal incentive program. Because this program is scheduled to terminate with final payment in 1996, the future of mohair profitability is questionable. Prospects for expanded mohair and cashmere production and processing in the United States are influenced by numerous interacting factors and potential constraints. These include the prospect that the goat and textile industries may no longer be profitable in the absence of clear government policies. Although selection may have slightly increased fiber production by Angoras (long term) and domestic meat goats (short term), availability of genetic resources may prove to be a constraint to increased fiber production by cashmere goats and improved meat production by both types of goat. Land resources are plentiful unless new government policies prohibit goats from vast tracts of rangeland and forest because of environmental concerns. Future demand is an unknown, but with increasing world population and affluence, prospects for long-term improved demand for luxury fibers seem good. Competition from foreign cashmere growers is expected, whereas, in the short term, mohair production overseas is declining. However, increased processing of cashmere in its country of origin is expected to result in shortages of raw materials for European and U.S. processors. The amount of scouring, worsted, and woolen equipment in the United States is adequate to accommodate major increases in domestic processing of goat

  6. Effects of knowledge, attitudes, and practices of primary care providers on antibiotic selection, United States.

    Science.gov (United States)

    Sanchez, Guillermo V; Roberts, Rebecca M; Albert, Alison P; Johnson, Darcia D; Hicks, Lauri A

    2014-12-01

    Appropriate selection of antibiotic drugs is critical to optimize treatment of infections and limit the spread of antibiotic resistance. To better inform public health efforts to improve prescribing of antibiotic drugs, we conducted in-depth interviews with 36 primary care providers in the United States (physicians, nurse practitioners, and physician assistants) to explore knowledge, attitudes, and self-reported practices regarding antibiotic drug resistance and antibiotic drug selection for common infections. Participants were generally familiar with guideline recommendations for antibiotic drug selection for common infections, but did not always comply with them. Reasons for nonadherence included the belief that nonrecommended agents are more likely to cure an infection, concern for patient or parent satisfaction, and fear of infectious complications. Providers inconsistently defined broad- and narrow-spectrum antibiotic agents. There was widespread concern for antibiotic resistance; however, it was not commonly considered when selecting therapy. Strategies to encourage use of first-line agents are needed in addition to limiting unnecessary prescribing of antibiotic drugs.

  7. Mechanical Properties of Ti-6Al-4V Octahedral Porous Material Unit Formed by Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Jianfeng Sun

    2012-01-01

    Full Text Available The Ti-6Al-4V octahedral porous material unit is designed and its load is calculated. In this paper, ANSYS is adopted for the load simulation of the unit, and a simplified model for dimensional theoretical calculation is established, from which the analytical equation for the fracture load is obtained and the load calculation for Ti-6Al-4V is completed. Moreover, selective laser melting is adopted in processing the Ti-6Al-4V porous material unit, and the experimental value of the fracture load of this material is obtained through a compression experiment. The results show that the simulation curves approximate the variation tendency of the elastic deformation portion of the compression curves; the theoretical calculation curves approximate the general variation tendency; and the experimental value of the fracture load is very close to the theoretical value. Therefore, the theoretical prediction accuracy for the fracture load is high, which lays the foundation for studying the mechanical properties of the octahedral porous material.

  8. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    B. Zhang (Bo); C.W. Oosterlee (Cornelis)

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Levy processes of infinite activity. Moreover, the influence

  9. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Levy processes of infinite activity. Moreover, the influence

  10. Research Regarding the Anticorosiv Protection of Atmospheric and Vacuum Distillation Unit that Process Crude Oil

    Directory of Open Access Journals (Sweden)

    M. Morosanu

    2011-12-01

    Full Text Available Due to their high boiling temperatures, organic acids are present in the hotter areas of metal equipment in atmospheric and vacuum distillation units and cause increased corrosion in furnace tubes, transfer lines, metal equipment within the distillation columns, etc. To protect the metal equipment of atmospheric and vacuum distillation units against acid corrosion, the authors researched solutions that integrate corrosion inhibitors with the selection of materials for equipment construction. For this purpose, we tested the inhibitor PET 1441, which contains dialkyl phosphate in its composition, and an inhibitor based on phosphate ester. In this case, a phosphorous complex forms on the metal surface at high temperature and high fluid speed. In order to form the passive layer and to achieve 90% protection, a shock dose is initially introduced, and a dose of 20 ppm is then used to ensure further protection. The verification of the anticorrosion protection, namely the inhibition efficiency, is achieved by testing samples made from different steels.

  11. The Cilium: Cellular Antenna and Central Processing Unit

    OpenAIRE

    Malicki, Jarema J.; Johnson, Colin A.

    2017-01-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple si...

  12. Uniting Gradual and Abrupt set Processes in Resistive Switching Oxides

    Science.gov (United States)

    Fleck, Karsten; La Torre, Camilla; Aslam, Nabeel; Hoffmann-Eifert, Susanne; Böttger, Ulrich; Menzel, Stephan

    2016-12-01

    Identifying limiting factors is crucial for a better understanding of the dynamics of the resistive switching phenomenon in transition-metal oxides. This improved understanding is important for the design of fast-switching, energy-efficient, and long-term stable redox-based resistive random-access memory devices. Therefore, this work presents a detailed study of the set kinetics of valence change resistive switches on a time scale from 10 ns to 10^4 s, taking Pt/SrTiO3/TiN nanocrossbars as a model material. The analysis of the transient currents reveals that the switching process can be subdivided into a linear-degradation process that is followed by a thermal runaway. The comparison with a dynamical electrothermal model of the memory cell allows the deduction of the physical origin of the degradation. The origin is an electric-field-induced increase of the oxygen-vacancy concentration near the Schottky barrier of the Pt/SrTiO3 interface that is accompanied by a steadily rising local temperature due to Joule heating. The positive feedback of the temperature increase on the oxygen-vacancy mobility, and thereby on the conductivity of the filament, leads to a self-acceleration of the set process.

  13. Fairness reactions to personnel selection methods: An international comparison between the Netherlands, the United States, France, Spain, Portugal, and Singapore

    NARCIS (Netherlands)

    Anderson, N.; Witvliet, C.

    2008-01-01

    This paper reports reactions to employee selection methods in the Netherlands and compares these findings internationally against six other previously published samples covering the United States, France, Spain, Portugal, and Singapore. A sample of 167 participants rated 10 popular assessment

  14. U.S. Census Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States

    Data.gov (United States)

    U.S. Department of Health & Human Services — 2010-2015. U.S. Census Annual Estimates of the Resident Population for Selected Age Groups by Sex for the United States. The estimates are based on the 2010 Census...

  16. Characterization of Enterococcus isolates colonizing the intestinal tract of intensive care unit patients receiving selective digestive decontamination

    NARCIS (Netherlands)

    Bello Gonzalez, Teresita D.J.; Pham, Phu; Top, Janetta; Willems, Rob J.L.; Schaik, van Willem; Passel, van Mark W.J.; Smidt, Hauke

    2017-01-01

    Enterococci have emerged as important opportunistic pathogens in intensive care units (ICUs). In this study, enterococcal population size and Enterococcus isolates colonizing the intestinal tract of ICU patients receiving Selective Digestive Decontamination (SDD) were investigated. All nine

  17. Oral Health Disparities as Determined by Selected Healthy People 2020 Oral Health Objectives for the United States, ...

    Science.gov (United States)

  18. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Pyroprocessing, which is a dry recycling method, converts spent nuclear fuel into U (uranium)/TRU (transuranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method. Toward this end, the pyroprocess was classified into four unit processes: pretreatment, electrochemical reduction, electrorefining, and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of pretreatment, electrochemical reduction, electrorefining, and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU, and 231 US$/kgU-TRU, respectively. Finally, the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost drivers for the raw material cost were identified as the cost of Li3PO4, needed for the LiCl-KCl purification process, and of platinum as an anode electrode in the electrochemical reduction process.
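
    As a quick arithmetic check on the abstract's figures, the reported total is simply the sum of the four unit process costs. A minimal sketch (variable names are ours, not the paper's):

```python
# Unit process costs from the abstract, in US$/kgU-TRU
unit_costs = {
    "pretreatment": 195,
    "electrochemical_reduction": 310,
    "electrorefining": 215,
    "electrowinning": 231,
}

total = sum(unit_costs.values())
print(total)  # 951, matching the reported total pyroprocess cost
```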

  19. COSTS AND PROFITABILITY IN FOOD PROCESSING: PASTRY TYPE UNITS

    Directory of Open Access Journals (Sweden)

    DUMITRANA MIHAELA

    2013-08-01

    For each company, profitability, product quality, and customer satisfaction are the most important targets. To attain these targets, managers need to know the costs that are used in decision making. What kinds of costs? How are these costs calculated for a specific sector such as food processing? These are only a few of the questions answered in our paper. We consider that a case study of this sector may be relevant for anyone interested in increasing the profitability of this specific activity sector.

  20. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Additive manufacturing (AM) is increasingly applied in development projects from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters which are used to improve various mechanical and other properties of the products. The paper sets out a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that it must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are performed on an SLS machine.

  1. ENTREPRENEURIAL OPPORTUNITIES IN FOOD PROCESSING UNITS (WITH SPECIAL REFERENCES TO BYADGI RED CHILLI COLD STORAGE UNITS IN THE KARNATAKA STATE

    Directory of Open Access Journals (Sweden)

    P. ISHWARA

    2010-01-01

    After the green revolution, we are now ushering in the evergreen revolution in the country; food processing is an evergreen activity and the key to the agricultural sector. In this paper an attempt has been made to study the workings of food processing units, with special reference to red chilli cold storage units in the Byadgi district of Karnataka State. Byadgi has been famous for red chilli since the days of antiquity. The vast and extensive market yard in Byadgi taluk is famous as the second largest red chilli market in the country. However, the most common and recurring problem faced by the farmer is the inability to store enough red chilli from one harvest to another. Red chilli that was locally abundant for only a short period of time had to be stored against times of scarcity. In recent years, due to oleoresin, demand for red chilli has grown from other countries like Sri Lanka, Bangladesh, America, Europe, Nepal, Indonesia, and Mexico. The study reveals that all the cold storage units of the study area have been using the vapour compression refrigeration system. All entrepreneurs are satisfied with their turnover and profit and are in a good economic position. Even though average turnover and profits have increased, a few units have shown a negligible decrease in turnover and profit. This is due to competition from the increasing number of cold storages and from early-established units. The cold storages of the study area store red chilli, chilli seeds, chilli powder, tamarind, jeera, dania, turmeric, sunflower, zinger, channa, flower seeds, etc., but 80 per cent of each cold storage is filled by red chilli, owing to the existence of the vast and extensive red chilli market yard in Byadgi. There is no business without problems. In the same way, the entrepreneurs chosen for the study are facing a few problems in their business like skilled labour, technical and management

  2. The Cilium: Cellular Antenna and Central Processing Unit.

    Science.gov (United States)

    Malicki, Jarema J; Johnson, Colin A

    2017-02-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple signals into specific outputs and may have functions similar to logic gates of digital systems. Some combinations of input signals appear to impose higher hierarchical control related to the cell cycle. An integrated view of these regulatory inputs will be necessary to understand ciliogenesis and its wider relevance to human biology. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Professional Competence of the Head of External Relations Unit and its Development in the Study Process

    OpenAIRE

    Turuševa, Larisa

    2010-01-01

    Dissertation Annotation: Larisa Turuševa's promotion paper „Professional Competence of the Head of External Relations Unit and its Development in the Study Process” presents completed research on the development of the professional competence of heads of external relations units and on conditions for study programme development. A model of the professional competence of the head of an external relations unit is worked out, and its indicators and levels are described. A study process model for th...

  4. Econobiophysics - game of choosing. Model of selection or election process with diverse accessible information

    Science.gov (United States)

    2011-01-01

    We propose several models applicable to both selection and election processes when each selecting or electing subject has access to different information about the objects to choose from. We wrote special software to simulate these processes. We consider both the cases when the environment is neutral (natural process) as well as when the environment is involved (controlled process). PMID:21892959

  5. Language selection in bilingual speech: evidence for inhibitory processes.

    Science.gov (United States)

    Kroll, Judith F; Bobb, Susan C; Misra, Maya; Guo, Taomei

    2008-07-01

    Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence to suggest that both languages are active when only one language is spoken (e.g., [Poulisse, N. (1999). Slips of the tongue: Speech errors in first and second language production. Amsterdam/Philadelphia: John Benjamins]). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper, we first review the evidence for cross-language activity during single word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, that candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the nontarget language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.

  6. The Aluminum Deep Processing Project of North United Aluminum Landed in Qijiang

    Institute of Scientific and Technical Information of China (English)

    2014-01-01

    On April 10, North United Aluminum Company signed investment cooperation agreements with Qijiang Industrial Park and with Qineng Electricity & Aluminum Co., Ltd, signifying the landing of North United Aluminum’s aluminum deep processing project in Qijiang.

  7. DOE’s Alcohol Fuels Awards Process Resulted in Questionable Award Selections and Limited Small Business Success.

    Science.gov (United States)

    1981-08-21

    THE UNITED STATES ... PROCESS RESULTED IN QUESTIONABLE AWARD SELECTIONS AND LIMITED SMALL BUSINESS SUCCESS. As part of its alternative fuels program, the... intensive nature of alcohol fuels technology, and the larger number of high-quality small business proposals, DOE anticipated that small business success would... may also want to closely monitor the success of small businesses. In the event that a desired level of small business success is not achieved, the

  8. Selected Tools for Risk Analysis in Logistics Processes

    Science.gov (United States)

    Kulińska, Ewa

    2012-03-01

    As each organization aims at managing effective logistics processes, risk factors can and should be controlled through a proper system of risk management. Implementation of a complex approach to risk management allows for the following: - evaluation of significant risk groups associated with logistics processes implementation, - composition of integrated strategies of risk management, - composition of tools for risk analysis in logistics processes.

  9. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    Science.gov (United States)

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the graphics processing unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
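
    The rescaling idea can be illustrated with a small CPU-side sketch (ours, not the authors' MATLAB/CUDA package): in a white-Monte-Carlo-style approach, per-photon path lengths from one baseline run are stored, and diffuse reflectance for any new absorption coefficient is obtained by reweighting each photon with a Beer-Lambert factor instead of re-simulating.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for stored per-photon path lengths from one baseline run (cm);
# a real simulation would record these while tracking scattering only.
path_lengths_cm = rng.exponential(scale=0.5, size=100_000)

def diffuse_reflectance(mua_per_cm):
    """Reweight the baseline photons for a new absorption coefficient."""
    return float(np.mean(np.exp(-mua_per_cm * path_lengths_cm)))

# Reflectance falls monotonically as absorption increases
for mua in (0.1, 1.0, 10.0):
    print(mua, diffuse_reflectance(mua))
```

Because the expensive path sampling happens once, sweeping many absorption values costs only one vectorized reweighting each, which is what makes the GPU version so fast.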

  10. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture standard. The input is processed by the Input Mezzanine and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker detectors. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  11. Argumentative SOX Compliant and Quality Decision Support Intelligent Expert System over the Suppliers Selection Process

    Directory of Open Access Journals (Sweden)

    Jesus Angel Fernandez Canelas

    2013-01-01

    The objective of this paper is to define a decision support system for SOX (Sarbanes-Oxley Act) compliance and for the quality of the supplier selection process, based on artificial intelligence and argumentation theory knowledge and techniques. The SOX law, in effect nowadays, was created to improve financial government control over US companies. This law has become a de facto standard outside the United States due to several factors, like present globalization, the expansion of US companies, and the key influence of US stock exchange markets worldwide. This paper constitutes a novel approach to this kind of problem due to the following elements: (1) it has an optimized structure to search for the solution, (2) it has a dynamic learning method to handle the decisions of courts and government control bodies, (3) it uses fuzzy knowledge to improve its performance, and (4) it uses its accumulated past experience to let the system evolve far beyond its initial state.

  12. A formative approach to strategic message targeting through soap operas: using selective processing theories.

    Science.gov (United States)

    Dutta-Bergman, Mohan J

    2006-01-01

    In the past 2 decades, soap operas have been used extensively to attain prosocial change in other parts of the world. The role of the soap opera in achieving social change has become of special interest to strategic health message designers and planners in the United States. Before a strategic approach is implemented, however, researchers need to conduct formative research to interrogate the viability of soap operas and guide communication strategies. This article constructs a profile of the soap opera user who is younger, less educated, and earns less than the nonuser. Using selective processing theory, I argue that the health-oriented individual is most likely to remember health content from soap operas and incorporate the content in future behavior. Strategic media planning and message construction guidelines are provided for the use of soap operas as vehicles for reinforcing positive health behaviors and introducing new behaviors in the health oriented segment.

  13. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  14. Advanced In-Space Propulsion (AISP): High Temperature Boost Power Processing Unit (PPU) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The task is to investigate the technology path to develop a 10kW modular Silicon Carbide (SiC) based power processing unit (PPU). The PPU utilizes the high...

  15. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) Power Processing Unit (PPU) for Hall Effect...

  16. Downstream process synthesis for biochemical production of butanol, ethanol, and acetone from grains: generation of optimal and near-optimal flowsheets with conventional operating units.

    Science.gov (United States)

    Liu, Jiahong; Fan, L T; Seib, Paul; Friedler, Ferenc; Bertok, Botond

    2004-01-01

    Manufacturing butanol, ethanol, and acetone through grain fermentation has been attracting increasing research interest. In the production of these chemicals from fermentation, the cost of product recovery constitutes the major portion of the total production cost. Developing cost-effective flowsheets for the downstream processing is, therefore, crucial to enhancing the economic viability of this manufacturing method. The present work is concerned with the synthesis of such a process that minimizes the cost of the downstream processing. At the outset, a wide variety of processing equipment and unit operations, i.e., operating units, is selected for possible inclusion in the process. Subsequently, the exactly defined superstructure with minimal complexity, termed maximal structure, is constructed from these operating units with the rigorous and highly efficient graph-theoretic method for process synthesis based on process graphs (P-graphs). Finally, the optimal and near-optimal flowsheets in terms of cost are identified.

  17. Donald Campbell's Model of the Creative Process: Creativity as Blind Variation and Selective Retention.

    Science.gov (United States)

    Simonton, Dean Keith

    1998-01-01

    This introductory article discusses a blind-variation and selective-retention model of the creative process developed by Donald Campbell. According to Campbell, creativity contains three conditions: a mechanism for introducing variation, a consistent selection process, and a mechanism for preserving and reproducing selected variations. (Author/CR)

  18. A descriptive survey of management and operations at selected sports medicine centers in the United States.

    Science.gov (United States)

    Olsen, D

    1996-11-01

    No uniform guidelines for operations or accreditation standards for sports medicine centers were available and, at the time of this study, little information on the management and operation of sports medicine centers was available in the literature. The purpose of the study was to determine the management structure and function of selected sports medicine centers in the United States. Questionnaires were mailed to 200 randomly selected centers throughout the United States from a directory of sports medicine centers published in Physician and Sportsmedicine (1992) to gather descriptive information on eight areas: 1) general background, 2) staffing, 3) services, facilities, and equipment, 4) billing, collections, and revenue, 5) clientele, caseloads, and referrals, 6) ownership and financing, 7) school and club outreach contracts, and 8) marketing strategies and future trends. A total of 71 surveys (35.5%) were returned in the allotted time frame. Data were analyzed using ranges, means, medians, modes, and percentages. Results yielded several conclusions about sports medicine centers. Nearly all (93%) of the centers employed physical therapists; physical therapists were clinical directors at 70.2% of centers; orthopaedists were most often medical directors; rehabilitation was the most frequently offered service (93%); physical therapy produced the highest revenue; sports injuries accounted for a mean 34.5% of patients, who were mostly recreational or high school athletes between 13 and 60 years of age; primary shareholders were most often physical therapists or physicians; most were involved in outreach services for schools; marketing strategies primarily involved communication with referral sources; and managed care was identified most frequently as a trend affecting the future of sports medicine centers. Findings identified common aspects of sports medicine centers and may assist in establishing guidelines for operations or accreditation of sports medicine

  19. Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow

    Science.gov (United States)

    2016-12-01

    Naval Postgraduate School, Monterey, California. MBA Professional Report: Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow. Author: Nathan A. Campbell. ...an item is requested but not maintained in the WRM inventory. By conducting a process analysis and using computer modeling, our recommendations are

  20. LITERATURE REVIEWS TO SUPPORT ION EXCHANGE TECHNOLOGY SELECTION FOR MODULAR SALT PROCESSING

    Energy Technology Data Exchange (ETDEWEB)

    King, W

    2007-11-30

    This report summarizes the results of literature reviews conducted to support the selection of a cesium removal technology for application in a small column ion exchange (SCIX) unit supported within a high level waste tank. SCIX is being considered as a technology for the treatment of radioactive salt solutions in order to accelerate closure of waste tanks at the Savannah River Site (SRS) as part of the Modular Salt Processing (MSP) technology development program. Two ion exchange materials, spherical Resorcinol-Formaldehyde (RF) and engineered Crystalline Silicotitanate (CST), are being considered for use within the SCIX unit. Both ion exchange materials have been studied extensively and are known to have high affinities for cesium ions in caustic tank waste supernates. RF is an elutable organic resin and CST is a non-elutable inorganic material. Waste treatment processes developed for the two technologies will differ with regard to solutions processed, secondary waste streams generated, optimum column size, and waste throughput. Pertinent references, anticipated processing sequences for utilization in waste treatment, gaps in the available data, and technical comparisons will be provided for the two ion exchange materials to assist in technology selection for SCIX. The engineered, granular form of CST (UOP IE-911) was the baseline ion exchange material used for the initial development and design of the SRS SCIX process (McCabe, 2005). To date, in-tank SCIX has not been implemented for treatment of radioactive waste solutions at SRS. Since initial development and consideration of SCIX for SRS waste treatment an alternative technology has been developed as part of the River Protection Project Waste Treatment Plant (RPP-WTP) Research and Technology program (Thorson, 2006). 
Spherical RF resin is the baseline media for cesium removal in the RPP-WTP, which was designed for the treatment of radioactive waste supernates and is currently under construction in Hanford, WA.

  1. Statistical Comparisons of watershed scale response to climate change in selected basins across the United States

    Science.gov (United States)

    Risley, John; Moradkhani, Hamid; Hay, Lauren; Markstrom, Steve

    2011-01-01

    In an earlier global climate-change study, air temperature and precipitation data for the entire twenty-first century simulated from five general circulation models were used as input to precalibrated watershed models for 14 selected basins across the United States. Simulated daily streamflow and energy output from the watershed models were used to compute a range of statistics. With a side-by-side comparison of the statistical analyses for the 14 basins, regional climatic and hydrologic trends over the twenty-first century could be qualitatively identified. Low-flow statistics (95% exceedance, 7-day mean annual minimum, and summer mean monthly streamflow) decreased for almost all basins. Annual maximum daily streamflow also decreased in all the basins, except for all four basins in California and the Pacific Northwest. An analysis of the supply of available energy and water for the basins indicated that ratios of evaporation to precipitation and potential evapotranspiration to precipitation for most of the basins will increase. Probability density functions (PDFs) were developed to assess the uncertainty and multimodality in the impact of climate change on mean annual streamflow variability. Kolmogorov-Smirnov tests showed significant differences between the beginning and ending twenty-first-century PDFs for most of the basins, with the exception of four basins that are located in the western United States. Almost none of the basin PDFs were normally distributed, and two basins in the upper Midwest had PDFs that were extremely dispersed and skewed.
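
    The two-sample Kolmogorov-Smirnov comparison used above can be sketched in a few lines (synthetic data; the real inputs would be the ensemble mean annual streamflow samples for the beginning- and end-of-century periods):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max distance between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
early = rng.normal(100.0, 15.0, 500)  # hypothetical early-century flows
late = rng.normal(85.0, 20.0, 500)    # hypothetical late-century flows

d = ks_statistic(early, late)
# Approximate 5% critical value for two samples of sizes n = m = 500
crit = 1.358 * np.sqrt((500 + 500) / (500 * 500))
print(d > crit)  # True here: the two periods differ significantly
```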

  2. Eligibility Worker Selection Process: Biographical Inventory Validation Study.

    Science.gov (United States)

    Darany, Theodore; And Others

    One way for agencies to reduce fiscal stress is to minimize employee turnover. A project undertaken by San Bernardino County (California) to reduce employee turnover through the development, validation, and use of a non-traditional worker selection instrument (biographical inventory) is described. This project was aimed at the specific…

  3. Cultural Influence on Selective Attention Processes among Nigerian Adolescents.

    Science.gov (United States)

    Uba, Anselm

    Three experiments in auditory selective attention form the basis of this investigation of cross-cultural differences among the ethnic groups of Ibo and Yoruba adolescents of Nigeria. A sample of 200 16-year-olds was randomly drawn from four secondary schools. Yoruba adolescents showed superior performance in a task involving the repetition of…

  4. A selection-quotient process for packed word Hopf algebra

    CERN Document Server

    Duchamp, G H E; Tanasa, A

    2013-01-01

    In this paper, we define a Hopf algebra structure on the vector space spanned by packed words using a selection-quotient coproduct. We show that this algebra is free on its irreducible packed words. Finally, we give some brief explanations on the Maple codes we have used.

  5. Concentration of Selected Anions in Bottled Water in the United Arab Emirates

    Directory of Open Access Journals (Sweden)

    Mohamed Yehia Z. Abouleish

    2012-05-01

    Several studies have shown concern over nitrate and nitrite contamination of prepared infant formula used by infants less than six months old, as it may lead to methemoglobinemia and death. One possible source of contamination is the use of improperly treated drinking water. Contamination of water could result from fertilizer and manure runoff, from human and industrial waste released without full treatment, or from disinfection processes. In the United Arab Emirates (UAE), bottled water is the major source of drinking water and may be used for the preparation of infant formula. Therefore, in this study, several bottled water brands that are sold on the UAE market, and could be used for the preparation of infant formula, were tested for nitrate, nitrite, and other anions to show their compatibility with the permissible levels of the United States Environmental Protection Agency (U.S. EPA), the United States Food and Drug Administration/Code of Federal Regulations (U.S. FDA/CFR), and other international organizations. All the bottled water samples demonstrated nitrate, nitrite, and other anion levels below the permissible levels accepted by the U.S. EPA, U.S. FDA/CFR, and other international organizations, except for one sample that showed nitrite levels exceeding the European Commission Drinking Water Directive (EC/DWD) permissible levels. Such a study sheds light on the quality of bottled water sold not only in the UAE and the region, but also in other countries, such as France, since some brands are imported. In addition, the results shed light on the effectiveness of the treatment processes and possible sources of infant formula contamination that can affect the health of infants.

  6. Selection of parameters for advanced machining processes using firefly algorithm

    Directory of Open Access Journals (Sweden)

    Rajkamal Shukla

    2017-02-01

    Advanced machining processes (AMPs) are widely utilized in industries for machining complex geometries and intricate profiles. In this paper, two significant processes, electric discharge machining (EDM) and abrasive water jet machining (AWJM), are considered to obtain the optimum values of responses for the given range of process parameters. The firefly algorithm (FA) is applied to the considered processes to obtain optimized parameters, and the results obtained are compared with the results given by previous researchers. The variation of the process parameters with respect to the responses is plotted to confirm the optimum results obtained using FA. In the EDM process, the performance parameter “MRR” is increased from 159.70 gm/min to 181.6723 gm/min, while “Ra” and “REWR” are decreased from 6.21 μm to 3.6767 μm and from 6.21% to 6.324 × 10^-5%, respectively. In the AWJM process, the values of “kerf” and “Ra” are decreased from 0.858 mm to 0.3704 mm and from 5.41 mm to 4.443 mm, respectively. In both processes, the obtained results show a significant improvement in the responses.
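
    For readers unfamiliar with FA, its core update — each firefly moves toward every brighter (better) one with distance-decaying attraction plus a shrinking random step — can be sketched on a toy two-parameter objective. All names and constants below are illustrative, not the paper's settings:

```python
import numpy as np

def objective(x):
    # Toy surrogate for a machining-response cost; minimum at (2, -1)
    return float(np.sum((x - np.array([2.0, -1.0])) ** 2))

rng = np.random.default_rng(42)
n, dim, iters = 20, 2, 200
beta0, gamma, alpha = 1.0, 0.01, 0.1   # attraction, absorption, random step
pos = rng.uniform(-5.0, 5.0, (n, dim))
best_x, best_f = None, np.inf

for _ in range(iters):
    fit = np.array([objective(p) for p in pos])
    for i in range(n):
        for j in range(n):
            if fit[j] < fit[i]:  # firefly i is attracted to brighter j
                beta = beta0 * np.exp(-gamma * np.sum((pos[i] - pos[j]) ** 2))
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                fit[i] = objective(pos[i])
        if fit[i] < best_f:
            best_x, best_f = pos[i].copy(), fit[i]
    alpha *= 0.98  # cool the random walk over iterations

print(best_x, best_f)  # best_x ends up near (2, -1)
```

The attraction term pulls the swarm toward good regions while the decaying random step keeps early exploration and late exploitation, which is why FA suits multimodal parameter-selection problems like these.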

  7. Substitution of Organic Solvents in Selected Industrial Cleaning Processes

    DEFF Research Database (Denmark)

    Jacobsen, Thomas; Rasmussen, Pia Brunn

    1997-01-01

    Volatile organic solvents (VOC) are becoming increasingly unwanted in industrial processes. Substitution of VOC with non-volatile, low-toxicity compounds is one possibility for reducing VOC use. It has been successfully demonstrated that organic solvents used in cleaning processes in sheet offset printing...

  8. How Different Medical School Selection Processes Call upon Different Personality Characteristics

    NARCIS (Netherlands)

    Schripsema, Nienke R; van Trigt, Anke M; van der Wal, Martha A; Cohen-Schotanus, Janke

    2016-01-01

    BACKGROUND: Research indicates that certain personality traits relate to performance in the medical profession. Yet, personality testing during selection seems ineffective. In this study, we examine the extent to which different medical school selection processes call upon desirable personality characteristics.

  9. NERSC-6 Workload Analysis and Benchmark Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Antypas, Katie; Shalf, John; Wasserman, Harvey

    2008-08-29

    This report describes efforts carried out during early 2008 to determine some of the science drivers for the "NERSC-6" next-generation high-performance computing system acquisition. Although the starting point was existing Greenbooks from DOE and the NERSC User Group, the main contribution of this work is an analysis of the current NERSC computational workload combined with requirements information elicited from key users and other scientists about expected needs in the 2009-2011 timeframe. The NERSC workload is described in terms of science areas, computer codes supporting research within those areas, and descriptions of key algorithms that comprise the codes. This work was carried out in large part to help select a small set of benchmark programs that accurately capture the science and algorithmic characteristics of the workload. The report concludes with a description of the codes selected and some preliminary performance data for them on several important systems.

  11. Ground moving target processing for tracking selected targets

    Science.gov (United States)

    Nichols, Howard; Majumder, Uttam; Owirka, Gregory; Finn, Lucas

    2016-05-01

    BAE Systems has developed a baseline real-time selected vehicle (SV) radar tracking capability that successfully tracked multiple civilian vehicles in real-world traffic conditions within challenging semi-urban clutter. This real-time tracking capability was demonstrated in a laboratory setting. Recent enhancements to the baseline capability include multiple detection modes, improvements to the system-level design, and a wide-area tracking mode. The multiple detection modes support two tracking regimes: wide-area and localized selected vehicle tracking. These two regimes pose distinct challenges that may be suited to different trackers. Incorporation of a wide-area tracking mode provides both situational awareness and the potential for enhancing SV track initiation. Improvements to the system-level design simplify the integration of multiple detection modes and of more realistic SV track initiation capabilities, and are designed to contribute to a comprehensive tracking capability that exploits a continuous-stare paradigm. In this paper, the focus is on the challenges, design considerations, and integration of selected vehicle tracking.

  12. Processing of Feature Selectivity in Cortical Networks with Specific Connectivity.

    Directory of Open Access Journals (Sweden)

    Sadra Sadeh

    Full Text Available Although non-specific at the onset of eye opening, networks in rodent visual cortex attain a non-random structure after eye opening, with a specific bias for connections between neurons of similar preferred orientations. As orientation selectivity is already present at eye opening, it remains unclear how this specificity in network wiring contributes to feature selectivity. Using large-scale inhibition-dominated spiking networks as a model, we show that feature-specific connectivity leads to a linear amplification of feedforward tuning, consistent with recent electrophysiological single-neuron recordings in rodent neocortex. Our results show that optimal amplification is achieved at an intermediate regime of specific connectivity. In this configuration a moderate increase of pairwise correlations is observed, consistent with recent experimental findings. Furthermore, we observed that feature-specific connectivity leads to the emergence of orientation-selective reverberating activity, and entails pattern completion in network responses. Our theoretical analysis provides a mechanistic understanding of subnetworks' responses to visual stimuli, and casts light on the regime of operation of sensory cortices in the presence of specific connectivity.

  13. Substitution of Organic Solvents in Selected Industrial Cleaning Processes

    DEFF Research Database (Denmark)

    Jacobsen, Thomas; Rasmussen, Pia Brunn

    1997-01-01

    Volatile organic solvents (VOC) are becoming increasingly unwanted in industrial processes. Substitution of VOC with non-volatile, low-toxicity compounds is one possibility for reducing VOC use. It has been successfully demonstrated that organic solvents used in cleaning processes in sheet offset printing......, and industrial coating processes are likely candidates for substitution of VOC with VOFA. Requirements on the resulting surfaces may, however, hinder the replacement. This is especially important when the surface has to be coated in a subsequent step....

  14. Automated processing of whole blood units: operational value and in vitro quality of final blood components

    Science.gov (United States)

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. The yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958

  15. Histogram bin width selection for time-dependent Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)

    2004-07-23

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
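
    The bin-width optimization described above has a simple empirical counterpart: scan candidate widths and minimize the cost (2·mean − variance)/width² built from the bin counts, the MSE-based selection rule published by the same authors for time histograms. The sinusoidally regulated Poisson data below are synthetic and purely illustrative, an assumed stand-in for the processes studied in the paper.

```python
import numpy as np

def optimal_bin_width(events, t_start, t_stop, n_bins_range=range(2, 200)):
    """Return the histogram bin width minimizing the cost
    C(w) = (2*mean - var) / w**2 over the candidate bin counts."""
    best_cost, best_w = None, None
    for n in n_bins_range:
        w = (t_stop - t_start) / n
        counts, _ = np.histogram(events, bins=n, range=(t_start, t_stop))
        cost = (2 * counts.mean() - counts.var()) / w ** 2
        if best_cost is None or cost < best_cost:
            best_cost, best_w = cost, w
    return best_w

# Sinusoidally regulated Poisson process generated by thinning.
rng = np.random.default_rng(0)
t = rng.uniform(0, 10, size=4000)
keep = rng.uniform(size=t.size) < 0.5 * (1 + np.sin(2 * np.pi * t))
events = np.sort(t[keep])
w_opt = optimal_bin_width(events, 0.0, 10.0)
```

    For a strongly modulated rate the selected width is much smaller than the modulation period, so the histogram can resolve the rate variation.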

  17. Self-Repair and Language Selection in Bilingual Speech Processing

    Directory of Open Access Journals (Sweden)

    Inga Hennecke

    2013-07-01

    Full Text Available In psycholinguistic research the exact level of language selection in bilingual lexical access is still controversial and current models of bilingual speech production offer conflicting statements about the mechanisms and location of language selection. This paper aims to provide a corpus analysis of self-repair mechanisms in code-switching contexts of highly fluent bilingual speakers in order to gain further insights into bilingual speech production. The present paper follows the assumptions of the Selection by Proficiency model, which claims that language proficiency and lexical robustness determine the mechanism and level of language selection. In accordance with this hypothesis, highly fluent bilinguals select languages at a prelexical level, which should influence the occurrence of self-repairs in bilingual speech. A corpus of natural speech data of highly fluent and balanced bilingual French-English speakers of the Canadian French variety Franco-Manitoban serves as the basis for a detailed analysis of different self-repair mechanisms in code-switching environments. Although the speech data contain a large amount of code-switching, results reveal that only a few speech errors and self-repairs occur in direct code-switching environments. A detailed analysis of the respective starting point of code-switching and the different repair mechanisms supports the hypothesis that highly proficient bilinguals do not select languages at the lexical level.

  18. Chapter 2: selection process for nuclear sites; Capitulo 2: processo de selecao de sitios nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Atala, Drausio Lima

    2009-07-01

    The five elements of a selection process are discussed as follows: 1) Structure of the site selection procedure: the functional stages required to narrow down from the region of interest to the installation licence. 2) Selection criteria: incorporation of the nuclear regulatory, environmental and installation project requirements that should be considered in the selection process. 3) Quantification criterion: quantification of the relative adequacy of a site with respect to each selection criterion. 4) Weighting factors: development of weighting factors reflecting the relative importance of the individual criteria and incorporating the composition of interests among them. 5) Public involvement: development of a process integrating public information and participation.

  19. Determination of Properties of Selected Fresh and Processed Medicinal Plants

    Directory of Open Access Journals (Sweden)

    Shirley G. Cabrera

    2015-11-01

    Full Text Available The study aimed to determine the chemical properties, bioactive compounds, antioxidant activity and toxicity level of fresh and processed medicinal plants such as corn (Zea mays) silk, pancit-pancitan (Peperomia pellucida) leaves, pandan (Pandanus amaryllifolius) leaves, and commercially available tea. The toxicity level of the samples was measured using the Brine Shrimp Lethality Assay (BSLA). Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS). Results showed a significant difference in chemical properties between fresh and processed corn silk except in crude fiber content. Significant differences between fresh and processed medicinal plants were also observed in the proximate analyses, specifically in % moisture, % crude protein and % total carbohydrates. In addition, there was a significant difference in bioactive compound contents, such as total flavonoids and total phenolics, between fresh and processed corn silk except in total vitamin E (TVE) content. Pandan and pancit-pancitan showed significant differences in all bioactive compounds except in total antioxidant content (TAC). Fresh pancit-pancitan had the highest total phenolics content (TPC) and TAC, while fresh and processed corn silk had the lowest TAC and TVE content, respectively. Furthermore, BSLA results for the three medicinal plants and the commercially available tea extract showed a significant difference in toxicity level after 24 hours of exposure. The percentage mortality increased with increasing exposure time for the three medicinal plants and the tea extract. The results of the study can serve as baseline data for further processing and commercialization of these medicinal plants.

  20. The second law as a selection principle: The microscopic theory of dissipative processes in quantum systems.

    Science.gov (United States)

    Prigogine, I; George, C

    1983-07-01

    The second law of thermodynamics, for quantum systems, is formulated on the microscopic level. As for classical systems, such a formulation is only possible when specific conditions are satisfied (continuous spectrum, nonvanishing of the collision operator, etc.). The unitary dynamical group can then be mapped into two contractive semigroups, reaching equilibrium either for t → +∞ or for t → -∞. The second law appears as a symmetry-breaking selection principle, limiting the observables and density functions to the class that tends to thermodynamic equilibrium in the future (for t → +∞). The physical content of the dynamical structure is now displayed in terms of the appropriate semigroup, which is realized through a nonunitary transformation. The superposition principle of quantum mechanics has to be reconsidered as irreversible processes transform pure states into mixtures and unitary transformations are limited by the requirement that entropy remains invariant. In the semigroup representation, interacting fields lead to units that behave incoherently at equilibrium. Inversely, nonequilibrium constraints introduce correlations between these units.

  2. Technology and manufacturing process selection the product life cycle perspective

    CERN Document Server

    Pecas, Paulo; Silva, Arlindo

    2014-01-01

    This book provides specific topics intended to contribute to improved knowledge of technology evaluation and selection in a life cycle perspective. Although each chapter presents possible approaches and solutions, there are no recipes for success. Each reader will find his or her own balance in applying the different topics to his or her specific situation. The case studies presented throughout will help in deciding what fits best in each situation, but most of all, any ultimate success will come out of the interplay between the available solutions and the specific problem or opportunity the reader is faced with.

  3. The Ideal Criteria of Supplier Selection for SMEs Food Processing Industry

    Directory of Open Access Journals (Sweden)

    Ramlan Rohaizan

    2016-01-01

    Full Text Available Selection of a good supplier is important in determining the performance and profitability of the SME food processing industry. A lack of managerial capability in supplier selection affects the competitiveness of SME food processors. This research aims to determine the ideal supplier selection criteria for the food processing industry using the Analytic Hierarchy Process (AHP). The research was carried out quantitatively by distributing questionnaires to 50 SME food processing companies. The collected data were analysed using the Expert Choice software to rank the supplier selection criteria. The results show that the criteria are ranked as cost, quality, service, delivery, and management and organisation, while purchase cost, audit result, defect analysis, transportation cost and fast responsiveness are the top five sub-criteria. The results of this research are intended to improve the managerial capabilities of the SME food processing industry in supplier selection.
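
    The AHP ranking step used above can be sketched as follows: build a pairwise comparison matrix on Saaty's 1-9 scale, take the principal eigenvector as the priority weights, and check the consistency ratio. The matrix below is hypothetical and purely illustrative, not the survey data collected in the study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over five criteria (Saaty 1-9 scale).
criteria = ["cost", "quality", "service", "delivery", "management"]
A = np.array([
    [1,   2,   3,   4,   5],
    [1/2, 1,   2,   3,   4],
    [1/3, 1/2, 1,   2,   3],
    [1/4, 1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/3, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal (Perron) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized priority vector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)      # consistency index
RI = 1.12                                 # Saaty's random index for n = 5
CR = CI / RI                              # consistency ratio; < 0.1 is acceptable

ranking = sorted(zip(criteria, weights), key=lambda p: -p[1])
```

    With this illustrative matrix, "cost" receives the largest weight, mirroring the ordering reported in the study.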

  4. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units.

    Science.gov (United States)

    Fang, Qianqian; Boas, David A

    2009-10-26

    We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPUs) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of massively parallel threads and low memory latency, this algorithm allows many photons to be simulated simultaneously on a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNGs), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from diffusion theory. The code was implemented in the CUDA programming language and benchmarked under various parameters, such as thread number, choice of RNG and memory access pattern. With a low-cost graphics card, this algorithm demonstrated an acceleration ratio above 300 when using 1792 parallel threads, compared with conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.
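
    The essence of such a photon-migration simulation fits in a few lines: sample exponential step lengths, attenuate the photon weight by the single-scattering albedo at each interaction, and pick a new direction at each scattering event. The sketch below is a serial CPU toy with isotropic scattering and an index-matched boundary, not the CUDA implementation described in the record; the optical properties are illustrative.

```python
import math
import random

def mc_reflectance(mu_a, mu_s, n_photons=5000, seed=1):
    """Diffuse reflectance of a semi-infinite homogeneous medium, estimated by
    a minimal serial Monte Carlo: exponential free paths, isotropic scattering,
    index-matched boundary, absorption handled by weight attenuation."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = 0.0
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0                 # launched straight into the medium
        w = 1.0
        while w > 1e-4:
            s = -math.log(1.0 - rng.random()) / mu_t   # exponential free path
            x += ux * s; y += uy * s; z += uz * s
            if z < 0.0:                            # crossed the surface: escapes
                refl += w
                break
            w *= albedo                            # fraction surviving absorption
            cos_t = 2.0 * rng.random() - 1.0       # isotropic new direction
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * rng.random()
            ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
    return refl / n_photons

R = mc_reflectance(mu_a=0.1, mu_s=10.0)            # illustrative properties, mm^-1
```

    A GPU version maps each photon loop to an independent thread, which is what makes the speedups reported above possible.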

  5. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations, where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood-fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected by such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently on the GPU.
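
    The Widom insertion step mentioned above can be illustrated in miniature: insert a test particle at random positions in a periodic box of fixed framework atoms and average the Boltzmann factor of its interaction energy, a quantity proportional to the Henry coefficient. The Lennard-Jones "framework" below is a toy cubic arrangement with made-up parameters, not a real porous material or the paper's GPU grid pipeline.

```python
import math
import random

def widom_boltzmann_average(framework, sigma, eps, box, kT, n_insert=20000, seed=2):
    """Widom test-particle insertion over a periodic box of fixed LJ sites:
    returns <exp(-U/kT)>, which is proportional to the Henry coefficient."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_insert):
        p = [rng.uniform(0.0, box) for _ in range(3)]
        U = 0.0
        for site in framework:
            # minimum-image squared distance
            r2 = sum(((a - b + box / 2) % box - box / 2) ** 2
                     for a, b in zip(p, site))
            r2 = max(r2, 0.25 * sigma * sigma)   # cap deep overlaps (numerical safety)
            s6 = (sigma * sigma / r2) ** 3
            U += 4.0 * eps * (s6 * s6 - s6)      # Lennard-Jones pair energy
        acc += math.exp(-U / kT)
    return acc / n_insert

# Toy "framework": 8 LJ sites on a cube inside a 20 A periodic box (made-up values).
sites = [(i, j, k) for i in (5.0, 15.0) for j in (5.0, 15.0) for k in (5.0, 15.0)]
boltz_avg = widom_boltzmann_average(sites, sigma=3.0, eps=0.5, box=20.0, kT=2.5)
```

    In the paper's pipeline the insertion energies come from the precomputed GPU energy grid restricted to the accessible region, rather than being evaluated per site as here.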

  6. THE ASYMPTOTIC PROPERTIES OF SUPERCRITICAL BISEXUAL GALTON-WATSON BRANCHING PROCESSES WITH IMMIGRATION OF MATING UNITS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this article, the supercritical bisexual Galton-Watson branching process with immigration of mating units is considered. A necessary condition for almost sure convergence and a sufficient condition for L1 convergence are given for the suitably normed process.
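
    To make the object of study concrete, the following sketch simulates a bisexual Galton-Watson process with immigration of mating units, using the common "perfect fidelity" mating function L(f, m) = min(f, m). The offspring distribution, sex ratio and immigration rate are illustrative assumptions, not taken from the article.

```python
import random

def bisexual_gw_immigration(n_gen=30, offspring_pmf=(0.1, 0.2, 0.3, 0.4),
                            immigration=2, z0=5, seed=3):
    """Simulate Z_{n+1} = min(F_{n+1}, M_{n+1}) + I: each mating unit produces
    k offspring with probability offspring_pmf[k]; each offspring is female or
    male with probability 1/2; I mating units immigrate every generation."""
    rng = random.Random(seed)
    units = z0
    history = [units]
    for _ in range(n_gen):
        females = males = 0
        for _ in range(units):
            r, cum, k = rng.random(), 0.0, 0
            for k, p in enumerate(offspring_pmf):
                cum += p
                if r < cum:
                    break
            for _ in range(k):
                if rng.random() < 0.5:
                    females += 1
                else:
                    males += 1
        units = min(females, males) + immigration   # mating units + immigrants
        history.append(units)
    return history

hist = bisexual_gw_immigration()
```

    Because of the immigration term, the number of mating units never falls below the immigration rate, even when mating fails in a generation.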

  7. A Framework for Smart Distribution of Bio-signal Processing Units in M-Health

    NARCIS (Netherlands)

    Mei, Hailiang; Widya, Ing; Broens, Tom; Pawar, Pravin; Halteren, van Aart; Shishkov, Boris; Sinderen, van Marten

    2007-01-01

    This paper introduces the Bio-Signal Processing Unit (BSPU) as a functional component that hosts (part of) the bio-signal information processing algorithms that are needed for an m-health application. With our approach, the BSPUs can be dynamically assigned to available nodes between the bio-signal

  8. Characterization of suspended bacteria from processing units in an advanced drinking water treatment plant of China.

    Science.gov (United States)

    Wang, Feng; Li, Weiying; Zhang, Junpeng; Qi, Wanqi; Zhou, Yanyan; Xiang, Yuan; Shi, Nuo

    2017-05-01

    In drinking water treatment plants (DWTPs), organic pollutant removal has been the primary focus, while suspended bacteria have often been neglected. In this study, the suspended bacteria from each processing unit of a DWTP employing an ozone-biological activated carbon process were characterized using heterotrophic plate counts (HPCs), flow cytometry, and 454 pyrosequencing. The results showed opposite trends of HPC and total cell counts in the sand filtration tank (SFT), where the culturability of suspended bacteria increased to 34%. The culturability level of the other units stayed below 3%, except for the ozone contact tank (OCT, 13.5%) and the activated carbon filtration tank (ACFT, 34.39%). This means that the filtration processes markedly promoted the culturability of suspended bacteria, which indicates biodegrading capability. In the OCT, microbial diversity indexes declined drastically, and the dominant bacteria were affiliated with the Proteobacteria phylum (99.9%) and the Betaproteobacteria class (86.3%), which were also dominant in the effluents of the other units. The primary genus was Limnohabitans in the effluents of both the SFT (17.4%) and the ACFT (25.6%), and it is inferred to be a crucial contributor to the biodegrading function of the filtration units. Overall, this paper provides an overview of the community composition of each processing unit in a DWTP, as well as a reference for better exploiting microbial function in drinking water treatment in the future.

  9. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    Science.gov (United States)

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  10. Module energy rating candidate reference days: Criteria and selection process

    Science.gov (United States)

    Myers, Daryl R.

    1999-03-01

    Presently, the performance of flat-plate photovoltaic (PV) modules is characterized by module power, open-circuit voltage, short-circuit current, or peak-power voltage and current with respect to fixed environmental conditions such as Standard Reporting Conditions (SRC) or nominal operating cell temperature. These reporting conditions represent ideal conditions under which PV module performance (e.g., between manufacturers or technologies) may be compared. They result in overly optimistic expectations of performance by PV industry customers. A recommended practice for PV module energy-rating methodologies that relates PV module performance to real-world conditions is under development. The methodologies will provide system designers and PV customers with an energy rating representative of real-world conditions. The rating methodology reports energy production under five selected types of days representative of possible operational environments. The development and application of qualitative and quantitative criteria for identifying and selecting these representative days from the 30-year U.S. National Solar Radiation Data Base is described.

  11. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we developed a custom-tailored benchmark suite. We analyzed the obtained experimental results regarding: the comparison of execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.
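
    The reduction pattern being optimized can be illustrated by unrolling the classic sequential-addressing tree into ordinary loops (written here in Python for readability; in a CUDA kernel the inner loop is one thread per index, with a barrier between strides).

```python
def block_reduce_sum(shared):
    """Tree reduction with sequential addressing: at each stride, element t
    accumulates element t + stride. On a GPU each inner-loop iteration runs on
    its own thread, separated from the next stride by a barrier; here it is
    executed serially."""
    n = len(shared)                 # assumed to be a power of two
    stride = n // 2
    while stride > 0:
        for t in range(stride):     # independent iterations -> parallelizable
            shared[t] += shared[t + stride]
        stride //= 2
    return shared[0]

data = list(range(1, 9))            # 1..8
total = block_reduce_sum(data[:])   # 36
```

    Sequential addressing keeps active elements contiguous, which on real hardware avoids shared-memory bank conflicts and divergent warps, two of the effects the benchmarks above measure.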

  12. Selection of natural treatment processes for algae removal from stabilisation ponds effluents in Brasilia, using multicriterion methods.

    Science.gov (United States)

    Neder, K D; Carneiro, G A; Queiroz, T R; de Souza, M A A

    2002-01-01

    A multicriterion methodology is used in the evaluation and selection of the most appropriate alternative(s) for removing algae from stabilisation pond effluents in a case study in Brasilia. For this purpose, five different natural treatment processes were tested at pilot scale: rock filter, sand filter, floating aquatic plants, constructed wetlands, and overland flow. These pilot units were constructed in Brasilia and set in parallel, each one receiving a portion of the effluent from an existing wastewater treatment plant composed of preliminary treatment, UASB reactors, and high-rate stabilisation ponds. Several evaluation criteria are used in order to relate the capabilities of the post-treatment processes to the multiple objectives in this case. Two multicriterion decision-aid methods, compromise programming and ELECTRE-III, are used to select the most satisfying processes. The top-ranking alternatives are indicated for subsequent studies, considering the possible implementation of these technologies at existing plants.
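
    Of the two decision-aid methods, compromise programming is the simpler to sketch: normalize each criterion against its ideal and anti-ideal values and rank alternatives by weighted L_p distance to the ideal point. The alternatives, criteria and weights below are invented for illustration and are not the Brasilia study data.

```python
def compromise_rank(matrix, weights, benefit, p=2):
    """Rank alternatives by L_p distance to the ideal point.
    matrix[i][j] scores alternative i on criterion j; benefit[j] is True when
    larger is better. Scores are normalized by the ideal/anti-ideal range."""
    n_crit = len(weights)
    ideal = [max(row[j] for row in matrix) if benefit[j]
             else min(row[j] for row in matrix) for j in range(n_crit)]
    anti = [min(row[j] for row in matrix) if benefit[j]
            else max(row[j] for row in matrix) for j in range(n_crit)]
    dists = []
    for i, row in enumerate(matrix):
        d = 0.0
        for j in range(n_crit):
            span = abs(ideal[j] - anti[j]) or 1.0   # guard degenerate criteria
            d += (weights[j] * abs(ideal[j] - row[j]) / span) ** p
        dists.append((d ** (1.0 / p), i))
    return [i for _, i in sorted(dists)]

# Five hypothetical options scored on cost (minimize) and algae removal % (maximize).
alts = [[1.0, 80], [1.5, 90], [0.8, 60], [2.0, 95], [1.2, 85]]
order = compromise_rank(alts, weights=[0.5, 0.5], benefit=[False, True])
```

    ELECTRE-III instead builds pairwise outranking relations with concordance and discordance thresholds, so the two methods can legitimately disagree on the top-ranking alternative.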

  13. Screening method for solvent selection used in tar removal by the absorption process.

    Science.gov (United States)

    Masurel, Eve; Authier, Olivier; Castel, Christophe; Roizard, Christine

    2015-01-01

    The aim of this paper is to study the treatment of flue gas issued from a fluidized-bed biomass gasification process. The flue gas contains tar, which should be selectively removed from the fuel components of interest (e.g. H2, CO and light hydrocarbons) to avoid condensation and deposits in internal combustion engines. The chosen flue gas treatment is gas-liquid absorption using solvents that present specific physicochemical properties (e.g. solubility, viscosity, volatility, and chemical and thermal stability), in order to optimize the unit against energetic, technico-economic and environmental criteria. The rational choice of the proper solvent is essential for solving the tar issue. The solvents are preselected using the Hansen parameters in order to evaluate tar solubility, and the saturation vapour pressure of each solvent is obtained using the Antoine law. Among the nine families of screened solvents (alcohols, amines, ketones, halogenates, ethers, esters, hydrocarbons, sulphured and chlorinated compounds), methyl esters of acids emerge as solvents of interest. Methyl oleate was then selected and studied further. Experimental vapour-liquid equilibrium data from bubble-point and absorption-cell measurements, together with theoretical results obtained with the UNIFAC-Dortmund model, confirm the high potential of this solvent and the good agreement between experiment and theory.
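
    Both screening quantities have closed forms that are easy to compute: the Hansen distance Ra between solute and solvent, Ra² = 4(δD1−δD2)² + (δP1−δP2)² + (δH1−δH2)², and the Antoine equation log10(P) = A − B/(C + T) for saturation vapour pressure. The sketch below ranks two common solvents against naphthalene as a tar surrogate; the Hansen parameters are approximate handbook-style values used for illustration only, not the paper's screening data.

```python
import math

def hansen_distance(s1, s2):
    """Hansen solubility distance Ra between two (dD, dP, dH) triples (MPa^0.5).
    A smaller Ra suggests better mutual solubility."""
    dD1, dP1, dH1 = s1
    dD2, dP2, dH2 = s2
    return math.sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

def antoine_pressure(A, B, C, T):
    """Saturation vapour pressure from the Antoine equation,
    log10(P) = A - B / (C + T). Units follow the chosen coefficient set."""
    return 10 ** (A - B / (C + T))

# Illustrative screening: naphthalene as a tar surrogate vs. two candidate solvents
# (approximate Hansen parameters in MPa^0.5, for demonstration only).
tar = (19.2, 2.0, 5.9)                                   # naphthalene
candidates = {"toluene": (18.0, 1.4, 2.0),
              "methanol": (14.7, 12.3, 22.3)}
ranked = sorted(candidates, key=lambda k: hansen_distance(tar, candidates[k]))
```

    A full screening would combine the Hansen ranking with a low Antoine vapour pressure at the scrubber temperature, so that the solvent itself does not evaporate into the cleaned gas.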

  14. A Subjective and Objective Process for Athletic Training Student Selection

    Science.gov (United States)

    Hawkins, Jeremy R.; McLoda, Todd A.; Stanek, Justin M.

    2015-01-01

    Context: Admission decisions are made annually concerning whom to accept into athletic training programs. Objective: To present an approach used to make admissions decisions at an undergraduate athletic training program and to corroborate this information by comparing each aspect to nursing program admission processes. Background: Annually,…

  15. Command decoder unit. [performance tests of data processing terminals and data converters for space shuttle orbiters

    Science.gov (United States)

    1976-01-01

    The design and testing of laboratory hardware (a command decoder unit) used in evaluating space shuttle instrumentation, data processing, and ground check-out operations is described. The hardware was a modification of another similar instrumentation system. A data bus coupler was designed and tested to interface the equipment to a central bus controller (computer). A serial digital data transfer mechanism was also designed. Redundant power supplies and overhead modules were provided to minimize the probability of a single component failure causing a catastrophic failure. The command decoder unit is packaged in a modular configuration to allow maximum user flexibility in configuring a system. Test procedures and special test equipment for use in testing the hardware are described. Results indicate that the unit will allow NASA to evaluate future software systems for use in space shuttles. The units were delivered to NASA and appear to be adequately performing their intended function. Engineering sketches and photographs of the command decoder unit are included.

  16. Hierarchical zeolites and their catalytic performance in selective oxidative processes.

    Science.gov (United States)

    Ojeda, Manuel; Grau-Atienza, Aida; Campos, Rafael; Romero, Antonio A; Serrano, Elena; Maria Marinas, Jose; García Martínez, Javier; Luque, Rafael

    2015-04-24

    Hierarchical ZSM-5 zeolites prepared using a simple alkali treatment and subsequent HCl washing are found to exhibit unprecedented catalytic activities in the selective oxidation of benzyl alcohol under microwave irradiation. The metal-free zeolites promote the microwave-assisted oxidation of benzyl alcohol with hydrogen peroxide in yields of 35-45 % after 5 min under mild reaction conditions, as well as the epoxidation of cyclohexene to valuable products (40-60 % conversion). The hierarchically porous systems also exhibited interesting catalytic activity in the dehydration of N,N-dimethylformamide (25-30 % conversion), representing the first example of transition-metal-free catalysts in this reaction.

  17. Optimal selection of intermediate storage tank capacity in a periodic batch/semicontinuous process

    Energy Technology Data Exchange (ETDEWEB)

    Karimi, I.A.; Reklaitis, G.V.

    1983-07-01

    Batch/semicontinuous chemical plants are usually designed either by assuming infinite intermediate storage or by assuming that the units themselves act as storage vessels, while the storage vessels are sized by rules of thumb or experience. In this paper, the case of an intermediate storage vessel which links one upstream batch/semicontinuous unit to one downstream batch/semicontinuous unit is analyzed. The units are assumed to operate with fixed cycle times and capacities. Expressions for determining the minimum storage tank capacity necessary to decouple the two units are derived from a mathematical model of the periodic process. The effects of the relative starting times of the two units on the required storage capacity are determined, suggesting the process timings that minimize it. Application of the results is illustrated by an example.
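The idea can be illustrated numerically: with fixed cycle times and batch sizes, the decoupling capacity is the peak tank inventory over one hyperperiod, and it depends on the relative starting times of the two units. The event-driven sketch below makes simplifying assumptions (instantaneous transfers, integer cycle times, throughput-balanced units) and is not the paper's closed-form expressions:

```python
from math import gcd

def min_storage_capacity(b_up, t_up, b_down, t_down, offset, horizon=None):
    """Peak inventory in a tank charged with b_up units every t_up hours by the
    upstream unit and drained by b_down units every t_down hours downstream,
    with the first withdrawal at `offset` hours. Integer times assumed."""
    if horizon is None:
        horizon = t_up * t_down // gcd(t_up, t_down)  # one hyperperiod
    events = []
    t = 0
    while t < horizon:
        events.append((t, b_up))      # upstream discharge
        t += t_up
    t = offset
    while t < horizon:
        events.append((t, -b_down))   # downstream withdrawal
        t += t_down
    events.sort()
    level = peak = 0
    for _, delta in events:
        level += delta
        peak = max(peak, level)
    return peak

# Example: upstream delivers 10 every 2 h, downstream takes 20 every 4 h,
# first withdrawal 3 h after the first charge.
print(min_storage_capacity(10, 2, 20, 4, 3))   # prints 20
```

Sweeping `offset` over a cycle reproduces the paper's observation that the required capacity is minimized by choosing the relative start times well.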

  18. Activation process in excitable systems with multiple noise sources: Large number of units

    CERN Document Server

    Franović, Igor; Todorović, Kristina; Kostić, Srđan; Burić, Nikola

    2015-01-01

    We study the activation process in large assemblies of type II excitable units whose dynamics is influenced by two independent noise terms. The mean-field approach is applied to explicitly demonstrate that the assembly of excitable units can itself exhibit macroscopic excitable behavior. In order to facilitate the comparison between the excitable dynamics of a single unit and an assembly, we introduce three distinct formulations of the assembly activation event. Each formulation treats different aspects of the relevant phenomena, including the threshold-like behavior and the role of coherence of individual spikes. Statistical properties of the assembly activation process, such as the mean time-to-first pulse and the associated coefficient of variation, are found to be qualitatively analogous for all three formulations, as well as to resemble the results for a single unit. These analogies are shown to derive from the fact that global variables undergo a stochastic bifurcation from the stochastically stable fix...
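The statistics named in this abstract, the mean time-to-first-pulse and its coefficient of variation, can be illustrated with a toy threshold-crossing model: a drift-plus-noise walk standing in for a type II excitable unit. The model and all parameter values below are illustrative only, not the authors' mean-field equations:

```python
import random
import statistics

def first_pulse_time(threshold=1.0, drift=0.05, noise=0.15, dt=0.1, rng=random):
    """Time until a noisy drifting variable first crosses the threshold
    (Euler-Maruyama step of dx = drift*dt + noise*dW)."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(42)
times = [first_pulse_time(rng=rng) for _ in range(200)]
mean_t = statistics.mean(times)                 # mean time-to-first-pulse
cv = statistics.stdev(times) / mean_t           # coefficient of variation
```

A CV near 1 indicates Poisson-like (noise-dominated) activation, while a small CV indicates nearly deterministic pulsing; this is the kind of qualitative comparison the abstract draws between a single unit and the assembly.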

  19. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Abstract Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451

  20. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Science.gov (United States)

    Boubour, Jean; Jenson, Katherine; Richter, Hannah; Yarbrough, Josiah; Oden, Z Maria; Schuler, Douglas A

    2016-01-01

    Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed "the sterile box," is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings.

  1. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Directory of Open Access Journals (Sweden)

    Jean Boubour

    Full Text Available Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed "the sterile box," is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings.

  2. Measuring process performance within healthcare logistics - a decision tool for selecting measuring technologies

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Jacobsen, Peter

    2015-01-01

    Performance measurement can support the organization in improving the efficiency and effectiveness of logistical healthcare processes. Selecting the most suitable technologies is important to ensure data validity. A case study of the hospital cleaning process at a public Danish hospital...

  3. High Power Silicon Carbide (SiC) Power Processing Unit Development

    Science.gov (United States)

    Scheidegger, Robert J.; Santiago, Walter; Bozak, Karin E.; Pinero, Luis R.; Birchenough, Arthur G.

    2015-01-01

    NASA GRC successfully designed, built and tested a technology-push power processing unit for electric propulsion applications that utilizes high voltage silicon carbide (SiC) technology. The development specifically addresses the need for high power electronics to enable electric propulsion systems in the 100s of kilowatts. This unit demonstrated how high voltage combined with superior semiconductor components resulted in exceptional converter performance.

  4. Experience in design and startup of distillation towers in primary crude oil processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Y.N.; D'yakov, V.G.; Mamontov, G.V.; Sheinman, V.A.; Ukhin, V.V.

    1985-11-01

    This paper describes a refinery in the city of Mathura, India, with a capacity of 7 million metric tons of crude per year, designed and constructed to include the following units: AVT for primary crude oil processing; catalytic cracking; visbreaking; asphalt; and other units. A diagram of the atmospheric tower with stripping sections is shown, and the stabilizer tower is illustrated. The startup and operation of the AVT and visbreaking units are described, and they demonstrate the high reliability and efficiency of the equipment.

  5. Program note: applying the UN process indicators for emergency obstetric care to the United States.

    Science.gov (United States)

    Lobis, S; Fry, D; Paxton, A

    2005-02-01

    The United Nations Process Indicators for emergency obstetric care (EmOC) have been used extensively in countries with high maternal mortality ratios (MMR) to assess the availability, utilization and quality of EmOC services. To compare the situation in high MMR countries to that of a low MMR country, data from the United States were used to determine EmOC service availability, utilization and quality. As was expected, the United States was found to have an adequate amount of good-quality EmOC services that are used by the majority of women with life-threatening obstetric complications.

  6. Relative age effect in the selection process of handball players of the regional selection teams

    Directory of Open Access Journals (Sweden)

    Manuel Gómez López

    2017-06-01

    Full Text Available This study analyzed the relative age effect among adolescent handball players of the regional selection teams. Sex and date of birth were recorded for 84 youth players from different regional selection teams in the 2015-2016 season, and comparisons were made using χ2 and Z tests with the Bonferroni correction. Analysis of births by quarter and by half-year revealed no statistically significant differences by gender or category. This suggests that there is no relative age effect in the teams analyzed and that, at the grassroots level of handball, all young people participate regardless of their degree of maturity.
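The χ2 comparison against a uniform birth distribution can be sketched in a few lines; the quarter counts below are hypothetical, since the abstract reports only the total of 84 players:

```python
def chi_square_uniform(counts):
    """Chi-square statistic for observed counts against a uniform expectation."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((o - expected) ** 2 / expected for o in counts)

# Hypothetical birth-quarter counts (Q1..Q4) for 84 players
counts = [25, 22, 20, 17]
stat = chi_square_uniform(counts)
# The critical value for df=3 at alpha=0.05 is about 7.815; a statistic
# below it means uniformity cannot be rejected, i.e. no relative age effect.
print(stat, stat < 7.815)
```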

  7. Five hydrologic and landscape databases for select National Wildlife Refuges in southeastern United States

    Science.gov (United States)

    Buell, Gary R.; Gurley, Laura N.; Calhoun, Daniel L.; Hunt, Alexandria M.

    2017-01-01

    Five hydrologic and landscape databases were developed by the U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, for select National Wildlife Refuges (NWRs) in southeastern United States: (1) the Cahaba River NWR and contributing watersheds in Alabama, (2) the Caloosahatchee and J.N. "Ding" Darling NWRs and contributing watersheds in Florida, (3) the Clarks River NWR and contributing watersheds in Kentucky, Tennessee, and Mississippi, (4) the Lower Suwannee NWR and contributing watersheds in Georgia and Florida, and (5) the Okefenokee NWR and contributing watersheds in Georgia and Florida. The databases were developed as an assessment and evaluation tool to use in examining refuge-specific hydrologic patterns and trends as related to water availability and water quality for refuge ecosystems, habitats, and target species. They include hydrologic time-series data, statistics on landscape and hydrologic time-series data, and hydro-ecological metrics that can be used to assess refuge hydrologic conditions. The databases are described and documented in detail in Open File Report 2017-1018.

  8. Effect of heat processing on selected grain amaranth physicochemical properties.

    Science.gov (United States)

    Muyonga, John H; Andabati, Brian; Ssepuuya, Geoffrey

    2014-01-01

    Grain amaranth is a pseudocereal with unique agricultural, nutritional, and functional properties. This study was undertaken to determine the effect of different heat-processing methods on physicochemical and nutraceutical properties in two main grain amaranth species, Amaranthus hypochondriacus L. and Amaranthus cruentus L. Grains were prepared by roasting and popping, milled and analyzed for changes in in vitro protein digestibility, gruel viscosity, pasting characteristics, antioxidant activity, flavonoids, and total phenolics. In vitro protein digestibility was determined using the pepsin-pancreatin enzyme system. Viscosity and pasting characteristics of samples were determined using a Brookfield Viscometer and a Rapid Visco Analyzer, respectively. The grain methanol extracts were analyzed for phenolics using spectrophotometry, while antioxidant activity was determined using the DPPH (2,2-diphenyl-1-picrylhydrazyl) method. Heat treatment led to a reduction in protein digestibility, the effect being higher in popped than in roasted samples. Viscosities for roasted grain amaranth gruels were significantly higher than those obtained from raw and popped grain amaranth gruels. The results for pasting properties were consistent with the results for viscosity. In both A. hypochondriacus L. and A. cruentus L., the order of the viscosity values was roasted>raw>popped. The viscosities were also generally lower for A. cruentus L. compared to A. hypochondriacus L. Raw samples of both A. hypochondriacus L. and A. cruentus L. did not significantly differ in total phenolic content (TPC), total flavonoid content (TFC), and total antioxidant activity values. Thermal processing led to an increase in TFC and antioxidant activity. However, TPC of heat-processed samples remained unchanged. From the results, it can be concluded that heat treatment enhances antioxidant activity of grain amaranth and causes rheological changes dependent on the nature of heat treatment.

  9. Occurrence of Aflatoxins in Selected Processed Foods from Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Ashrafuzzaman

    2012-07-01

    Full Text Available A total of 125 ready-to-eat processed food samples (70 intended for infant and 55 for adult intake) belonging to 20 different food categories were analyzed for aflatoxin contamination using Reverse Phase High Performance Liquid Chromatography (RP-HPLC) with fluorescent detection. A solvent mixture of acetonitrile-water was used for the extraction, followed by immunoaffinity clean-up to enhance the sensitivity of the method. The limit of detection (LOD) (0.01–0.02 ng·g−1) and limit of quantification (LOQ) (0.02 ng·g−1) were established for aflatoxins based on signal-to-noise ratios of 3:1 and 10:1, respectively. Of the processed food samples tested, 38% were contaminated with four types of aflatoxins, i.e., AFB1 (0.02–1.24 μg·kg−1), AFB2 (0.02–0.37 μg·kg−1), AFG1 (0.25–2.7 μg·kg−1) and AFG2 (0.21–1.3 μg·kg−1). In addition, the results showed that 21% of the processed foods intended for infants contained AFB1 levels higher than the European Union permissible limit (0.1 μg·kg−1), while all of those intended for adult consumption had aflatoxin contamination levels within the permitted limits.
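The signal-to-noise-based detection limits mentioned above follow the usual 3:1 (LOD) and 10:1 (LOQ) conventions. A sketch with hypothetical baseline-noise and calibration-slope values (not the paper's data):

```python
def detection_limits(noise_sd, slope):
    """LOD and LOQ in concentration units from the baseline noise level
    and the calibration slope, using the S/N = 3 and S/N = 10 conventions."""
    lod = 3 * noise_sd / slope
    loq = 10 * noise_sd / slope
    return lod, loq

# Hypothetical: baseline noise of 0.6 area units, slope of 100 area units per ng/g
lod, loq = detection_limits(0.6, 100.0)   # -> 0.018 ng/g, 0.06 ng/g
```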

  10. Selected studies in HTGR reprocessing development. [KA2C process

    Energy Technology Data Exchange (ETDEWEB)

    Notz, K.J.

    1976-03-01

    Recent work at ORNL on hot cell studies, off-gas cleanup, and waste handling is reviewed. The work includes small-scale burning tests with irradiated fuels to study fission product release, development of the KALC process for the removal of /sup 85/Kr from a CO/sub 2/ stream, preliminary work on a nonfluidized bed burner, solvent extraction studies including computer modeling, characterization of reprocessing wastes, and initiation of a development program for the fixation of /sup 14/C as CaCO/sub 3/. (auth)

  11. The influence of selected parameters on the efficiency and economic characteristics of the oxy-type coal unit with a membrane-cryogenic oxygen separator

    Directory of Open Access Journals (Sweden)

    Kotowicz Janusz

    2016-03-01

    Full Text Available In this paper, a 600 MW oxy-type coal unit with a pulverized bed boiler, a membrane-cryogenic oxygen separator, and a carbon capture installation was analyzed. The membrane-cryogenic oxygen separation installation consists of a membrane module and two cryogenic distillation columns and produces oxygen with a purity of 95%. The carbon capture installation is based on a physical separation method and reduces the CO2 emissions by 90%. The influence of the main parameter of the membrane process, the selectivity coefficient, on the efficiency of the coal unit is presented. An economic analysis using the break-even point method was carried out; the economic calculations determine the break-even price of electricity as a function of the coal unit's availability.
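A break-even electricity price of the kind computed in the paper is, in its simplest form, annualized costs divided by annual generation. The sketch below uses this simple levelized form with hypothetical placeholder figures, not the paper's data:

```python
def break_even_price(capital_cost, annual_fixed_cost, fuel_cost_per_mwh,
                     capacity_mw, availability, crf):
    """Electricity price (per MWh) at which annual revenue exactly covers
    annualized capital plus fixed and fuel operating costs."""
    annual_mwh = capacity_mw * 8760 * availability
    annual_capital = capital_cost * crf   # capital recovery factor annualization
    return (annual_capital + annual_fixed_cost) / annual_mwh + fuel_cost_per_mwh

# Hypothetical 600 MW oxy-combustion unit
price = break_even_price(
    capital_cost=2.4e9,       # currency units, placeholder
    annual_fixed_cost=6.0e7,
    fuel_cost_per_mwh=25.0,
    capacity_mw=600.0,
    availability=0.85,
    crf=0.08,                 # ~30-year life at ~7% discount rate
)
```

Varying `availability` in this formula reproduces the qualitative dependence the paper studies: lower availability spreads the fixed charges over fewer megawatt-hours and raises the break-even price.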

  12. Emergence of Colistin Resistance in Enterobacteriaceae after the Introduction of Selective Digestive Tract Decontamination in an Intensive Care Unit

    OpenAIRE

    Halaby, Teysir; al Naiemi, Nashwan; Kluytmans, Jan; van der Palen, Job; Vandenbroucke-Grauls, Christina M. J. E.

    2013-01-01

    Selective decontamination of the digestive tract (SDD) selectively eradicates aerobic Gram-negative bacteria (AGNB) by the enteral administration of oral nonabsorbable antimicrobial agents, i.e., colistin and tobramycin. We retrospectively investigated the impact of SDD, applied for 5 years as part of an infection control program for the control of an outbreak with extended-spectrum beta-lactamase (ESBL)-producing Klebsiella pneumoniae in an intensive care unit (ICU), on resistance among AGNB...

  13. United States Marine Corps Career Designation Board: Significant Factors in Predicting Selection

    Science.gov (United States)

    2014-03-01

    Abbreviations (from the thesis front matter): Total Force Data Warehouse; USMC, United States Marine Corps; USNA, United States Naval Academy; WTI, Weapons and Tactics Instructor; XO, Executive Officer. Abstract fragment: …Weapons and Tactics Instructor (WTI), Professional Military Education (PME) complete, and Special Education/Advanced Degree Programs' graduates had a…

  14. Modeling intermediate product selection under production and storage capacity limitations in food processing

    DEFF Research Database (Denmark)

    Kilic, Onur Alper; Akkerman, Renzo; Grunow, Martin

    2009-01-01

    …and processing costs are minimized. However, this product selection process is bound by production and storage capacity limitations, such as the number and size of storage tanks or silos. In this paper, we present a mathematical programming approach that combines decision making on product selection…

  15. 25 CFR 122.5 - Selection/nomination process for committee members.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Selection/nomination process for committee members. 122.5... OF OSAGE JUDGMENT FUNDS FOR EDUCATION § 122.5 Selection/nomination process for committee members. (a... include a brief statement of interest and qualifications for serving on the committee. (b) Nominations...

  16. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Krichinsky, A.M.

    1983-02-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate the composition of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are cataloged in container-oriented files. The estimated compositions represent materials collected in the applicable vessels, including materials previously acknowledged in those vessels. The program uses process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.
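The core of such a holdup estimator is a running material balance per vessel: inventory equals the initial acknowledged amount plus logged transfers in, minus transfers out. A minimal sketch; the transfer figures are invented for illustration, and NUMATH's performance models for in-process distribution are omitted:

```python
def vessel_holdup(inflows, outflows, initial_inventory=0.0):
    """Running inventory (holdup) estimate for one process vessel from
    cumulative transfer measurements, as a simple material balance."""
    return initial_inventory + sum(inflows) - sum(outflows)

# Hypothetical uranium transfers (kg) logged for one vessel
est = vessel_holdup(inflows=[12.4, 11.9, 12.1],
                    outflows=[11.8, 12.0],
                    initial_inventory=0.5)   # previously acknowledged heel
```

In practice each such estimate would be held in a container-oriented record until a representative sample and chemical analysis replace it.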

  17. Selective imitation impairments differentially interact with language processing.

    Science.gov (United States)

    Mengotti, Paola; Corradi-Dell'Acqua, Corrado; Negri, Gioia A L; Ukmar, Maja; Pesavento, Valentina; Rumiati, Raffaella I

    2013-08-01

    Whether motor and linguistic representations of actions share common neural structures has recently been the focus of an animated debate in cognitive neuroscience. Group studies with brain-damaged patients reported association patterns of praxic and linguistic deficits whereas single case studies documented double dissociations between the correct execution of gestures and their comprehension in verbal contexts. When the relationship between language and imitation was investigated, each ability was analysed as a unique process without distinguishing between possible subprocesses. However, recent cognitive models can be successfully used to account for these inconsistencies in the extant literature. In the present study, in 57 patients with left brain damage, we tested whether a deficit at imitating either meaningful or meaningless gestures differentially impinges on three distinct linguistic abilities (comprehension, naming and repetition). Based on the dual-pathway models, we predicted that praxic and linguistic performance would be associated when meaningful gestures are processed, and would dissociate for meaningless gestures. We used partial correlations to assess the association between patients' scores while accounting for potential confounding effects of aspecific factors such as age, education and lesion size. We found that imitation of meaningful gestures significantly correlated with patients' performance on naming and repetition (but not on comprehension). This was not the case for the imitation of meaningless gestures. Moreover, voxel-based lesion-symptom mapping analysis revealed that damage to the angular gyrus specifically affected imitation of meaningless gestures, independent of patients' performance on linguistic tests. Instead, damage to the supramarginal gyrus affected not only imitation of meaningful gestures, but also patients' performance on naming and repetition. Our findings clarify the apparent conflict between associations and dissociations…

  18. Selective Hydrogen Transfer Reaction in FCC Process:Characterization and Application

    Institute of Scientific and Technical Information of China (English)

    Chen Beiyan; He Mingyuan; Da Zhijian

    2003-01-01

    The product distribution and gasoline quality of the FCC process, especially the olefin content, depend heavily on the catalyst's selectivity for selective versus non-selective hydrogen transfer reactions. A reliable experimental protocol has been established using n-dodecane as a probe molecule to characterize the selective hydrogen transfer ability of catalytic materials. The results obtained have been correlated with the performance of practical catalysts.

  19. 100-OL-1 Operable Unit Pilot Study: XRF Evaluation of Select Pre-Hanford Orchards

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, Amoret L.; Fritz, Brad G.; Pulsipher, Brent A.; Gorton, Alicia M.; Bisping, Lynn E.; Brandenberger, Jill M.; Pino, Christian; Martinez, Dominique M.; Rana, Komal; Wellman, Dawn M.

    2014-11-20

    Based on the results of the pilot study, the recommendations for the revision of the work plan are as follows: • characterize the surface soil using field-portable XRF measurements with confirmatory inductively coupled plasma mass spectrometry sampling for the remedial investigation • establish decision units of similar defined areas • establish a process for field investigation of soil concentrations exceeding the screening criteria at the border of the 100-OL-1 OU • define data quality objectives for the work plan using the results of the pilot study and refine the sampling approach for the remedial investigation.

  20. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Unlike the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is compatible across hardware platforms. It runs on GPUs from vendors including NVIDIA, AMD and Intel, paving the way for fast micromagnetic simulation on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics. A copy of the simulator software is publicly available.

  1. Fast extended focused imaging in digital holography using a graphics processing unit.

    Science.gov (United States)

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm named fast Fourier transform pruning with frequency shift. Then the pixel size consistency problem is solved by coordinate transformation and combining the subpixel resampling and the fast Fourier transform pruning with frequency shift. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit codes.
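The Fresnel transform at the heart of this reconstruction can be written as a single FFT pair with a quadratic-phase transfer function. A plain NumPy sketch of this angular-spectrum form (without the FFT pruning, frequency shift, or subpixel resampling the paper adds; the sign convention is one common choice):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a square complex field a distance z using the
    transfer-function form of the Fresnel transform (one FFT pair)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function H = exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example: a small square aperture propagated 5 cm at 633 nm
n = 64
aperture = np.zeros((n, n), dtype=complex)
aperture[28:36, 28:36] = 1.0
out = fresnel_propagate(aperture, wavelength=633e-9, dx=10e-6, z=0.05)
```

Because the transfer function is unitary, the pixel grid and total energy are preserved, which is why the paper needs the separate coordinate-transformation step to reconcile pixel sizes across reconstruction distances.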

  2. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
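The underlying CGH computation accumulates, at every hologram pixel, the Fresnel-approximated spherical wave from each object point; this O(pixels × points) cost is what motivates the GPU cluster. A small NumPy sketch of the serial reference computation (geometry and parameters are illustrative, not the paper's setup):

```python
import numpy as np

def point_cloud_cgh(points, amplitudes, nx, ny, pitch, wavelength):
    """Phase-only hologram from a 3D point cloud by summing the
    Fresnel-approximated spherical wave of each object point."""
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        # paraxial distance from the object point to every hologram pixel
        r = pz + ((X - px) ** 2 + (Y - py) ** 2) / (2 * pz)
        field += a * np.exp(1j * k * r)
    return np.angle(field)   # keep only the phase

# Toy object: three points at slightly different depths (metres)
pts = [(0.0, 0.0, 0.1), (1e-4, 0.0, 0.12), (0.0, -1e-4, 0.11)]
phase = point_cloud_cgh(pts, [1.0, 1.0, 1.0],
                        nx=128, ny=128, pitch=8e-6, wavelength=532e-9)
```

Every pixel is independent of every other, so the sum parallelizes trivially across GPU threads; distributing disjoint pixel tiles across the cluster nodes gives the further speedup reported above.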

  3. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
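The first GPU-accelerated step, the spatial filter, is simply a channels × channels matrix applied to each block of samples, which is why it maps so well onto a GPU's parallel matrix hardware. A sketch using a common-average-reference filter as the example matrix (the study's actual filter coefficients are not given in this abstract):

```python
import numpy as np

def car_filter(n_channels):
    """Common-average-reference spatial filter as an n x n matrix, so that
    filtering a (channels x samples) block is a single matrix multiply."""
    return np.eye(n_channels) - np.full((n_channels, n_channels), 1.0 / n_channels)

# 8 channels x 250 ms at 1 kHz sampling = 250 samples per block
block = np.random.default_rng(0).standard_normal((8, 250))
filtered = car_filter(8) @ block   # subtracts the cross-channel mean per sample
```

The subsequent autoregressive spectral estimate is likewise independent per channel, so both stages batch naturally into the kind of parallel kernels the study offloads with CUDA.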

  4. Substantiation of the cogeneration turbine unit selection for reconstruction of power units with a T-250/300-23.5 turbine

    Science.gov (United States)

    Valamin, A. E.; Kultyshev, A. Yu.; Shibaev, T. L.; Gol'dberg, A. A.; Sakhnin, Yu. A.; Stepanov, M. Yu.; Bilan, V. N.; Kadkina, I. V.

    2016-11-01

    The selection of a cogeneration steam turbine unit (STU) for the reconstruction of power units with a T-250/300-23.5 turbine is substantiated by the example of power unit no. 9 at the cogeneration power station no. 22 (TETs-22) of Mosenergo Company. Series T-250 steam turbines have been developed for combined heat and power generation. A total of 31 turbines were manufactured. By the end of 2015, the total operation time of prototype power units with the T-250/300-23.5 turbine exceeded 290000 hours. Considering the expiry of the service life, the decision was made that the reconstruction of the power unit at st. no. 9 of TETs-22 should be the first priority. The main issues that arose in developing this project—the customer's requirements and the request for the reconstruction, the view on certain problems of Ural Turbine Works (UTZ) as the manufacturer of the main power unit equipment, and the opinions of other project parties—are examined. The decisions were made with account taken of the experience in operation of all Series T-250 turbines and the results of long-term discussions of pressing problems at scientific and technical councils, meetings, and negotiations. For the new power unit, the following parameters have been set: a live steam pressure of 23.5 MPa and live steam/reheat temperature of 565/565°C. Considering that the boiler equipment will be upgraded, the live steam flow is increased up to 1030 t/h. The reconstruction activities involving the replacement of the existing turbine with a new one will yield a service life of 250000 hours for turbine parts exposed to a temperature of 450°C or higher and 200000 hours for pipeline components. Hence, the decision has been made to reuse the arrangement of the existing turbine: a four-cylinder turbine unit comprising a high-pressure cylinder (HPC), two intermediate pressure cylinders (IPC-1 & 2), and a low-pressure cylinder (LPC). The flow path in the new turbine will have active blading in LPC and IPC-1

  5. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    Science.gov (United States)

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
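
    The batching strategy described — assigning each GPU a near-equal share of A-scans, processed in batches capped at a per-device optimum — can be sketched as a simple scheduler. Function and parameter names are hypothetical; the real optimal batch size depends on each GPU's memory and core throughput:

```python
def assign_ascans(n_ascans, n_gpus, max_batch):
    """Split A-scan indices into near-equal per-GPU shares, then into
    batches of at most max_batch A-scans each.
    Returns (gpu_id, start, stop) tuples covering [0, n_ascans)."""
    per_gpu = -(-n_ascans // n_gpus)       # ceiling division
    tasks = []
    for g in range(n_gpus):
        start = g * per_gpu
        stop = min(start + per_gpu, n_ascans)
        for b in range(start, stop, max_batch):
            tasks.append((g, b, min(b + max_batch, stop)))
    return tasks

# e.g. a 1000x1000 A-scan volume split across 4 GPUs, 300 A-scans per batch
tasks = assign_ascans(1000, 4, 300)
```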

  7. Process and structure: resource management and the development of sub-unit organisational structure.

    Science.gov (United States)

    Packwood, T; Keen, J; Buxton, M

    1992-03-01

    Resource Management (RM) requires hospital units to manage their work in new ways, and the new management processes affect, and are affected by, organisation structure. This paper is concerned with these effects, reporting on the basis of a three-year evaluation of the national RM experiment that was commissioned by the DH. After briefly indicating some of the major characteristics of the RM process, the two main types of unit structures existing in the pilot sites at the beginning of the experiment, unit disciplinary structure and clinical directorates, are analysed. At the end of the experiment, while clinical directorates had become more popular, another variant, clinical grouping, had replaced the unit disciplinary structure. Both types of structure represent a movement towards sub-unit organisation, bringing the work and interests of the service providers and unit managers closer together. Their properties are likewise analysed and their implications, particularly in terms of training and organisational development (OD), are then considered. The paper concludes by considering the causes for these structural changes, which, in the immediate time-scale, appear to owe as much to the NHS Review as to RM.

  8. [Applying graphics processing unit in real-time signal processing and visualization of ophthalmic Fourier-domain OCT system].

    Science.gov (United States)

    Liu, Qiaoyan; Li, Yuejie; Xu, Qiujing; Zhao, Jincheng; Wang, Liwei; Gao, Yonghe

    2013-01-01

    This investigation introduces GPU (Graphics Processing Unit)-based CUDA (Compute Unified Device Architecture) technology into the signal processing of an ophthalmic FD-OCT (Fourier-Domain Optical Coherence Tomography) imaging system to realize parallel data processing, using CUDA to optimize the relevant operations and algorithms and thereby remove the technical bottlenecks that currently limit real-time ophthalmic OCT imaging. Laboratory results showed that, with the GPU as a general-purpose parallel processor, data processing in the GPU+CPU mode is dozens of times faster than traditional serial computing and imaging on a CPU-only platform executing the same workload, which meets the clinical requirements for two-dimensional real-time imaging.
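
    The FD-OCT processing step that benefits most from GPU parallelism is an independent Fourier transform per spectral interferogram. A minimal NumPy sketch of that core step (background subtraction plus inverse FFT; spectral resampling and dispersion compensation, which a real system also performs, are omitted here):

```python
import numpy as np

def fdoct_ascans(spectra, background):
    """Illustrative FD-OCT pipeline: subtract the reference background,
    then inverse-FFT each spectral interferogram into a depth profile.
    spectra: (n_ascans, n_pixels) array of spectrometer readouts."""
    fringes = spectra - background
    depth = np.fft.ifft(fringes, axis=-1)
    return np.abs(depth[..., : spectra.shape[-1] // 2])  # keep positive depths

rng = np.random.default_rng(1)
spec = rng.random((4, 1024))              # 4 hypothetical A-scans
img = fdoct_ascans(spec, spec.mean(axis=0))
```

Because every A-scan is transformed independently, thousands of these FFTs can run concurrently on GPU cores, which is the parallelism the CUDA implementation exploits.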

  9. Telecommunications Research in the United States and Selected Foreign Countries: A Preliminary Survey. Volume II, Individual Contributions.

    Science.gov (United States)

    National Academy of Engineering, Washington, DC. Committee on Telecommunications.

    At the request of the National Science Foundation, the Panel on Telecommunications Research of the Committee on Telecommunications of the National Academy of Engineering has made a preliminary survey of the status and trends of telecommunications research in the United States and selected foreign countries. The status and trends were identified by…

  10. The Role of Music in Speech Intelligibility of Learners with Post Lingual Hearing Impairment in Selected Units in Lusaka District

    Science.gov (United States)

    Katongo, Emily Mwamba; Ndhlovu, Daniel

    2015-01-01

    This study sought to establish the role of music in speech intelligibility of learners with Post Lingual Hearing Impairment (PLHI) and strategies teachers used to enhance speech intelligibility in learners with PLHI in selected special units for the deaf in Lusaka district. The study used a descriptive research design. Qualitative and quantitative…

  11. International Migration, Self-Selection, and the Distribution of Wages: Evidence from Mexico and the United States.

    Science.gov (United States)

    Chiquiar, Daniel; Hanson, Gordon H.

    2005-01-01

    We use the 1990 and 2000 Mexican and U.S. population censuses to test Borjas's negative-selection hypothesis that the less skilled are those most likely to migrate from countries with high skill premia/earnings inequality to countries with low skill premia/earnings inequality. We find that Mexican immigrants in the United States are more educated…

  12. Site selection process for new nuclear power plants - a method to support decision making and improving public participation

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Vivian B.; Cunha, Tatiana S. da; Simoes Filho, Francisco Fernando Lamego, E-mail: vbmartins@ien.gov.br, E-mail: flamego@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Laboratorio de Impactos Ambientais; Lapa, Celso Marcelo F., E-mail: lapa@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Programa de Pos-Graduacao em Ciencia e Tecnologia Nucleares

    2011-07-01

    The Brazilian Energy Plan (PNE 2030), which guides the Government in formulating its strategy for expanding energy supply by 2030, highlights the need for the Brazilian electrical system to have more than 4,000 MW from nuclear sources by 2025. Accordingly, the Government presented a proposal to build four more nuclear power plants with a capacity of 1,000 MW each: initially, two in the Northeast and two in the Southeast. Site selection and assessment are key parts of the installation process of a nuclear plant and may significantly affect the cost, public acceptance, and safety of the facility during its entire life cycle; the outcome of this initial stage can seriously affect the success of the program. Wrong decisions in the site selection process may also require a larger financial commitment than planned in a later phase of the project, besides causing extensive and expensive downtime. Selecting the location where these units will be built is not a trivial process, because it involves weighing multiple criteria and judgments, in addition to obtaining, organizing, and managing a diverse range of qualitative and quantitative data, to assist in decision making and to ensure that the selected site is the most appropriate with respect to safety and to technical, economic, and environmental feasibility. This paper presents an overview of the site selection process and its stages, the criteria involved in each step, the decision-support tools that can be used, and the difficulties in applying a formal decision-making process. Also discussed are ways to make the process more transparent and democratic, increasing public involvement so as to improve acceptance and reduce opposition from various sectors of society, while trying to minimize the expense and time involved in implementing undertakings of this kind. (author)
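
    One common decision-support step in multi-criteria site selection of this kind is a weighted-sum screening of candidate sites. A minimal sketch with hypothetical sites, criteria, and weights (the paper does not prescribe this particular method; it is one of several tools such a process can use):

```python
def weighted_site_scores(scores, weights):
    """Weighted-sum screening: each candidate site gets the sum of its
    normalized criterion scores multiplied by the criterion weights."""
    return {site: sum(weights[c] * v for c, v in crit.items())
            for site, crit in scores.items()}

# hypothetical candidate sites with normalized (0-1) criterion scores
sites = {
    'A': {'safety': 0.9, 'cost': 0.4, 'environment': 0.7},
    'B': {'safety': 0.6, 'cost': 0.8, 'environment': 0.5},
}
weights = {'safety': 0.5, 'cost': 0.2, 'environment': 0.3}
ranked = weighted_site_scores(sites, weights)
```

In practice the weights themselves are the contentious input, which is where transparent stakeholder participation enters the process.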

  13. Category-selective attention modulates unconscious processes in the middle occipital gyrus.

    Science.gov (United States)

    Tu, Shen; Qiu, Jiang; Martens, Ulla; Zhang, Qinglin

    2013-06-01

    Many studies have revealed top-down modulation (spatial attention, attentional load, etc.) of unconscious processing. However, there is little research on how category-selective attention modulates unconscious processing. In the present study, using functional magnetic resonance imaging (fMRI), the results showed that category-selective attention modulated unconscious face/tool processing in the middle occipital gyrus (MOG). Interestingly, the MOG effects were of opposite direction for face and tool processing. During unconscious face processing, activation in MOG decreased under face-selective attention compared with tool-selective attention. This result is in line with predictive coding theory. During unconscious tool processing, however, activation in MOG increased under tool-selective attention compared with face-selective attention. The different effects might be ascribed to an interaction between top-down category-selective processes and bottom-up processes at the partial awareness level, as proposed by Kouider, De Gardelle, Sackur, and Dupoux (2010). Specifically, we propose an "excessive activation" hypothesis.

  14. Retail Deli Slicer Cleaning Frequency--Six Selected Sites, United States, 2012.

    Science.gov (United States)

    Brown, Laura G; Hoover, E Rickamer; Ripley, Danny; Matis, Bailey; Nicholas, David; Hedeen, Nicole; Faw, Brenda

    2016-04-01

    Listeria monocytogenes (Listeria) causes the third highest number of foodborne illness deaths (an estimated 255) in the United States annually, after nontyphoidal Salmonella species and Toxoplasma gondii (1). Deli meats are a major source of listeriosis illnesses, and meats sliced and packaged at retail delis are the major source of listeriosis illnesses attributed to deli meat (4). Mechanical slicers pose cross-contamination risks in delis and are an important source of Listeria cross-contamination. Reducing Listeria contamination of sliced meats in delis will likely reduce Listeria illnesses and outbreaks. Good slicer cleaning practices can reduce this foodborne illness risk. CDC's Environmental Health Specialists Network (EHS-Net) studied how often retail deli slicers were fully cleaned (disassembled, cleaned, and sanitized) at the Food and Drug Administration (FDA) Food Code-specified minimum frequency of every 4 hours and examined deli and staff characteristics related to slicer cleaning frequency. Interviews with staff members in 298 randomly-selected delis in six EHS-Net sites showed that approximately half of delis fully cleaned their slicers less often than FDA's specified minimum frequency. Chain-owned delis and delis with more customers, more slicers, required manager food safety training, food safety-knowledgeable workers, written slicer-cleaning policies, and food safety-certified managers fully cleaned their slicers more frequently than did other types of delis, according to deli managers or workers. States and localities should require deli manager training and certification, as specified in the FDA Food Code. They should also consider encouraging or requiring delis to have written slicer-cleaning policies. Retail food industry leaders can also implement these prevention efforts to reduce risk in their establishments. Because independent and smaller delis had lower frequencies of slicer cleaning, prevention efforts should focus on these types of

  15. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    Science.gov (United States)

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by CCD camera. Due to the heavy computation required for speckle contrast analysis, video frame rate visualization of blood flow which is essentially important for medical usage is hardly achieved for the high-resolution image data by using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduced GPU (Graphics Processing Unit) into our data processing framework of laser speckle contrast imaging to achieve fast and high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using GPU, a 12-60 fold performance enhancement is obtained in comparison to the optimized CPU implementations.
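
    The computation that LASCA parallelizes is a per-window contrast map K = σ/μ over the raw speckle image. A NumPy sketch using integral-image box sums (the window size and padding mode are illustrative choices, not taken from the paper):

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Spatial laser speckle contrast K = sigma/mean over a win x win
    sliding window, computed with integral-image box sums -- the
    per-window reduction that a GPU evaluates for all pixels at once."""
    img = img.astype(np.float64)
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')

    def box(a):
        # summed-area table with a zero top row/left column prefix
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return (c[win:, win:] - c[:-win, win:]
                - c[win:, :-win] + c[:-win, :-win])

    n = win * win
    mean = box(p) / n
    var = box(p ** 2) / n - mean ** 2
    return np.sqrt(np.clip(var, 0, None)) / np.clip(mean, 1e-12, None)

K = speckle_contrast(np.random.default_rng(2).random((32, 32)))
```

Every output pixel depends only on its own window, so the map is embarrassingly parallel; the GPU version in the paper exploits exactly this independence.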

  16. Liquid phase methanol LaPorte process development unit: Modification, operation, and support studies

    Energy Technology Data Exchange (ETDEWEB)

    1991-02-02

    The primary focus of this Process Development Unit operating program was to prepare for a confident move to the next scale of operation with a simplified and optimized process. The main purpose of these runs was the evaluation of the alternate commercial catalyst (F21/0E75-43) that had been identified in the laboratory under a different subtask of the program. If the catalyst proved superior to the previous catalyst, then the evaluation run would be continued into a 120-day life run. Also, minor changes were made to the Process Development Unit system to improve operations and reliability. The damaged reactor demister from a previous run was replaced, and a new demister was installed in the intermediate V/L separator. The internal heat exchanger was equipped with an expansion loop to relieve thermal stresses so operation at higher catalyst loadings and gas velocities would be possible. These aggressive conditions are important for improving process economics. (VC)

  17. Field-trip guides to selected volcanoes and volcanic landscapes of the western United States

    Science.gov (United States)

    ,

    2017-06-23

    The North American Cordillera is home to a greater diversity of volcanic provinces than any comparably sized region in the world. The interplay between changing plate-margin interactions, tectonic complexity, intra-crustal magma differentiation, and mantle melting has resulted in a wealth of volcanic landscapes. Field trips in this guide book collection (published as USGS Scientific Investigations Report 2017–5022) visit many of these landscapes, including (1) active subduction-related arc volcanoes in the Cascade Range; (2) flood basalts of the Columbia Plateau; (3) bimodal volcanism of the Snake River Plain-Yellowstone volcanic system; (4) some of the world’s largest known ignimbrites from southern Utah, central Colorado, and northern Nevada; (5) extension-related volcanism in the Rio Grande Rift and Basin and Range Province; and (6) the eastern Sierra Nevada featuring Long Valley Caldera and the iconic Bishop Tuff. Some of the field trips focus on volcanic eruptive and emplacement processes, calling attention to the fact that the western United States provides opportunities to examine a wide range of volcanological phenomena at many scales. The 2017 Scientific Assembly of the International Association of Volcanology and Chemistry of the Earth’s Interior (IAVCEI) in Portland, Oregon, was the impetus to update field guides for many of the volcanoes in the Cascades Arc, as well as publish new guides for numerous volcanic provinces and features of the North American Cordillera. This collection of guidebooks summarizes decades of advances in understanding of magmatic and tectonic processes of volcanic western North America. These field guides are intended for future generations of scientists and the general public as introductions to these fascinating areas; the hope is that the general public will be enticed toward further exploration and that scientists will pursue further field-based research.

  18. [Work process of the nurse who works in child care in family health units].

    Science.gov (United States)

    de Assis, Wesley Dantas; Collet, Neusa; Reichert, Altamira Pereira da Silva; de Sá, Lenilde Duarte

    2011-01-01

    This is a qualitative study whose purpose was to analyze the work process of nurses in child care actions in family health units. Nurses were the subjects, and empirical data were obtained through participant observation and interviews. Data analysis followed the fundamentals of thematic analysis. Results reveal that the organization of the nurses' work process remains centered on procedures, offering assistance based on client illness, which creates obstacles to well-child care (puericulture) practice in primary health care.

  19. 75 FR 74005 - Fisheries of the Northeastern United States; Monkfish Fishery; Scoping Process

    Science.gov (United States)

    2010-11-30

    ... National Oceanic and Atmospheric Administration RIN 0648-BA50 Fisheries of the Northeastern United States; Monkfish Fishery; Scoping Process AGENCY: National Marine Fisheries Service (NMFS), National Oceanic and... statement (EIS) and scoping meetings; request for comments. SUMMARY: The New England Fishery...

  20. ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS - THE THERMAL DESORPTION UNIT - APPLICATIONS ANALYSIS REPORT

    Science.gov (United States)

    ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...

  1. Process methods and levels of automation of wood pallet repair in the United States

    Science.gov (United States)

    Jonghun Park; Laszlo Horvath; Robert J. Bush

    2016-01-01

    This study documented the current status of wood pallet repair in the United States by identifying the types of processing and equipment used in repair operations from an automation perspective. The wood pallet repair firms included in the study received an average of approximately 1.28 million cores (i.e., used pallets) for recovery in 2012. A majority of the cores...

  2. Catalyzed steam gasification of biomass. Phase 3: Biomass Process Development Unit (PDU) construction and initial operation

    Science.gov (United States)

    Healey, J. J.; Hooverman, R. H.

    1981-12-01

    The design and construction of the process development unit (PDU) are described in detail, examining each system and component in order. Siting, the chip handling system, the reactor feed system, the reactor, the screw conveyor, the ash dump system, the PDU support equipment, control and information management, and shakedown runs are described.

  3. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) power supply for the Power Processing Unit (PPU) of...

  4. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Science.gov (United States)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  5. Liquid phase methanol LaPorte process development unit: Modification, operation, and support studies

    Energy Technology Data Exchange (ETDEWEB)

    1991-02-02

    This report consists of Detailed Data Acquisition Sheets for Runs E-6 and E-7 for Task 2.2 of the Modification, Operation, and Support Studies of the Liquid Phase Methanol Laporte Process Development Unit. (Task 2.2: Alternate Catalyst Run E-6 and Catalyst Activity Maintenance Run E-7).

  6. Sodium content of popular commercially processed and restaurant foods in the United States

    Science.gov (United States)

    Nutrient Data Laboratory (NDL) of the U.S. Department of Agriculture (USDA) in close collaboration with U.S. Center for Disease Control and Prevention is monitoring the sodium content of commercially processed and restaurant foods in the United States. The main purpose of this manuscript is to prov...

  7. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NARCIS (Netherlands)

    Hidalgo, R.C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.; Yu, A.; Dong, K.; Yang, R.; Luding, S.

    2013-01-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybr

  8. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    Science.gov (United States)

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study.
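
    The bioheat simulation being accelerated is, at each grid node, a local finite-difference update: exactly the kind of independent per-node work a GPU parallelizes. A simplified 2-D explicit sketch of a Pennes-type update step (the coefficients, grid, and periodic boundaries via np.roll are illustrative, not the study's cryosurgery model, which also handles phase change):

```python
import numpy as np

def pennes_step(T, dt, dx, alpha=1.3e-7, w=4e-4, T_blood=37.0):
    """One explicit finite-difference step of a simplified 2-D Pennes
    bioheat equation: dT/dt = alpha * laplacian(T) + w * (T_blood - T).
    np.roll gives periodic boundaries, acceptable for a small sketch."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    return T + dt * (alpha * lap + w * (T_blood - T))

T = np.full((16, 16), 37.0)   # tissue at body temperature, deg C
T[8, 8] = -40.0               # hypothetical cryoprobe node
T1 = pennes_step(T, dt=0.1, dx=1e-3)
```

Each node's update reads only its four neighbors, so all nodes can be updated concurrently; this data-parallel stencil is what maps onto GPU threads in such simulations.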

  9. Process engineering and mechanical design reports. Volume III. Preliminary design and assessment of a 12,500 BPD coal-to-methanol-to-gasoline plant. [Grace C-M-G Plant, Henderson County, Kentucky; Units 26, 27, 31 through 34, 36 through 39

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, R. M.

    1982-08-01

    Various unit processes are considered, covering for each: a brief description; the basis of design; the process selection rationale; a brief description of the process chosen; and, in some cases, a risk assessment evaluation. (LTN)

  10. Selection of Leading Industry in Anshun Experimental District Based on Analytic Hierarchy Process

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The Analytic Hierarchy Process was selected in line with the methods used by domestic and foreign scholars for choosing leading industries. Leading industries that can accelerate the overall economic development of Anshun Experimental District are taken as the target layer, and market demand, efficiency standards, and local conditions are taken as the criterion layers, so as to construct the selection model and choose the leading industry for Anshun Experimental District. Results show that the priority order of leading-industry selection in Anshun Experimental District is as follows: tourism > pharmacy > transportation > energy > food processing > characteristic agriculture > package and printing > automobile industry > mining > electric engineering.
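
    The core AHP computation behind such a ranking is extracting the priority vector (the principal eigenvector) of a pairwise comparison matrix. A minimal sketch using power iteration, with a hypothetical 3-criterion comparison matrix (the study's actual judgment matrices are not reproduced here):

```python
import numpy as np

def ahp_priorities(pairwise, iters=100):
    """Priority weights as the principal eigenvector of a positive
    pairwise comparison matrix, via power iteration, normalized to sum 1."""
    w = np.ones(pairwise.shape[0])
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()
    return w

# hypothetical judgments: criterion 1 vs 2 (3x), 1 vs 3 (5x), 2 vs 3 (2x)
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])
w = ahp_priorities(A)
```

In a full AHP, a consistency ratio is also computed from the principal eigenvalue to check that the judgments are not self-contradictory.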

  11. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    Science.gov (United States)

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  13. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo

  14. Charles Robert Darwin and Alfred Russel Wallace: their dispute over the units of selection.

    Science.gov (United States)

    Ruse, Michael

    2013-12-01

    Charles Darwin and Alfred Russel Wallace independently discovered the mechanism of natural selection for evolutionary change. However, they viewed the working of selection differently. For Darwin, selection was always focused on the benefit for the individual. For Wallace, selection was as much something of benefit for the group as for the individual. This difference is traced to their different background political-economic views, with Darwin in favor of Adam Smith's view of society and Wallace following Robert Owen in being a socialist.

  15. From Graphic Processing Unit to General Purpose Graphic Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines GPU (graphics processing unit), general-purpose computation on GPUs (GPGPU), and GPU-based programming models and environments. It divides the development of the GPU into four stages, tracing the architecture's evolution from the non-unified rendering architecture, to the unified rendering architecture, and on to the new-generation Fermi architecture. It then compares the GPGPU architecture with multi-core CPU and distributed-cluster architectures from both hardware and software perspectives. The analysis shows that medium-grained, thread-level data-intensive parallel workloads suit multi-core multi-threading; coarse-grained network-intensive parallel workloads suit cluster computing; and fine-grained compute-intensive parallel workloads suit general-purpose GPU computing. Finally, the paper presents future research hotspots and directions for GPGPU, namely automatic parallelization for GPGPU, CUDA support for multiple languages and CUDA performance optimization, and introduces some classic applications of GPGPU.

  16. Bandwidth Enhancement between Graphics Processing Units on the Peripheral Component Interconnect Bus

    Directory of Open Access Journals (Sweden)

    ANTON Alin

    2015-10-01

    Full Text Available General-purpose computing on graphics processing units is a new trend in high-performance computing. Present-day applications require office and personal supercomputers, which are mostly based on many-core hardware accelerators communicating with the host system through the Peripheral Component Interconnect (PCI) bus. Parallel data compression is a difficult topic, but compression has been used successfully to improve the communication between parallel message passing interface (MPI) processes on high-performance computing clusters. In this paper we show that special-purpose compression algorithms designed for scientific floating-point data can be used to enhance the bandwidth between two graphics processing unit (GPU) devices on the PCI Express (PCIe) 3.0 x16 bus in a home-built personal supercomputer (PSC).
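As a rough illustration of why such compressors help, the sketch below (plain Python with illustrative function names, not the algorithm from the paper) XOR-differences the bit patterns of consecutive IEEE-754 doubles; on smooth scientific data the residuals carry long runs of leading zero bytes that a simple entropy coder could shrink before the PCIe transfer:

```python
import struct

def xor_residuals(samples):
    """XOR each double's 64-bit pattern with its predecessor's.

    Smooth scientific data yields residuals with many leading zero
    bytes, which a byte-level entropy coder can shrink. This is the
    core idea behind FPC-style floating-point compressors; function
    names here are illustrative only.
    """
    bits = [struct.unpack("<Q", struct.pack("<d", s))[0] for s in samples]
    prev, out = 0, []
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

def leading_zero_bytes(residual):
    """Count leading zero bytes of a 64-bit residual."""
    for i in range(8):
        if (residual >> (8 * (7 - i))) & 0xFF:
            return i
    return 8

# A slowly varying signal compresses well: residuals are mostly zeros.
signal = [1.0 + 1e-12 * i for i in range(1000)]
res = xor_residuals(signal)
avg_zeros = sum(leading_zero_bytes(r) for r in res) / len(res)
```

On this synthetic signal most of every 8-byte residual is zero bytes, which is the headroom a compressor exploits to raise effective bus bandwidth.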

  17. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    Science.gov (United States)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using GPUs and the Compute Unified Device Architecture (CUDA) programming model, an overall speedup of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
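The SIRT update the paper parallelizes can be sketched serially as follows (plain Python on a toy system; the matrix and data are illustrative, and the GPU version would compute each grid-point update in its own thread):

```python
def sirt(A, b, iters=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique (SIRT).

    Solves A x ≈ b by updating every unknown from all rays at once,
    which is what makes the method easy to parallelize on a GPU:
    each grid point's update is independent of the others.
    """
    m, n = len(A), len(A[0])
    row_sums = [sum(abs(v) for v in row) or 1.0 for row in A]
    col_sums = [sum(abs(A[i][j]) for i in range(m)) or 1.0 for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        # Residuals weighted by inverse row sums.
        r = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sums[i]
             for i in range(m)]
        # Back-project, weighted by inverse column sums.
        for j in range(n):
            x[j] += relax * sum(A[i][j] * r[i] for i in range(m)) / col_sums[j]
    return x

# Toy 3-ray, 2-pixel system with exact solution x = (1, 2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = sirt(A, b, iters=500)
```

The `relax` argument plays the role of the relaxation factor the paper makes adaptive.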

  18. Rapid learning-based video stereolization using graphic processing unit acceleration

    Science.gov (United States)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) contents. Although video stereolization can enrich stereoscopic 3-D contents, it is hard to achieve automatic two-dimensional-to-3-D conversion at low computational cost. We propose rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generate an initial depth map based on learning from examples. Then, we refine the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we perform depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provide a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on the GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves a performance comparable to state-of-the-art methods.

  19. Molecular dynamics for long-range interacting systems on Graphic Processing Units

    CERN Document Server

    Filho, Tarcísio M Rocha

    2012-01-01

    We present implementations of a fourth-order symplectic integrator on graphics processing units for three $N$-body models with long-range interactions of general interest: the Hamiltonian Mean Field, Ring and two-dimensional self-gravitating models. We discuss the algorithms, speedups and errors using one and two GPUs. Speedups can be as high as 140 compared to a serial code, and the overall relative error in the total energy is of the same order of magnitude as for the CPU code. The number of particles used in the tests ranges from 10,000 to 50,000,000 depending on the model.
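For reference, one step of a standard fourth-order symplectic integrator of the Yoshida composition (a common choice, though not necessarily the exact scheme of the paper) looks like this; the bounded energy error that motivates such schemes is easy to check on a harmonic oscillator:

```python
def yoshida4_step(q, p, dt, force):
    """One step of the fourth-order Yoshida symplectic integrator.

    Composes three leapfrog sub-steps with coefficients chosen to
    cancel the third-order error term; the energy error stays
    bounded, which is why such schemes suit long N-body runs.
    """
    cbrt2 = 2.0 ** (1.0 / 3.0)
    w1 = 1.0 / (2.0 - cbrt2)
    w0 = -cbrt2 / (2.0 - cbrt2)
    c = [w1 / 2.0, (w0 + w1) / 2.0, (w0 + w1) / 2.0, w1 / 2.0]
    d = [w1, w0, w1]
    for i in range(3):
        q += c[i] * dt * p
        p += d[i] * dt * force(q)
    q += c[3] * dt * p
    return q, p

# Harmonic oscillator: H = p^2/2 + q^2/2, so force(q) = -q.
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = yoshida4_step(q, p, 0.01, lambda x: -x)
energy = 0.5 * (p * p + q * q)
```

After 10,000 steps the energy remains within a tiny bounded band of its initial value of 0.5, rather than drifting as it would with a non-symplectic scheme of the same order of cost.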

  20. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can diminish an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.
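The AHP half of such a hybrid can be sketched compactly: given a pairwise-comparison matrix on Saaty's 1-9 scale (the matrix below is made up for illustration, not taken from the paper), criterion weights come from the row geometric means, with a consistency ratio as a sanity check:

```python
import math

def ahp_weights(M):
    """Criterion weights from an AHP pairwise-comparison matrix.

    Uses the row geometric-mean approximation to the principal
    eigenvector, then estimates lambda_max for a consistency check.
    Sketch only; a consistency ratio below 0.1 is conventionally
    considered acceptable.
    """
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    w = [g / s for g in gm]
    # lambda_max estimate: average of (M w)_i / w_i over rows.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return w, ci / ri                    # weights, consistency ratio

# Three criteria: cost moderately preferred to quality, strongly
# preferred to delivery (illustrative judgments).
M = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
w, cr = ahp_weights(M)
```

The weights sum to one and rank cost above quality above delivery, and the consistency ratio confirms the judgments are acceptably coherent.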

  1. A PATH INTEGRAL FORMULATION OF THE WRIGHT-FISHER PROCESS WITH GENIC SELECTION

    Science.gov (United States)

    SCHRAIBER, JOSHUA G.

    2014-01-01

    The Wright-Fisher process with selection is an important tool in population genetics theory. Traditional analysis of this process relies on the diffusion approximation. The diffusion approximation is usually studied in a partial differential equations framework. In this paper, I introduce a path integral formalism to study the Wright-Fisher process with selection and use that formalism to obtain a simple perturbation series to approximate the transition density. The perturbation series can be understood in terms of Feynman diagrams, which have a simple probabilistic interpretation in terms of selective events. The perturbation series proves to be an accurate approximation of the transition density for weak selection and is shown to be arbitrarily accurate for any selection coefficient. PMID:24269333
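For intuition, the underlying discrete model is easy to simulate directly (a forward binomial-sampling sketch in Python, not the paper's path-integral machinery; the parameter values are illustrative):

```python
import random

def wright_fisher(n, p0, s, generations, seed=1):
    """Simulate allele frequency under the Wright-Fisher model with
    genic selection: each generation the favored allele is sampled
    with probability p(1+s) / (p(1+s) + (1-p)), then the population
    of n individuals is resampled binomially. Sketch only; this is
    forward simulation, not the paper's perturbative approximation.
    """
    rng = random.Random(seed)
    p = p0
    traj = [p]
    for _ in range(generations):
        pw = p * (1 + s) / (p * (1 + s) + (1 - p))
        count = sum(rng.random() < pw for _ in range(n))  # Binomial(n, pw)
        p = count / n
        traj.append(p)
        if p in (0.0, 1.0):  # absorbed: lost or fixed
            break
    return traj

# With strong selection (2Ns = 100) the favored allele almost
# always sweeps to fixation.
traj = wright_fisher(n=500, p0=0.2, s=0.1, generations=2000)
```

Weakening `s` toward zero makes drift dominate, which is the regime where the paper's weak-selection perturbation series is most accurate.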

  2. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU), comprised of commercial-equivalent radiation-hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software, shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  3. Application of PROMETHEE-GAIA method for non-traditional machining processes selection

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2012-10-01

    Full Text Available With ever increasing demand for manufactured products of hard alloys and metals with high surface finish and complex shape geometry, more interest is now being paid to non-traditional machining (NTM) processes, in which energy in its direct form is used to remove material from the workpiece surface. Compared to conventional machining processes, NTM processes possess almost unlimited capabilities, and there is a strong belief that the use of NTM processes will go on increasing in a diverse range of applications. The presence of a large number of NTM processes with complex characteristics and capabilities, and the lack of experts in the NTM process selection domain, call for the development of a structured approach to NTM process selection for a given machining application. Past researchers have attempted to solve NTM process selection problems using various complex mathematical approaches which often require profound knowledge of mathematics/artificial intelligence on the part of process engineers. In this paper, four NTM process selection problems are solved using an integrated PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) method, which acts as a visual decision aid for process engineers. The observed results are quite satisfactory and exactly match the expected solutions.
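The ranking half of PROMETHEE (the net outranking flows that GAIA then visualizes on a principal-component plane) reduces to a few lines; the sketch below uses the simple "usual" preference function and made-up scores for three hypothetical NTM processes:

```python
def promethee2(matrix, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (preference 1 if strictly better on a criterion, else 0).
    Only the ranking step is sketched here; GAIA adds a visual,
    principal-component view on top of these flows.
    """
    n = len(matrix)   # alternatives
    k = len(weights)  # criteria

    def pref(a, b):
        total = 0.0
        for c in range(k):
            d = matrix[a][c] - matrix[b][c]
            if not maximize[c]:
                d = -d  # for cost-type criteria, smaller is better
            total += weights[c] * (1.0 if d > 0 else 0.0)
        return total

    phi = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)
        phi.append(plus - minus)  # net flow: higher is better
    return phi

# Three processes scored on cost (minimize) and material-removal
# rate (maximize), equal weights; values are illustrative.
scores = [[5.0, 8.0], [3.0, 6.0], [4.0, 4.0]]
phi = promethee2(scores, [0.5, 0.5], [False, True])
```

Here the second alternative (cheap and reasonably productive) gets the highest net flow, and the net flows sum to zero as PROMETHEE II requires.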

  4. Random Gaussian process effect upon selective system of spectra heterodyne analyzer

    Directory of Open Access Journals (Sweden)

    N. F. Vollerner

    1967-12-01

    Full Text Available A formula is obtained that describes the change in mean power at the selective system output as the tuning speed of the spectrum heterodyne analyzer is varied when analysing stationary random processes.

  5. Methodology for Selection of Non-Restored Reserved Systems Pertaining to Control of Technological Processes

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2008-01-01

    Full Text Available The paper analyses the reliability of non-restored passive reserved systems pertaining to the control of technological processes. Criteria have been justified and a methodology for the optimum selection of reserved systems has been developed.

  6. Case Studies of Internationalization in Adult and Higher Education: Inside the Processes of Four Universities in the United States and the United Kingdom

    Science.gov (United States)

    Coryell, Joellen Elizabeth; Durodoye, Beth A.; Wright, Robin Redmon; Pate, P. Elizabeth; Nguyen, Shelbee

    2012-01-01

    This report outlines a method for learning about the internationalization processes at institutions of adult and higher education and then provides the analysis of data gathered from the researchers' own institution and from site visits to three additional universities in the United States and the United Kingdom. It was found that campus…

  7. A recruitment and selection process model: the case of the Department of Justice and Constitutional Development

    OpenAIRE

    Thebe, T P; 12330841 - Van der Waldt, Gerrit

    2014-01-01

    The purpose of this article is to report on findings of an empirical investigation conducted at the Department of Justice and Constitutional Development. The aim of the investigation was to ascertain the status of current practices and challenges regarding the processes and procedures utilised for recruitment and selection. Based on these findings the article further outlines the design of a comprehensive process model for human resource recruitment and selection for the Department. The model...

  8. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    … gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths … to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared.

  9. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When these CFD simulations run, extremely large files are loaded and many values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) have traditionally been used to render graphics for computers; however, in recent years, GPUs have been applied to more generic workloads because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.

  10. Manufacturing process and material selection in concurrent collaborative design of MEMS devices

    Science.gov (United States)

    Zha, Xuan F.; Du, H.

    2003-09-01

    In this paper we present a knowledge-intensive approach and system for selecting suitable manufacturing processes and materials for microelectromechanical systems (MEMS) devices in a concurrent collaborative design environment. Fundamental issues in MEMS manufacturing process and material selection, such as the concurrent design framework, the manufacturing process and material hierarchies, and the selection strategy, are first addressed. Then, a fuzzy decision support scheme for a multi-criteria decision-making problem is proposed for estimating, ranking and selecting possible manufacturing processes, materials and their combinations. A Web-based prototype advisory system for MEMS manufacturing process and material selection, WebMEMS-MASS, is developed based on the client-knowledge server architecture and framework to help the designer find good processes and materials for MEMS devices. The system, one of the important parts of an advanced simulation and modeling tool for MEMS design, is a concept-level process and material selection tool, which can be used as a standalone application or a Java applet via the Web. The running sessions of the system are interlinked with webpages of tutorials and reference pages to explain the facets, fabrication processes and material choices; calculations and reasoning in selection are performed using process capability and material property data from a remote Web-based database and an interactive knowledge base that can be maintained and updated via the Internet. The use of the developed system, including an operation scenario, user support, and integration with a MEMS collaborative design system, is presented. Finally, an illustrative example is provided.

  11. Effects of hyperbaric nitrogen-induced narcosis on response-selection processes.

    Science.gov (United States)

    Meckler, Cédric; Blatteau, Jean-Eric; Hasbroucq, Thierry; Schmid, Bruno; Risso, Jean-Jacques; Vidal, Franck

    2014-01-01

    Certain underwater circumstances carry a risk of inert gas narcosis. Impairment of sensorimotor information processing due to narcosis, induced by normobaric nitrous oxide or high partial nitrogen pressure, has been broadly evidenced by a lengthening of the reaction time (RT). However, the locus of this effect remains a matter of debate. We examined whether inert gas narcosis affects the response-selection stage of sensorimotor information processing. We compared a normobaric air condition with a hyperbaric condition in which 10 subjects were subjected to 6 absolute atmospheres of 8.33% O2 Nitrox. In both conditions, subjects performed a between-hand choice-RT task in which we explicitly manipulated the stimulus-response association rule. The effect of this manipulation (which is supposed to affect response-selection processes) was modified by inert gas narcosis. It is concluded, therefore, that response-selection processes are among the loci involved in the effect of inert gas narcosis on information processing.

  12. EQUAL EMPLOYMENT OPPORTUNITIES IN THE RECRUITMENT AND SELECTION PROCESS OF HUMAN RESOURCES

    Directory of Open Access Journals (Sweden)

    Aleksandra Stoilkovska

    2015-12-01

    Full Text Available The aim of this article is to examine the concept of equal employment opportunities in the HR recruitment and selection process. Because both HR managers and applicants are involved in these processes, the research is conducted separately among them. It will thus be determined whether both sides share the same opinion with respect to the existence of this concept in the mentioned processes. Providing equal employment opportunities is crucial for any company and is key to selecting the right employees. Therefore, the research covers the existence of prejudices in the recruitment and selection process, such as discrimination based on national and social origin, gender and sexual orientation, age, political affiliation, etc. As an essential part of this concept, the legislation in the Republic of Macedonia and its impact on the process of generating equal opportunities will be considered.

  13. Using the Analytic Hierarchy Process to Prioritize and Select Phase Change Materials for Comfort Application in Buildings

    Directory of Open Access Journals (Sweden)

    Socaciu Lavinia Gabriela

    2014-03-01

    Full Text Available Phase change material (PCM) selection and prioritization for comfort applications in buildings contribute significantly to the improvement of latent heat storage systems. PCMs have a relatively large thermal energy storage capacity in a temperature range close to their switch point. PCMs absorb energy during the heating process as phase change takes place and release energy to the environment over the phase change range during a reverse cooling process. Thermal energy storage systems using PCMs as the storage medium offer advantages such as high heat storage capacity, storage and release of thermal energy at a nearly constant temperature, relatively low weight, small unit size and isothermal behaviour during charging and discharging, compared to sensible thermal energy storage. PCMs are valuable only in the range of temperatures close to their phase change point, since their main thermal energy storage capacity depends on their mass and on their latent heat of fusion. Selection of the proper PCM is a challenging task because there are many different materials with different characteristics. In this research paper the principles and techniques of the Analytic Hierarchy Process (AHP) are presented, discussed and applied in order to prioritize and select the proper PCMs for comfort applications in buildings. The AHP method is used for solving complex decision problems and allows the decision maker to take the most suitable decision for the problem studied. The results obtained reveal that the AHP method can be successfully applied when choosing a PCM for comfort applications in buildings.

  14. NEURO-LINGUISTIC PROGRAMMING AND ITS APPLICABILITY IN THE PROCESS OF RECRUITMENT AND SELECTION

    Directory of Open Access Journals (Sweden)

    Gilma Álamo Sánchez

    2008-04-01

    Full Text Available A study is presented of Neuro-Linguistic Programming (NLP) tools for selection, employment and training that allow the appropriate personnel to be chosen on the basis of their language and behavior. For its development, theories of NLP and of the recruitment and selection process, grounded in the interview, were reviewed. The conclusions point to the importance and convenience, for human resources management, of applying Neuro-Linguistic Programming as a personnel selection tool. Finally, it is recommended that the proposal be applied within a framework of adaptability, according to the needs and demands of each organization.

  15. Profiles of Reservoir Properties of Oil-Bearing Plays for Selected Petroleum Provinces in the United States

    Science.gov (United States)

    Freeman, P.A.; Attanasi, E.D.

    2015-11-05

    Profiles of reservoir properties of oil-bearing plays for selected petroleum provinces in the United States were developed to characterize the database to be used for a potential assessment by the U.S. Geological Survey (USGS) of oil that would be technically recoverable by the application of enhanced oil recovery methods using injection of carbon dioxide (CO2-EOR). The USGS assessment methodology may require reservoir-level data for the purposes of screening conventional oil reservoirs and projecting CO2-EOR performance in terms of the incremental recoverable oil. The information used in this report is based on reservoir properties from the “Significant Oil and Gas Fields of the United States Database” prepared by Nehring Associates, Inc. (2012). As described by Nehring Associates, Inc., the database “covers all producing provinces (basins) in the United States except the Appalachian Basin and the Cincinnati Arch.”

  16. The safety and regulatory process for low calorie sweeteners in the United States.

    Science.gov (United States)

    Roberts, Ashley

    2016-10-01

    Low calorie sweeteners are some of the most thoroughly tested and evaluated of all food additives. Products including aspartame and saccharin have undergone several rounds of risk assessment by the United States Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) in relation to a number of potential safety concerns, including carcinogenicity and, more recently, effects on body weight gain, glycemic control and the gut microbiome. The majority of the modern-day sweeteners (acesulfame K, advantame, aspartame, neotame and sucralose) have been approved in the United States through the food additive process, whereas the most recent sweetener approvals, for steviol glycosides and lo han guo, have occurred through the Generally Recognized as Safe (GRAS) system, based on scientific procedures. While the regulatory process and review time for these two types of sweetener evaluations by the FDA differ, the same level of scientific evidence is required to support safety, so as to ensure a reasonable certainty of no harm.

  17. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation was calculated more quickly using GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360 000 crustal deformation computations with 82 196 106 degrees of freedom.
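Solvers of this kind are typically built around a Krylov iteration whose cost is dominated by the matrix-vector product, the part offloaded to GPUs. A dense plain-Python conjugate-gradient sketch on a toy system (not the paper's solver) shows the structure:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate-gradient solve of A x = b for symmetric
    positive-definite A. The dominant per-iteration cost is the
    matrix-vector product A p, which is exactly the kernel GPU
    implementations accelerate.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)        # residual b - A x, with x = 0 initially
    p = list(r)        # search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Small SPD system; the exact solution is x = (1, 1, 1).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [5.0, 5.0, 3.0]
x = conjugate_gradient(A, b)
```

In a real crustal-deformation code A would be a large sparse finite-element matrix and the product would run on the GPU; the iteration logic is unchanged.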

  18. Using Graphics Processing Units to solve the classical N-body problem in physics and astrophysics

    CERN Document Server

    Spera, Mario

    2014-01-01

    Graphics Processing Units (GPUs) can speed up the numerical solution of various problems in astrophysics including the dynamical evolution of stellar systems; the performance gain can be more than a factor 100 compared to using a Central Processing Unit only. In this work I describe some strategies to speed up the classical N-body problem using GPUs. I show some features of the N-body code HiGPUs as template code. In this context, I also give some hints on the parallel implementation of a regularization method and I introduce the code HiGPUs-R. Although the main application of this work concerns astrophysics, some of the presented techniques are of general validity and can be applied to other branches of physics such as electrodynamics and QCD.
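The kernel such codes accelerate is the direct O(N^2) force summation, typically mapped one body per GPU thread. A serial Python sketch with Plummer softening (illustrative values, G = 1 units; not the HiGPUs implementation):

```python
def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations.

    This is the O(N^2) kernel that GPU N-body codes parallelize,
    one body per thread. The Plummer softening eps keeps the
    force finite as the separation r approaches zero.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two equal unit masses on the x-axis attract each other symmetrically.
acc = accelerations([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]], [1.0, 1.0])
```

The inner loop over j is what a GPU thread executes for its body i; Newton's third law shows up in the test as equal and opposite accelerations.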

  19. Multi-unit Integration in Microfluidic Processes: Current Status and Future Horizons

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2011-07-01

    Full Text Available Microfluidic processes, mainly for biological and chemical applications, have expanded rapidly in recent years. While the initial focus was on single units, principally microreactors, technological and economic considerations have caused a shift to integrated microchips in which a number of microdevices function coherently. These integrated devices have many advantages over conventional macro-scale processes. However, the small scale of operation, complexities in the underlying physics and chemistry, and differences in the time constants of the participating units, in the interactions among them and in the outputs of interest make it difficult to design and optimize integrated microprocesses. These aspects are discussed here, current research and applications are reviewed, and possible future directions are considered.

  20. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is, and how it has gained so much importance, is what this research work sets out to answer. The study has been constructed with full attention paid to answering the following questions: What is a GPU? How is it different from a CPU? How good or bad is it computationally compared to a CPU? Can a GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make a GPU work? What are the improvement and focus areas for the GPU to stand in the market? All of the above questions are discussed and answered in this study with relevant explanations.

  1. How do relatives of persons with dementia experience their role in the patient participation process in special care units?

    Science.gov (United States)

    Helgesen, Ann K; Larsson, Maria; Athlin, Elsy

    2013-06-01

    To explore the role of relatives in the patient participation process for persons with dementia living in special care units in Norwegian nursing homes, with a focus on everyday life. Studies exploring the experience of relatives of persons with dementia as to their role in the patient participation process are limited. The study had an explorative grounded theory design. Data collection was carried out by interviews with twelve close relatives. Simultaneously, data analysis was performed with open, axial and selective coding. The relatives' role in the patient participation process was experienced as transitions between different roles to secure the resident's well-being, understood as the resident's comfort and dignity. This was the ultimate goal of their participation. The categories 'being a visitor', 'being a spokesperson', 'being a guardian' and 'being a link to the outside world' described the different roles. Different situations and conditions triggered different roles, and the relatives' trust in the personnel was a crucial factor. The study has highlighted the great importance of the relatives' role in the patient participation process in securing the well-being of residents living in special care units. Our findings stress the utmost need for a high degree of competence, interest and commitment among the personnel, together with a well-functioning, collaborative and cooperative relationship between the personnel and the relatives of persons with dementia. The study raises several important questions that emphasise that more research is needed. Relatives need to be seen and treated as a resource in the patient participation process in dementia care. More attention should be paid to initiating better cooperation between the personnel and the relatives, as this may have a positive impact on both the residents' and the relatives' well-being. © 2012 Blackwell Publishing Ltd.

  2. The Mixed Waste Management Facility: Technology selection and implementation plan, Part 2, Support processes

    Energy Technology Data Exchange (ETDEWEB)

    Streit, R.D.; Couture, S.A.

    1995-03-01

    The purpose of this document is to establish the foundation for the selection and implementation of technologies to be demonstrated in the Mixed Waste Management Facility, and to select the technologies for initial pilot-scale demonstration. Criteria are defined for judging demonstration technologies, and the framework for future technology selection is established. On the basis of these criteria, an initial suite of technologies was chosen, and the demonstration implementation scheme was developed. Part 1, previously released, addresses the selection of the primary processes. Part 2 addresses process support systems that are considered "demonstration technologies." Other support technologies, e.g., facility off-gas, receiving and shipping, and water treatment, while part of the integrated demonstration, use best available commercial equipment and are not selected against the demonstration technology criteria.

  3. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Selection of the most suitable nonconventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. the operational competitiveness ratings analysis (OCRA) method, for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated while solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates the usefulness of this method for solving complex NCMP selection problems.
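
The OCRA computation itself is compact. The sketch below follows one common formulation of the method (aggregate ratings for cost and benefit criteria, each shifted so the worst alternative scores zero); the machining-process ratings and weights in the usage example are hypothetical, not taken from the case studies above.

```python
def ocra(matrix, weights, is_cost):
    """Rank alternatives with the OCRA method (one common formulation).

    matrix  : one row of criterion ratings per alternative
    weights : criterion weights summing to 1
    is_cost : True for cost (input) criteria, False for benefit (output)
    """
    cols = list(zip(*matrix))

    # Aggregate performance ratings for cost criteria (lower is better).
    I = [sum(w * (max(c) - row[j]) / min(c)
             for j, (w, c, cost) in enumerate(zip(weights, cols, is_cost)) if cost)
         for row in matrix]
    # Aggregate performance ratings for benefit criteria (higher is better).
    O = [sum(w * (row[j] - min(c)) / min(c)
             for j, (w, c, cost) in enumerate(zip(weights, cols, is_cost)) if not cost)
         for row in matrix]

    # Shift each rating set so the worst alternative scores zero, then combine.
    I = [v - min(I) for v in I]
    O = [v - min(O) for v in O]
    total = [i + o for i, o in zip(I, O)]
    return [t - min(total) for t in total]  # overall preference ratings
```

For example, with three candidate processes rated on cost and surface roughness (cost criteria) and material removal rate (a benefit criterion), the alternative with the largest returned rating is the preferred process.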

  4. Accelerated molecular dynamics force evaluation on graphics processing units for thermal conductivity calculations

    OpenAIRE

    Fan, Zheyong; Siro, Topi; Harju, Ari

    2012-01-01

    In this paper, we develop a highly efficient molecular dynamics code fully implemented on graphics processing units for thermal conductivity calculations using the Green-Kubo formula. We compare two different schemes for force evaluation, a previously used thread-scheme where a single thread is used for one particle and each thread calculates the total force for the corresponding particle, and a new block-scheme where a whole block is used for one particle and each thread in the block calcula...
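
The Green-Kubo route to thermal conductivity integrates the heat-current autocorrelation function, kappa = V/(k_B T^2) * integral of <J(0)J(t)> dt. A minimal NumPy sketch of that post-processing step is shown below; it is not the authors' GPU code, and the function name and synthetic input are illustrative only.

```python
import numpy as np

def green_kubo_kappa(J, dt, V, T, kB=1.380649e-23):
    """Running thermal conductivity from one heat-current component:
    kappa(t) = V / (kB * T^2) * integral_0^t <J(0) J(t')> dt'."""
    n = len(J)
    # Heat-current autocorrelation function, averaged over time origins.
    acf = np.correlate(J, J, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # Cumulative trapezoidal integral of the autocorrelation function.
    running = np.concatenate(([0.0], np.cumsum((acf[1:] + acf[:-1]) / 2.0) * dt))
    return V / (kB * T**2) * running
```

The converged plateau of the returned running integral estimates kappa; in practice one averages over the three Cartesian current components and many time origins, which is exactly the bulk arithmetic a GPU accelerates.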

  5. State-Level Comparison of Processes and Timelines for Distributed Photovoltaic Interconnection in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K.; Davidson, C.; Margolis, R.; Nobler, E.

    2015-01-01

    This report presents results from an analysis of distributed photovoltaic (PV) interconnection and deployment processes in the United States. Using data from more than 30,000 residential (up to 10 kilowatts) and small commercial (10-50 kilowatts) PV systems, installed from 2012 to 2014, we assess the range in project completion timelines nationally (across 87 utilities in 16 states) and in five states with active solar markets (Arizona, California, New Jersey, New York, and Colorado).

  6. Sodium content of popular commercially processed and restaurant foods in the United States

    OpenAIRE

    Ahuja, Jaspreet K.C.; Shirley Wasswa-Kintu; Haytowitz, David B; Marlon Daniel; Robin Thomas; Bethany Showell; Melissa Nickle; Roseland, Janet M.; Janelle Gunn; Mary Cogswell; Pehrsson, Pamela R

    2015-01-01

    Purpose: The purpose of this study was to provide baseline estimates of sodium levels in 125 popular, sodium-contributing, commercially processed and restaurant foods in the U.S., to assess future changes as manufacturers reformulate foods. Methods: In 2010–2013, we obtained ~5200 sample units from up to 12 locations and analyzed 1654 composites for sodium and related nutrients (potassium, total dietary fiber, total and saturated fat, and total sugar), as part of the U.S. Department of Agr...

  7. Architectural and performance considerations for a 10^7-instruction/sec optoelectronic central processing unit.

    Science.gov (United States)

    Arrathoon, R; Kozaitis, S

    1987-11-01

    Architectural considerations for a multiple-instruction, single-data-based optoelectronic central processing unit operating at 10^7 instructions per second are detailed. Central to the operation of this device is a giant fiber-optic content-addressable memory in a programmable logic array configuration. The design includes four instructions and emphasizes the fan-in and fan-out capabilities of optical systems. Interconnection limitations and scaling issues are examined.

  8. Data Handling and Processing Unit for Alphabus/Alphasat TDP-8

    Science.gov (United States)

    Habinc, Sandi; Martins, Rodolfo; Costa Pinto, Joao; Furano, Gianluca

    2011-08-01

    ESA's and Inmarsat's ARTES 8 Alphabus/Alphasat is a specific programme dedicated to the development and deployment of Alphasat. It encompasses several technology demonstration payloads (TDPs), of which the TDP8 is an Environment effects facility to monitor the GEO radiation environment and its effects on electronic components and sensors. This paper will discuss the rapid development of the processor and board for TDP8's data handling and processing unit.

  9. Evaluating Acoustic Emission Signals as an in situ process monitoring technique for Selective Laser Melting (SLM)

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, Karl A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Candy, Jim V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Guss, Gabe [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mathews, M. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-14

    In situ real-time monitoring of the Selective Laser Melting (SLM) process has significant implications for the AM community. The ability to adjust the SLM process parameters during a build (in real-time) can save time and money and eliminate expensive material waste. Having a feedback loop in the process would allow the system to potentially 'fix' problem regions before the next powder layer is added. In this study we have investigated acoustic emission (AE) phenomena generated during the SLM process, and evaluated the results in terms of a single process parameter, as a basis for an in situ process monitoring technique.

  10. Engineering development of selective agglomeration: Task 5, Bench-scale process testing

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    Under the overall objectives of DOE Contract "Engineering Development of Selective Agglomeration," there were a number of specific objectives in the Task 5 program. The prime objectives of Task 5 are highlighted below: (1) Maximize process performance in pyritic sulfur rejection and BTU recovery, (2) Produce a low ash product, (3) Compare the performance of the heavy agglomerant process based on diesel and the light agglomerant process using heptane, (4) Define optimum processing conditions for engineering design, (5) Provide first-level evaluation of product handleability, and (6) Explore and investigate process options/ideas which may enhance process performance and/or product handleability.

  12. Integration of photovoltaic units into electric utility grids: experiment information requirements and selected issues

    Energy Technology Data Exchange (ETDEWEB)

    1980-09-01

    A number of investigations have led to the recognition of technical, economic, and institutional issues relating to the interface between solar electric technologies and electric utility systems. These issues derive from three attributes of solar electric power concepts: (1) the variability and unpredictability of the solar resource, (2) the dispersed nature of that resource, which suggests the deployment of small dispersed power units, and (3) a high initial capital cost coupled with relatively low operating costs. An important part of the DOE programs to develop new source technologies, in particular photovoltaic systems, is the experimental testing of complete or nearly complete power units. These experiments provide an opportunity to examine operational and integration issues which must be understood before widespread commercial deployment of these technologies can be achieved. Experiments may also be required to explicitly examine integration, operational, and control aspects of single and multiple new source technology power units within a utility system. An identification of utility information requirements, a review of planned experiments, and a preliminary determination of additional experimental needs and opportunities are presented. Other issues discussed include: (1) the impacts of on-site photovoltaic units on load duration curves and optimal generation mixes; (2) the impacts of on-site photovoltaic units on utility production costs, with and without dedicated storage and with and without sellback; and (3) current utility rate structure experiments, rationales, policies, practices, and plans.

  13. Sensory evaluation based fuzzy AHP approach for material selection in customized garment design and development process

    Science.gov (United States)

    Hong, Y.; Curteza, A.; Zeng, X.; Bruniaux, P.; Chen, Y.

    2016-06-01

    Material selection is the most difficult step in the customized garment product design and development process. This study aims to create a hierarchical framework for material selection. The analytic hierarchy process and fuzzy set theories have been applied to capture the diverse requirements from the customer and the inherent interactions/interdependencies among these requirements. Sensory evaluation ensures a quick and effective selection without complex laboratory tests such as KES and FAST, using the professional knowledge of the designers. A real empirical application for physically disabled people is carried out to demonstrate the proposed method. Both the theoretical and practical background of this paper indicate that the fuzzy analytical network process can capture experts' knowledge existing in the form of incomplete, ambiguous and vague information regarding the mutual influence among the attributes and criteria of the material selection.
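
The fuzzy weighting step can be illustrated with triangular fuzzy pairwise comparisons. The sketch below uses Buckley's geometric-mean method, a standard fuzzy-AHP variant rather than the paper's own procedure; the matrix entries in the usage example are hypothetical.

```python
from math import prod

def buckley_fuzzy_ahp(F):
    """Crisp criterion weights from a matrix of triangular fuzzy pairwise
    comparisons F[i][j] = (l, m, u), via Buckley's geometric-mean method."""
    n = len(F)
    # Component-wise fuzzy geometric mean of each row.
    r = [tuple(prod(F[i][j][k] for j in range(n)) ** (1.0 / n) for k in range(3))
         for i in range(n)]
    # Fuzzy weights: divide by the column totals with bounds swapped,
    # since (l, m, u)^-1 = (1/u, 1/m, 1/l) for triangular fuzzy numbers.
    tl, tm, tu = (sum(x[k] for x in r) for k in range(3))
    w = [(l / tu, m / tm, u / tl) for l, m, u in r]
    # Centroid defuzzification, then normalization.
    crisp = [(l + m + u) / 3.0 for l, m, u in w]
    s = sum(crisp)
    return [c / s for c in crisp]
```

With a 2x2 matrix in which criterion 0 is judged "moderately more important" (2, 3, 4) than criterion 1, the returned weight for criterion 0 exceeds that of criterion 1, and the weights sum to one.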

  14. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  15. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    Science.gov (United States)

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically on the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality.

  16. IPULOC - Exploring Dynamic Program Locality with the Instruction Processing Unit for Filling Memory Gap

    Institute of Scientific and Technical Information of China (English)

    黄震春; 李三立

    2002-01-01

    Memory gap has become an essential factor influencing the peak performance of high-speed CPU-based systems. To fill this gap, enlarging cache capacity has been a traditional method based on the static program locality principle. However, the order of instructions stored in the I-Cache before being sent to the Data Processing Unit (DPU) is a kind of useful information that has not been utilized before. So an architecture containing an Instruction Processing Unit (IPU) in parallel with the ordinary DPU is proposed. The IPU can prefetch, analyze and preprocess a large amount of instructions otherwise lying in the I-Cache untouched. It is more efficient than the conventional prefetch buffer that can only store several instructions for previewing. With the IPU, load instructions can be preprocessed while the DPU is executing on data simultaneously. The architecture is termed "Instruction Processing Unit with LOokahead Cache" (IPULOC for short), in which the idea of dynamic program locality is presented. This paper describes the principle of IPULOC and illustrates the quantitative parameters for evaluation. Tools for simulating the IPULOC have been developed. The simulation result shows that it can improve program locality during program execution, and hence can improve the cache hit ratio correspondingly without further enlarging the on-chip cache that occupies a large portion of chip area.

  17. FEATURES OF THE SOCIO-POLITICAL PROCESS IN THE UNITED STATES

    Directory of Open Access Journals (Sweden)

    Tatyana Evgenevna Beydina

    2017-06-01

    The subject of this article is the study of the political and social developments of the USA at the present stage. There are four stages in the American tradition of studying political processes. The first stage is connected with the substantiation of the executive, legislative and judicial branches of the political system (the works of F. Pollack and R. Sili). The second one includes behavioral studies of politics; besides studying political processes, Charles Merriam studied their similarities and differences. The third stage is characterized by political system studies (the works of T. Parsons, D. Easton, R. Aron, G. Almond and K. Deutsch). The fourth stage is characterized by the problems of superpower and the democratization of systems (S. Huntington, Zb. Bzhezinsky). American social processes were characterized by R. Park, P. Sorokin and E. Giddens. The work concentrates on explaining the social and political processes of the US separately while reflecting the unity of American social-political reality. The academic novelty consists in the substantiation of the concept of the US social-political process and the characterization of its features. The US social-political process operates through two channels: soft power and aggression. Soft power appears in the dominance of the US economy. The main results of the research are the features of the socio-political process in the United States. Purpose: the main goal of the research is to systematize the definition of the social-political process of the USA and to trace the line of its study within the American political tradition. Methodology: this article uses methods such as systemic analysis, comparison, historical analysis and structural-functional analysis. Results: the research analyzed the dynamics of the social and political processes of the United States. Practical implications: it is expedient to apply the received results in the theory and practice of international relations.

  18. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    Energy Technology Data Exchange (ETDEWEB)

    Meredith, J; Conger, J; Liu, Y; Johnson, J

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time-consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  19. All-optical quantum computing with a hybrid solid-state processing unit

    CERN Document Server

    Pei, Pei; Li, Chong

    2011-01-01

    We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have the prominent advantage of insensitivity to dissipation processes due to the virtual excitation of subsystems. Moreover, QND measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation, in the broader sense that different solid-state systems can merge and be integrated into one quantum processor afterwards.

  20. Graphics processing unit-based quantitative second-harmonic generation imaging.

    Science.gov (United States)

    Kabir, Mohammad Mahfuzul; Jonayat, A S M; Patel, Sanjay; Toussaint, Kimani C

    2014-09-01

    We adapt a graphics processing unit (GPU) to dynamic quantitative second-harmonic generation imaging. We demonstrate the temporal advantage of the GPU-based approach by computing the number of frames analyzed per second from SHG image videos showing varying fiber orientations. In comparison to our previously reported CPU-based approach, our GPU-based image analysis results in ∼10× improvement in computational time. This work can be adapted to other quantitative, nonlinear imaging techniques and provides a significant step toward obtaining quantitative information from fast in vivo biological processes.

  1. Guidelines, processes and tools for coastal ecosystem restoration, with examples from the United States

    Energy Technology Data Exchange (ETDEWEB)

    Thom, Ronald M.; Diefenderfer, Heida L.; Adkins, Jeffery E.; Judd, Chaeli; Anderson, Michael G.; Buenau, Kate E.; Borde, Amy B.; Johnson, Gary E.

    2011-02-01

    This paper presents a systematic approach to coastal restoration projects in five phases: planning, implementation, performance assessment, adaptive management, and dissemination of results. Twenty features of the iterative planning process are synthesized. The planning process starts with a vision, a description of the ecosystem and landscape, and goals. A conceptual model and planning objectives are developed, a site is selected using prioritization techniques, and numerical models contribute to preliminary designs as needed. Performance criteria and reference sites are selected and the monitoring program is designed. The monitoring program is emphasized as a tool to assess project performance and identify problems affecting progression toward project goals, in an adaptive management framework. Key approaches to aspects of the monitoring program are reviewed and detailed with project examples. Within the planning process, cost analysis involves budgeting, scheduling, and financing. Finally, documentation is peer reviewed prior to making construction plans and final costing.

  2. The Lore of Admissions Policies: Contrasting Formal and Informal Understandings of the Residency Selection Process

    Science.gov (United States)

    Ginsburg, Shiphra; Schreiber, Martin; Regehr, Glenn

    2004-01-01

    Purpose: The selection process for residency positions is sometimes seen as being "opaque" and unfair by students, and can be a significant source of student stress. Yet efforts to clarify the process may not have helped reduce student stress for a number of reasons. This paper examines the nature of the knowledge that students possess and…

  3. HOSPITAL SITE SELECTION USING TWO-STAGE FUZZY MULTI-CRITERIA DECISION MAKING PROCESS

    Directory of Open Access Journals (Sweden)

    Ali Soltani

    2011-01-01

    Site selection for the siting of urban activities/facilities is one of the crucial policy-related decisions taken by urban planners and policy makers. The process of site selection is inherently complicated. A carelessly chosen site imposes exorbitant costs on the city budget and inevitably damages the environment. Nowadays, multi-attribute decision making approaches are suggested to improve the precision of decision making and reduce side effects. Two well-known techniques, the analytical hierarchy process and the analytical network process, are among the multi-criteria decision making systems which can easily accommodate both quantitative and qualitative criteria. These have also been developed into fuzzy analytical hierarchy process and fuzzy analytical network process systems, which are capable of accommodating the inherent uncertainty and vagueness in multi-criteria decision-making. This paper reports the process and results of a hospital site selection within Region 5 of the Shiraz metropolitan area, Iran, using a fuzzy analytical network process system integrated with a Geographic Information System (GIS). The weights of the alternatives were calculated using the fuzzy analytical network process. Then a sensitivity analysis was conducted to measure the elasticity of a decision with regard to different criteria. This study contributes to planning practice by suggesting a more comprehensive decision making tool for site selection.
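
Once criterion weights are derived (here from the fuzzy analytical network process), the GIS step commonly reduces to a weighted overlay of normalized criterion rasters. A minimal NumPy sketch under that assumption follows; the layer values, weights, and mask are hypothetical, not the study's data.

```python
import numpy as np

def weighted_overlay(layers, weights, mask=None):
    """Combine normalized criterion rasters (0-1, higher = more suitable)
    into one suitability surface using MCDM-derived weights."""
    score = sum(w * layer for w, layer in zip(weights, layers))
    if mask is not None:
        # Exclude infeasible cells (e.g. water bodies, existing buildings).
        score = np.where(mask, score, np.nan)
    return score
```

The candidate site is then the cell (or contiguous cluster of cells) with the highest score; a sensitivity analysis reruns the overlay with perturbed weights.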

  5. Acute effects of caffeine on selective attention and visual search processes

    NARCIS (Netherlands)

    Lorist, M.M.; Snel, J.; Kok, A; Mulder, G.

    1996-01-01

    The influence of a single dose of caffeine was evaluated in focused and divided attention conditions of a visual selective search task in which subjects had to perform controlled search processes to locate a target item. Search processes were manipulated by varying display load. A dose of 3 mg/kg bo

  7. Real-time resampling in Fourier domain optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Van der Jeught, Sam; Bradu, Adrian; Podoleanu, Adrian Gh

    2010-01-01

    Fourier domain optical coherence tomography (FD-OCT) requires either a linear-in-wavenumber spectrometer or a computationally heavy software algorithm to recalibrate the acquired optical signal from wavelength to wavenumber. The first method is sensitive to the position of the prism in the spectrometer, while the second method drastically slows the system down when it is implemented on a serially oriented central processing unit. We implement the full resampling process on a commercial graphics processing unit (GPU), distributing the necessary calculations to many stream processors that operate in parallel. A comparison between several recalibration methods is made in terms of performance and image quality. The GPU is also used to accelerate the fast Fourier transform (FFT) and to remove the background noise, thereby achieving full GPU-based signal processing without the need for extra resampling hardware. A display rate of 25 frames/s is achieved for processed images (1,024 x 1,024 pixels) using a line-scan charge-coupled device (CCD) camera operating at 25.6 kHz.
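
The recalibration being parallelized is, at its core, interpolation of each spectrum from a linear-in-wavelength grid onto a linear-in-wavenumber grid. A CPU-side NumPy sketch of that resampling is shown below for illustration only; the paper's GPU kernels distribute the same computation across many stream processors.

```python
import numpy as np

def resample_to_linear_k(spectrum, wavelengths):
    """Resample an FD-OCT spectrum acquired on a linear-in-wavelength grid
    onto a linear-in-wavenumber grid (k = 2*pi/lambda) before the FFT."""
    k = 2.0 * np.pi / wavelengths          # nonuniform, descending k values
    k_lin = np.linspace(k.min(), k.max(), k.size)
    # np.interp requires ascending sample points, so flip the k axis.
    return np.interp(k_lin, k[::-1], spectrum[::-1])
```

The depth profile is then the magnitude of the FFT of the resampled spectrum; linear interpolation is the simplest of the recalibration methods the paper compares.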

  8. A Comparative Study on Retirement Process in Korea, Germany, and the United States: Identifying Determinants of Retirement Process.

    Science.gov (United States)

    Cho, Joonmo; Lee, Ayoung; Woo, Kwangho

    2016-10-01

    This study classifies the retirement process and empirically identifies the individual and institutional characteristics determining the retirement process of the aged in South Korea, Germany, and the United States. Using data from the Cross-National Equivalent File, we use a multinomial logistic regression with individual factors, public pension, and an interaction term between an occupation and an education level. We found that in Germany, the elderly with a higher education level were more likely to continue work after retirement with a relatively well-developed social support system, while in Korea, the elderly, with a lower education level in almost all occupation sectors, tended to work off and on after retirement. In the United States, the public pension and the interaction terms have no statistically significant impact on work after retirement. In both Germany and Korea, receiving a higher pension decreased the probability of working after retirement, but the influence of a pension in Korea was much greater than that of Germany. In South Korea, the elderly workers, with lower education levels, tended to work off and on repeatedly because there is no proper security in both the labor market and pension system.
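
The multinomial logistic model used in such an analysis can be sketched compactly. The NumPy implementation below fits softmax regression by batch gradient descent on toy data; the variables are stand-ins, not the CNEF covariates, pension terms, or interaction effects used in the study.

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, steps=5000):
    """Fit multinomial (softmax) logistic regression by gradient descent.
    X: (n, d) design matrix (include a column of ones for the intercept);
    y: (n,) integer class labels in [0, n_classes)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(steps):
        Z = X @ W
        Z -= Z.max(axis=1, keepdims=True)          # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)          # class probabilities
        W -= lr * (X.T @ (P - Y)) / n              # cross-entropy gradient step
    return W

def predict(W, X):
    return np.argmax(X @ W, axis=1)
```

An interaction term such as occupation x education enters simply as an extra product column in the design matrix X.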

  9. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding.

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
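
The Rice (Golomb-Rice) code at the heart of the decompression step splits each value into a unary quotient and a k-bit binary remainder. A pure-Python sketch of that basic split follows; the CCSDS standard used for space missions adds predictive preprocessing and adaptive mode selection not shown here.

```python
def rice_encode(values, k):
    """Rice-encode nonnegative integers: for each value, a unary quotient
    (q ones, then a terminating 0) followed by a k-bit binary remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]
        bits += [(r >> (k - 1 - i)) & 1 for i in range(k)]
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode: read `count` values back out of the bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i]:                 # unary part: count leading ones
            q, i = q + 1, i + 1
        i += 1                         # skip the terminating zero
        r = 0
        for _ in range(k):             # fixed-width binary remainder
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out
```

The bit-serial dependency in the unary part is what makes high-rate Rice decoding nontrivial to parallelize, and hence an interesting GPGPU workload.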

  11. A process for selection and training of super-users for ERP implementation projects

    DEFF Research Database (Denmark)

    Danielsen, Peter; Sandfeld Hansen, Kenneth; Helt, Mads

    2017-01-01

    -users in practice. To address this research gap, we analyze the case of an ERP implementation program at a large manufacturing company. We combine Katz’s widely accepted skill measurement model with the process observed in practice to describe and test a model of super-user selection and training. The resulting...... model contains a systematic process of super-user development and highlights the specific skillsets required in different phases of the selection and training process. Our results from a comparative assessment of management expectations and super-user skills in the ERP program show that the model can...

  12. An ontological knowledge based system for selection of process monitoring and analysis tools

    DEFF Research Database (Denmark)

    Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    monitoring and analysis tools for a wide range of operations has made their selection a difficult, time consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools......, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one...... procedures has been developed to retrieve the data/information stored in the knowledge base....

  13. Direct Slicing Based on Material Performance and Process Parameters for Selective Laser Sintering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Direct slicing from CAD models to generate sectional contours of the part to be sintered for Selective Laser Sintering (SLS) may overcome inherent disadvantages of using a stereolithography (STL) format. In this paper, a direct slicing procedure is proposed for Selective Laser Sintering based on material performance and process parameters. Slicing thickness depends on the 3D geometric model, material performance and process parameters. The relationship among material performance, process parameters and the largest slicing thickness is established using analysis of a sintering temperature field. A dynamic linked library is developed to realize direct slicing from a CAD model.

  14. Variation in chlorophyll content per unit leaf area in spring wheat and implications for selection in segregating material.

    Directory of Open Access Journals (Sweden)

    John Hamblin

Full Text Available Reduced levels of leaf chlorophyll content per unit leaf area in crops may be of advantage in the search for higher yields. Possible reasons include better light distribution in the crop canopy and less photochemical damage to leaves absorbing more light energy than required for maximum photosynthesis. Reduced chlorophyll may also reduce the heat load at the top of the canopy, reducing the water required to cool leaves. Chloroplasts are nutrient rich, and reducing their number may increase the nutrients available for growth and development. Determining whether this hypothesis has any validity in spring wheat requires an understanding of genotypic differences in leaf chlorophyll content per unit area in diverse germplasm. This was measured with a SPAD 502 meter as SPAD units. The study was conducted in a series of environments involving up to 28 genotypes, mainly spring wheat. In general, substantial and repeatable genotypic variation was observed. Consistent SPAD readings were recorded for different sampling positions on leaves, between different leaves on a single plant, between different plants of the same genotype, and between different genotypes grown in the same or different environments. Plant nutrition affected SPAD units in nutrient-poor environments. Wheat genotypes DBW 10 and Transfer were identified as having consistent and contrasting high and low average SPAD readings of 52 and 32 units, respectively, and a methodology to allow selection in segregating populations has been developed.

  15. Using real time process measurements to reduce catheter related bloodstream infections in the intensive care unit

    Science.gov (United States)

    Wall, R; Ely, E; Elasy, T; Dittus, R; Foss, J; Wilkerson, K; Speroff, T

    2005-01-01

    

Problem: Measuring a process of care in real time is essential for continuous quality improvement (CQI). Our inability to measure the process of central venous catheter (CVC) care in real time prevented CQI efforts aimed at reducing catheter related bloodstream infections (CR-BSIs) from these devices. Design: A system was developed for measuring the process of CVC care in real time. We used these new process measurements to continuously monitor the system, guide CQI activities, and deliver performance feedback to providers. Setting: Adult medical intensive care unit (MICU). Key measures for improvement: Measured process of CVC care in real time; CR-BSI rate and time between CR-BSI events; and performance feedback to staff. Strategies for change: An interdisciplinary team developed a standardized, user friendly nursing checklist for CVC insertion. Infection control practitioners scanned the completed checklists into a computerized database, thereby generating real time measurements for the process of CVC insertion. Armed with these new process measurements, the team optimized the impact of a multifaceted intervention aimed at reducing CR-BSIs. Effects of change: The new checklist immediately provided real time measurements for the process of CVC insertion. These process measures allowed the team to directly monitor adherence to evidence-based guidelines. Through continuous process measurement, the team successfully overcame barriers to change, reduced the CR-BSI rate, and improved patient safety. Two years after the introduction of the checklist the CR-BSI rate remained at a historic low. Lessons learnt: Measuring the process of CVC care in real time is feasible in the ICU. When trying to improve care, real time process measurements are an excellent tool for overcoming barriers to change and enhancing the sustainability of efforts. To continually improve patient safety, healthcare organizations should continually measure their key clinical processes in real
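One of the key measures above, time between CR-BSI events, is a standard rare-event (g-chart) statistic: as care improves, the gaps between consecutive infections lengthen. A minimal sketch (the function name and dates are illustrative, not from the study):

```python
from datetime import date

def days_between_events(event_dates):
    """Gaps in days between consecutive adverse events (g-chart idea):
    a run of increasing gaps suggests the infection rate is falling."""
    ds = sorted(event_dates)
    return [(b - a).days for a, b in zip(ds, ds[1:])]
```

Plotting these gaps over time, rather than a quarterly rate, gives staff much earlier feedback when events are rare.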

  16. A process for selection and training of super-users for ERP implementation projects

    DEFF Research Database (Denmark)

    Danielsen, Peter; Sandfeld Hansen, Kenneth; Helt, Mads

    2017-01-01

    The concept of super-users as a means to facilitate ERP implementation projects has recently taken a foothold in practice, but is still largely overlooked in research. In particular, little is known about the selection and training processes required to successfully develop skilled super......-users in practice. To address this research gap, we analyze the case of an ERP implementation program at a large manufacturing company. We combine Katz’s widely accepted skill measurement model with the process observed in practice to describe and test a model of super-user selection and training. The resulting...... model contains a systematic process of super-user development and highlights the specific skillsets required in different phases of the selection and training process. Our results from a comparative assessment of management expectations and super-user skills in the ERP program show that the model can...

  17. Computational unit for non-contact photonic system

    Science.gov (United States)

    Kochetov, Alexander V.; Skrylev, Pavel A.

    2005-06-01

Requirements for the unified computational unit for a non-contact photonic system have been formulated. Estimates of central processing unit performance and required memory size are calculated. A specialized microcontroller optimal for use as the central processing unit has been selected, and memory chip types have been determined for the system. The computational unit consists of a central processing unit based on the selected microcontroller, NVRAM memory, a receiving circuit, SDRAM memory, and control and power circuits. It functions as a performing unit that calculates the required parameters of rail track.

  18. Smoking selectivity among Mexican immigrants to the United States using binational data, 1999-2012.

    Science.gov (United States)

    Fleischer, Nancy L; Ro, Annie; Bostean, Georgiana

    2017-04-01

    Mexican immigrants have lower smoking rates than US-born Mexicans, which some scholars attribute to health selection-that individuals who migrate are healthier and have better health behaviors than their non-migrant counterparts. Few studies have examined smoking selectivity using binational data and none have assessed whether selectivity remains constant over time. This study combined binational data from the US and Mexico to examine: 1) the extent to which recent Mexican immigrants (Encuesta Nacional de Salud and 2012 Encuesta Nacional de Salud y Nutrición. Multinomial logistic regressions, stratified by gender, predicted smoking status (current, former, never) by migration status. At both time points, we found lower overall smoking prevalence among recent US immigrants compared to non-migrants for both genders. Moreover, from the regression analyses, smoking selectivity remained constant between 2000 and 2012 among men, but increased among women. These findings suggest that Mexican immigrants are indeed selected on smoking compared to their non-migrating counterparts, but that selectivity is subject to smoking conditions in the sending countries and may not remain constant over time. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. LAG SELECTION OF THE AUGMENTED KAPETANIOS-SHIN-SNELL NONLINEAR UNIT ROOT TEST

    Directory of Open Access Journals (Sweden)

    Jen-Je Su

    2013-01-01

Full Text Available We provide simulation evidence that sheds light on several size and power issues relating to lag selection for the augmented (nonlinear) KSS test. Two lag selection approaches are considered: the modified AIC (MAIC) approach and a sequential general-to-specific (GS) testing approach. Either approach can be used to select the optimal lag based on either the augmented linear Dickey-Fuller test or the augmented nonlinear KSS test, resulting in four possible selection methods, namely, MAIC, GS, NMAIC and NGS. The evidence suggests that the asymptotic critical values of the KSS test tend to result in over-sizing if the (N)GS method is used and under-sizing if the (N)MAIC method is utilised. Thus, we recommend that the critical values be generated from finite samples. We also find evidence that the (N)MAIC method has less size distortion than the (N)GS method, suggesting that the MAIC-based KSS test is preferred. Interestingly, the MAIC-based KSS test with lag selection based on the linear ADF regression is generally more powerful than the test with lag selection based on the nonlinear version.
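An information-criterion lag search of the kind compared above can be sketched for the augmented KSS regression, which regresses the differenced series on a cubic term in the lagged level plus p lagged differences. This hedged sketch uses plain AIC on a common effective sample; the paper's MAIC adds a further penalty term, and the function name is illustrative:

```python
import numpy as np

def kss_aic_lag(y, p_max):
    """Pick augmentation lag p for the KSS regression
        dy_t = delta * y_{t-1}**3 + sum_i b_i * dy_{t-i} + e_t
    by minimizing AIC over p = 0..p_max (plain AIC, not the MAIC variant)."""
    y = np.asarray(y, float)
    dy = np.diff(y)
    best = None
    for p in range(p_max + 1):
        t0 = p_max                      # common effective sample start,
        Y = dy[t0:]                     # so AICs are comparable across p
        cols = [y[t0:-1] ** 3]          # nonlinear KSS regressor y_{t-1}^3
        for i in range(1, p + 1):
            cols.append(dy[t0 - i:-i])  # lagged differences dy_{t-i}
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        T, k = len(Y), X.shape[1]
        aic = T * np.log(resid @ resid / T) + 2 * k
        if best is None or aic < best[0]:
            best = (aic, p)
    return best[1]
```

A GS search would instead start at p_max and drop lags while the highest-order coefficient is insignificant; both approaches plug the chosen p into the same test regression.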

  20. Relations between frequency selectivity, temporal fine-structure processing, and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Dau, Torsten

    2009-01-01

Frequency selectivity, temporal fine-structure (TFS) processing, and speech reception were assessed for six normal-hearing (NH) listeners, ten sensorineurally hearing-impaired (HI) listeners with similar high-frequency losses, and two listeners with an obscure dysfunction (OD). TFS processing...... was investigated at low frequencies in regions of normal hearing, through measurements of binaural masked detection, tone lateralization, and monaural frequency modulation (FM) detection. Lateralization and FM detection thresholds were measured in quiet and in background noise. Speech reception thresholds were...... and binaural TFS-processing deficits in the HI listeners, no relation was found between TFS processing and frequency selectivity. The effect of noise on TFS processing was not larger for the HI listeners than for the NH listeners. Finally, TFS-processing performance was correlated with speech reception......

  1. Climate change impacts on extreme temperature mortality in select metropolitan areas of the United States

    Science.gov (United States)

    Projected mortality from climate change-driven impacts on extremely hot and cold days increases significantly over the 21st century in a large group of United States Metropolitan Statistical Areas. Increases in projected mortality from more hot days are greater than decreases in ...

  2. Selection and Training of Teachers of the Gifted in the United States.

    Science.gov (United States)

    Addison, Linda

    1983-01-01

    The author examines characteristics of effective teachers of gifted students in the United States by reviewing research on topics of mentorship, trait theory, behaviorism, creativity, change agents, and leadership; cites accepted teacher competencies; describes a model teacher education program; and notes federal, state, and local efforts to…

  3. Puerto Ricans in Continental United States: A Bibliography, Selected and Annotated.

    Science.gov (United States)

    Velazquez, Rene

    This annotated bibliography contains approximately 900 citations of material written about Puerto Ricans residing in the mainland United States. Also included is a section listing published bibliographies that cover literature on Puerto Rico and Puerto Ricans. Citations within each section are listed in alphabetical order by author or sponsoring…

  4. Continuing Education Unit; Selected Conference Proceedings (Springfield, Illinois, September 19-20, 1974)

    Science.gov (United States)

    Illinois Community Coll. Board, Springfield.

    The conference proceedings, dealing with the Continuing Education Unit (CEU), contain the following papers. Introduction, David L. Ferris; The History and Philosophy Behind the CEU, William L. Turner; The Iowa Experience--From the State, Don McGuire; The Iowa Experience--From the University, Jack Huttig; A Computer Based CEU Retrieval System;…

  6. Toward a formal verification of a floating-point coprocessor and its composition with a central processing unit

    Science.gov (United States)

    Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.

    1991-01-01

Discussed here is work to formally specify and verify a floating-point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit, used to communicate with the CPU, and the arithmetic processing unit, used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.

  7. Optogenetic Stimulation of Lateral Amygdala Input to Posterior Piriform Cortex Modulates Single-Unit and Ensemble Odor Processing.

    Science.gov (United States)

    Sadrian, Benjamin; Wilson, Donald A

    2015-01-01

    Olfactory information is synthesized within the olfactory cortex to provide not only an odor percept, but also a contextual significance that supports appropriate behavioral response to specific odor cues. The piriform cortex serves as a communication hub within this circuit by sharing reciprocal connectivity with higher processing regions, such as the lateral entorhinal cortex and amygdala. The functional significance of these descending inputs on piriform cortical processing of odorants is currently not well understood. We have employed optogenetic methods to selectively stimulate lateral and basolateral amygdala (BLA) afferent fibers innervating the posterior piriform cortex (pPCX) to quantify BLA modulation of pPCX odor-evoked activity. Single unit odor-evoked activity of anesthetized BLA-infected animals was significantly modulated compared with control animal recordings, with individual cells displaying either enhancement or suppression of odor-driven spiking. In addition, BLA activation induced a decorrelation of odor-evoked pPCX ensemble activity relative to odor alone. Together these results indicate a modulatory role in pPCX odor processing for the BLA complex. This interaction could contribute to learned changes in PCX activity following associative conditioning, as well as support alternate patterns of odor processing that are state-dependent.

  8. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    Science.gov (United States)

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
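How far such GPU speedups can scale is bounded by the serial fraction of the workload, which is one way to read the abstract's "depending on computational tasks" caveat. A hedged sketch of the standard Amdahl's-law bound (the model is an assumption of ours, not something the paper applies):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when a fraction p of the work
    runs n_workers times faster and the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)
```

For example, a 22x overall speedup is impossible unless less than 1/22 (about 4.5%) of the runtime remains serial, no matter how many GPU cores are available.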

  9. 40 CFR Appendix Xiii to Part 266 - Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units XIII Appendix XIII to Part 266 Protection of Environment... XIII to Part 266—Mercury Bearing Wastes That May Be Processed in Exempt Mercury Recovery Units These...

  10. 40 CFR Table 6 to Subpart Ppp of... - Process Vents From Continuous Unit Operations-Monitoring, Recordkeeping, and Reporting Requirements

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 11 2010-07-01 2010-07-01 true Process Vents From Continuous Unit Operations-Monitoring, Recordkeeping, and Reporting Requirements 6 Table 6 to Subpart PPP of Part 63... Subpart PPP of Part 63—Process Vents From Continuous Unit Operations—Monitoring, Recordkeeping, and...

  11. Using Loop Heat Pipes to Minimize Survival Heater Power for NASA's Evolutionary Xenon Thruster Power Processing Units

    Science.gov (United States)

    Choi, Michael K.

    2017-01-01

    A thermal design concept of using propylene loop heat pipes to minimize survival heater power for NASA's Evolutionary Xenon Thruster power processing units is presented. It reduces the survival heater power from 183 W to 35 W per power processing unit. The reduction is 81%.

  12. Computer-aided tool for solvent selection in pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; K. Tula, Anjan; Gernaey, Krist V.

    In the pharmaceutical processes, solvents have a multipurpose role since different solvents can be used in different stages (such as chemical reactions, separations and purification) in the multistage active pharmaceutical ingredients (APIs) production process. The solvent swap and selection task......-aided framework with the objective to assist the pharmaceutical industry in gaining better process understanding. A software interface to improve the usability of the tool has been created also....

  13. Measuring process performance within healthcare logistics - a decision tool for selecting measuring technologies

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Jacobsen, Peter

    2015-01-01

Performance measurement can support the organization in improving the efficiency and effectiveness of logistical healthcare processes. Selecting the most suitable technologies is important to ensure data validity. A case study of the hospital cleaning process at a public Danish hospital...... was conducted. Monitoring tasks and ascertaining quality of work is difficult in such a process. Based on principal-agent theory, a set of decision indicators has been developed, and a decision framework for assessing technologies to enable performance measurement has been proposed....

  14. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

Full Text Available As an important indicator of flotation performance, froth texture is believed to be related to operational condition in the sulphur flotation process. A novel fault detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural feature more accurately than the grey level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty that different TUDs under various conditions cannot be compared using the traditional varying kernel basis. By transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. A threshold criterion determined by the TQ statistic based on the PCA model is then proposed to realize the performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
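The texture unit number described above follows the classic texture spectrum idea: each pixel's 3x3 neighbourhood is coded by comparing the eight neighbours to the center (less/equal/greater mapped to 0/1/2), giving a base-3 number in [0, 6561), and the TUD is the normalized histogram of those numbers. A hedged NumPy sketch; the neighbour ordering is a free choice, and the paper's exact convention may differ:

```python
import numpy as np

def texture_unit_distribution(img):
    """Texture unit distribution (texture spectrum):
    each interior pixel's 3x3 neighbourhood maps to a texture unit
    number in [0, 3**8); return the normalized histogram of those numbers."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]                       # interior (center) pixels
    # eight neighbours in a fixed clockwise order (a convention we assume)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    ntu = np.zeros_like(c, dtype=np.int64)
    for i, (dr, dc) in enumerate(offsets):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        e = np.where(nb < c, 0, np.where(nb == c, 1, 2))  # ternary code
        ntu += e * 3**i                       # base-3 texture unit number
    hist = np.bincount(ntu.ravel(), minlength=3**8).astype(float)
    return hist / hist.sum()
```

On a perfectly flat image every neighbour equals its center, so all mass lands on the single texture unit number sum(3**i for i in 0..7) = 3280; real froth images spread mass across many bins, and it is that shape the kernel-PCA monitoring scheme tracks.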

  15. Performance-Based Technology Selection Filter application report for Teledyne Wah Chang Albany Operable Unit Number One. INEL Buried Waste Integrated Demonstration

    Energy Technology Data Exchange (ETDEWEB)

Richardson, J.G.; Morrison, J.L.; Morneau, R.A.; O'Brien, M.C.; Rudin, M.J.

    1992-05-01

This report summarizes the application of the Performance-Based Technology Selection Filter (PBTSF) developed for the Idaho National Laboratory's Buried Waste Integrated Demonstration Program as applied to remediation activities conducted at the Teledyne Wah Chang Albany (TWCA) Superfund Site, Operable Unit One. The remedial action at the TWCA Operable Unit One consisted of solidification, excavation, transportation, and monocell disposal of the contents of two sludge ponds contaminated with various inorganic and organic compounds. Inorganic compounds included low levels of uranium and radium isotopes, as well as zirconium, hafnium, chromium, mercury, and nickel. Organic compounds included methylene chloride, 1,1,1-trichloroethane, 1,1-dichloroethane, tetrachloroethane, and hexachlorobenzene. Remediation began in June 1991 and was completed in November 1991. The TWCA Operable Unit One configuration option consisted of 15 functional subelements. Data were gathered on these subelements and end-to-end system operation to calculate numerical values for 28 system performance measures, which were then used to calculate a system performance score. An assessment was made of the availability and definitional clarity of these performance measures, the applicability of the PBTSF utility functions, and the rollup methodology. The PBTSF scoring function worked well, with few problems noted in data gathering, utility function normalization, and scoring calculation. The application of this process to an actual in situ treatment and excavation technical process option clarified the specific terms and bounds of the performance score functions, and identified one problem associated with the definition of the system boundary.

  16. Controlled Assembly of Heterobinuclear Sites on Mesoporous Silica: Visible Light Charge-Transfer Units with Selectable Redox Properties

    Energy Technology Data Exchange (ETDEWEB)

    Frei, Heinz; Han, Hongxian; Frei, Heinz

    2008-06-04

Mild synthetic methods are demonstrated for the selective assembly of oxo-bridged heterobinuclear units of the type Ti-O-Cr(III), Ti-O-Co(II), and Ti-O-Ce(III) on the mesoporous silica support MCM-41. One method takes advantage of the higher acidity and, hence, higher reactivity of titanol compared to silanol OH groups towards the Ce(III) or Co(II) precursor. The procedure avoids the customary use of strong base. The controlled assembly of the Ti-O-Cr system exploits the selective redox reactivity of one metal towards another (a Ti(III) precursor reacting with anchored Cr(VI) centers). The observed selectivity for linking a metal precursor to an already anchored partner versus formation of isolated centers ranges from a factor of six (Ti-O-Ce) to complete (Ti-O-Cr, Ti-O-Co). Evidence for oxo bridges and determination of the coordination environment of each metal center is based on K-edge EXAFS (Ti-O-Cr), L-edge absorption spectroscopy (Ce), and XANES measurements (Co, Cr). EPR, optical, FT-Raman and FT-IR spectroscopy furnish additional details on the oxidation state and coordination environment of the donor and acceptor metal centers. In the case of Ti-O-Cr, the integrity of the anchored group upon calcination (350 °C) and cycling of the Cr oxidation state is demonstrated. The binuclear units possess metal-to-metal charge-transfer transitions that absorb deep in the visible region. The flexible synthetic method for assembling the units opens up the use of visible light charge-transfer pumps featuring donor or acceptor metals with selectable redox potential.

  17. Simulation of abrasive water jet cutting process: Part 1. Unit event approach

    Science.gov (United States)

    Lebar, Andrej; Junkar, Mihael

    2004-11-01

    Abrasive water jet (AWJ) machined surfaces exhibit the texture typical of machining with high energy density beam processing technologies. It has a superior surface quality in the upper region and rough surface in the lower zone with pronounced texture marks called striations. The nature of the mechanisms involved in the domain of AWJ machining is still not well understood but is essential for AWJ control improvement. In this paper, the development of an AWJ machining simulation is reported on. It is based on an AWJ process unit event, which in this case represents the impact of a particular abrasive grain. The geometrical characteristics of the unit event are measured on a physical model of the AWJ process. The measured dependences and the proposed model relations are then implemented in the AWJ machining process simulation. The obtained results are in good agreement in the engraving regime of AWJ machining. To expand the validity of the simulation further, a cellular automata approach is explored in the second part of the paper.
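The unit-event idea above, building up the machined surface from the material removed by individual abrasive-grain impacts, can be caricatured in a few lines. This toy sketch assumes a parabolic crater shape and made-up geometry parameters, purely to illustrate how a profile emerges from superposed unit events; it is not the paper's measured unit-event model:

```python
import numpy as np

def awj_engrave(width, n_impacts, crater_radius, crater_depth, seed=0):
    """Toy unit-event simulation of AWJ engraving: each abrasive-grain
    impact removes a small parabolic crater from a 1-D surface profile;
    the profile is the superposition of many such unit events."""
    rng = np.random.default_rng(seed)
    surface = np.zeros(width)           # depth removed below original surface
    x = np.arange(width)
    for _ in range(n_impacts):
        cx = rng.uniform(0, width)      # random impact site across the kerf
        d2 = (x - cx) ** 2
        crater = crater_depth * np.maximum(0.0, 1.0 - d2 / crater_radius**2)
        surface += crater               # removed material accumulates
    return surface
```

Replacing the assumed crater with the geometry measured on the physical model, and making crater size depend on local depth, is what turns this caricature into the engraving-regime simulation the paper describes.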

  18. Factors associated with the process of adaptation among Pakistani adolescent females living in United States.

    Science.gov (United States)

    Khuwaja, Salma A; Selwyn, Beatrice J; Mgbere, Osaro; Khuwaja, Alam; Kapadia, Asha; McCurdy, Sheryl; Hsu, Chiehwen E

    2013-04-01

This study explored the post-migration experiences of recently migrated Pakistani Muslim adolescent females residing in the United States. In-depth, semi-structured interviews were conducted with thirty Pakistani Muslim adolescent females between the ages of 15 and 18 years living with their families in Houston, Texas. Data obtained from the interviews were evaluated using discourse analysis to identify major recurring themes. Participants discussed factors associated with the process of adaptation to American culture. The results revealed that the main factors associated with the adaptation process included positive motivation for migration, family bonding, social support networks, inter-familial communication, the aspiration of adolescents to learn other cultures, availability of English-as-a-second-language programs, participation in community rebuilding activities, faith practices, English proficiency, peer pressure, and inter-generational conflicts. This study provides much needed information on factors associated with the adaptation process of Pakistani Muslim adolescent females in the United States. The results have important implications for improving the adaptation process for this group and offer potential directions for intervention and counseling services.

  19. Low cost solar array project production process and equipment task: A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

Several major modifications were made to the design presented at the PDR. The frame was deleted in favor of a "frameless" design which will provide a substantially improved cell packing factor. Potential shaded-cell damage resulting from operation into a short circuit can be eliminated by a change in the cell series/parallel electrical interconnect configuration. The baseline process sequence defined for the MEPSDU was refined, and equipment design and specification work was completed. SAMICS cost analysis work accelerated; Format A's were prepared and computer simulations completed. Design work on the automated cell interconnect station was focused on bond technique selection experiments.

  20. Groundwater geochemical and selected volatile organic compound data, Operable Unit 1, Naval Undersea Warfare Center, Division Keyport, Washington, June 2011

    Science.gov (United States)

    Huffman, Raegan L.; Frans, L.M.

    2012-01-01

Previous investigations indicate that concentrations of chlorinated volatile organic compounds are substantial in groundwater beneath the 9-acre former landfill at Operable Unit 1, Naval Undersea Warfare Center, Division Keyport, Washington. Phytoremediation combined with ongoing natural attenuation processes was the preferred remedy selected by the U.S. Navy, as specified in the Record of Decision for the site. The U.S. Navy planted two hybrid poplar plantations on the landfill in spring 1999 to remove and to control the migration of chlorinated volatile organic compounds in shallow groundwater. The U.S. Geological Survey has continued to monitor groundwater geochemistry to ensure that conditions remain favorable for contaminant biodegradation as specified in the Record of Decision. This report presents groundwater geochemical and selected volatile organic compound data collected at Operable Unit 1 by the U.S. Geological Survey during June 20-22, 2011, in support of long-term monitoring for natural attenuation. In 2011, groundwater samples were collected from 13 wells and 9 piezometers. Samples from all wells and piezometers were analyzed for redox-sensitive constituents and dissolved gases, and samples from 5 of the 13 wells and all piezometers also were analyzed for chlorinated volatile organic compounds. Concentrations of redox-sensitive constituents measured in 2011 were consistent with previous years, with dissolved oxygen concentrations all at 0.4 milligram per liter or less; little to no detectable nitrate; abundant dissolved manganese, iron, and methane; and commonly detected sulfide. The reductive dechlorination byproducts - methane, ethane, and ethene - were either not detected in samples collected from the upgradient wells in the landfill and the upper aquifer beneath the northern phytoremediation plantation or were detected at concentrations less than those measured in 2010. Chlorinated volatile organic compound concentrations in 2011 at most piezometers