WorldWideScience

Sample records for unit process designs

  1. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems Division (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge of the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA), and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental cleanup and characterization, including underwater, buried waste, underground storage tank (UST), and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  2. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  3. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  4. Experience in design and startup of distillation towers in primary crude oil processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Y.N.; D'yakov, V.G.; Mamontov, G.V.; Sheinman, V.A.; Ukhin, V.V.

    1985-11-01

    This paper describes a refinery in the city of Mathura, India, with a capacity of 7 million metric tons of crude per year, designed and constructed to include the following units: AVT for primary crude oil processing; catalytic cracking; visbreaking; asphalt; and other units. A diagram of the atmospheric tower with stripping sections is shown, and the stabilizer tower is illustrated. The startup and operation of the AVT and visbreaking units are described, and they demonstrate the high reliability and efficiency of the equipment.

  5. Free and open source simulation tools for the design of power processing units for photovoltaic systems

    Directory of Open Access Journals (Sweden)

    Sergio Morales-Hernández

    2015-06-01

    Full Text Available Renewable energy sources, including solar photovoltaics, require electronic circuits that serve as the interface between the transducer device and the device or system that uses the energy. Moreover, the energy efficiency and the cost of the system can be compromised if such an electronic circuit is not designed properly. Given that the electrical characteristics of photovoltaic devices are nonlinear and that the most efficient electronic circuits for power processing are naturally discontinuous, a detailed dynamic analysis is required to optimize the design. This analysis should be supported by computer simulation tools. In this paper, a comparison between two software tools for dynamic system simulation is performed to determine their usefulness in the design process of photovoltaic systems, mainly with respect to the power processing units. Using a photovoltaic system for battery charging as a case study, the Scicoslab tool was determined to be the most suitable.
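    As a rough illustration of the kind of nonlinear characteristic such simulation tools must handle, below is a minimal single-diode photovoltaic model with a brute-force maximum-power-point search in Python. All parameter values and function names are arbitrary assumptions for illustration, not taken from the paper.

```python
import math

def pv_current(v, i_ph=5.0, i_0=1e-9, n=1.3, vt=0.02585, cells=36):
    """Ideal single-diode model (series/shunt resistance neglected):
    I = Iph - I0 * (exp(V / (n * Ns * Vt)) - 1)."""
    return i_ph - i_0 * (math.exp(v / (n * cells * vt)) - 1.0)

def max_power_point(v_max=25.0, steps=2500):
    """Scan the I-V curve and return (voltage, power) at maximum power."""
    best_v, best_p = 0.0, 0.0
    for k in range(steps + 1):
        v = v_max * k / steps
        p = v * pv_current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

    The exponential diode term is what makes the I-V curve nonlinear; a power processing unit's control loop must track the maximum-power voltage as irradiance and temperature shift the curve.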

  6. The Design Process of a Board Game for Exploring the Territories of the United States

    Directory of Open Access Journals (Sweden)

    Mehmet Kosa

    2017-06-01

    Full Text Available The paper reports the design experience of a board game with an educational aspect, which takes place on the locations of the states and territories of the United States. Based on a territorial acquisition dynamic, the goal was to articulate the design process of a board game that provides information for individuals who are willing to learn the locations of the U.S. states by playing a game. The game was developed using an iterative design process based on focus group studies and brainstorming sessions. A mechanic-driven design approach was adopted instead of a theme- or setting-driven alternative, and a relatively abstract game was developed. The initial design idea was formed and refined according to player feedback. The paper details the play-testing sessions conducted and documents the design experience from a qualitative perspective. Our preliminary results suggest that the initial design is moderately balanced and, despite the lack of quantitative evidence, our subjective observations indicate that participants’ knowledge about the locations of states was improved in an entertaining and interactive way.

  7. Accelerated multidimensional radiofrequency pulse design for parallel transmission using concurrent computation on multiple graphics processing units.

    Science.gov (United States)

    Deng, Weiran; Yang, Cungeng; Stenger, V Andrew

    2011-02-01

    Multidimensional radiofrequency (RF) pulses are of current interest because of their promise for improving high-field imaging and for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multidimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multidimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multidimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of 20 for standard eight-transmitter two-dimensional spiral RF pulses with a 64 × 64 excitation resolution and a 10-μsec dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. Copyright © 2010 Wiley-Liss, Inc.

  8. HAL/SM system functional design specification. [systems analysis and design analysis of central processing units

    Science.gov (United States)

    Ross, C.; Williams, G. P. W., Jr.

    1975-01-01

    The functional design of a preprocessor, and subsystems is described. A structure chart and a data flow diagram are included for each subsystem. Also a group of intermodule interface definitions (one definition per module) is included immediately following the structure chart and data flow for a particular subsystem. Each of these intermodule interface definitions consists of the identification of the module, the function the module is to perform, the identification and definition of parameter interfaces to the module, and any design notes associated with the module. Also described are compilers and computer libraries.

  9. Design Processes

    DEFF Research Database (Denmark)

    Ovesen, Nis

    2009-01-01

    …advantages and challenges of agile processes in mobile software and web businesses are identified. The applicability of these agile processes is discussed in regards to design education and product development in the domain of Industrial Design, and is briefly seen in relation to the concept of dromology. … Inspiration for most research and optimisation of design processes still seems to focus within the narrow field of traditional design practice. The focus in this study turns to associated businesses of the design professions in order to learn from their development processes. Through interviews…

  10. Engineering Encounters: The Cat in the Hat Builds Satellites. A Unit Promoting Scientific Literacy and the Engineering Design Process

    Science.gov (United States)

    Rehmat, Abeera P.; Owens, Marissa C.

    2016-01-01

    This column presents ideas and techniques to enhance your science teaching. This month's issue shares information about a unit promoting scientific literacy and the engineering design process. The integration of engineering with scientific practices in K-12 education can promote creativity, hands-on learning, and an improvement in students'…

  11. Parallel design of JPEG-LS encoder on graphics processing units

    Science.gov (United States)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth, and many successful GPU applications to high-performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed sequentially. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the Compute Unified Device Architecture (CUDA) programming technology. We use the block-parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance, with a 26.3x speedup over the original CPU code.
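    Of the CUDA techniques the abstract names, the parallel prefix sum is the easiest to sketch. Below is a minimal CPU emulation in Python of the Hillis-Steele scan pattern, where each `while` iteration stands for one parallel GPU pass; it is illustrative only and not the paper's CUDA code.

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix sum. Each pass doubles the stride;
    on a GPU, every element of a pass would update concurrently."""
    out = list(xs)
    step = 1
    while step < len(out):
        # one "parallel" pass: element i adds the value `step` positions back
        out = [out[i] + out[i - step] if i >= step else out[i]
               for i in range(len(out))]
        step *= 2
    return out
```

    For n elements this takes log2(n) passes, which is why encoders use a scan to turn per-block output lengths into write offsets for the compressed stream.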

  12. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD Branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code, make modifications, and distribute them to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics for computers; however, in recent years GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, they would require much less time to complete. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.

  13. Signal processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J.

    1983-01-01

    The architecture of the signal processing unit (SPU) comprises a ROM connected to a program bus, and an input-output bus connected to a data bus and registers through a pipeline multiplier-accumulator (PMAC) and a pipeline arithmetic logic unit (PALU), each associated with a random access memory (RAM1, RAM2). The system clock frequency is 20 MHz. The PMAC is further detailed and has a capability of 20 million operations per second. There is also a block diagram for the PALU, showing the interconnections between the register block (RBL), bus separator (BS), register (REG), shifter (SH) and combination unit. The first and second RAMs have formats of 64*16 and 32*32 bits, respectively. Further data: a 5-V power supply and 2.5-micron n-channel silicon-gate MOS technology with about 50,000 transistors.
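    A pipelined multiplier-accumulator performs one multiply-add per clock, which is how a 20 MHz clock yields the quoted 20 million operations per second. A minimal Python sketch of the MAC-per-tap FIR filtering such a unit typically executes (illustrative only; the function and data are ours, not the SPU's firmware):

```python
def fir_mac(samples, coeffs):
    """FIR filter expressed as repeated multiply-accumulate steps -
    the inner `acc += c * x` is what a pipelined MAC does once per clock."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # one MAC operation
        out.append(acc)
    return out
```

    A hardware MAC keeps `acc` in a wide accumulator register so the sum of products never leaves the pipeline until the last tap.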

  14. Design of hydraulic recuperation unit

    Directory of Open Access Journals (Sweden)

    Jandourek Pavel

    2016-01-01

    Full Text Available This article deals with the design and measurement of a hydraulic recuperation unit. The recuperation unit consists of a radial turbine and an axial pump coupled on the same shaft. The speed of the shaft with impellers is 6000 rpm. For economic reasons, the design of the recuperation unit uses commercially manufactured propellers.

  15. Unit Testing Using Design by Contract and Equivalence Partitions, Extreme Programming and Agile Processes in Software Engineering

    DEFF Research Database (Denmark)

    Madsen, Per

    2003-01-01

    Extreme Programming [1], and in particular the idea of Unit Testing, can improve the quality of the testing process. But programmers still need to do a lot of tiresome manual work writing test cases. If the programmers could get some automatic tool support enforcing the quality of test cases, then the overall quality of the software would improve significantly.
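    Equivalence partitioning, as named in the title, divides a function's input domain into classes expected to behave alike, testing one representative per class plus the boundaries, while Design by Contract supplies machine-checkable preconditions. A hedged Python sketch of the combination (the `shipping_cost` function and its partitions are hypothetical, invented for illustration):

```python
def shipping_cost(weight_kg):
    """Hypothetical function under test, with a contract-style precondition."""
    assert weight_kg > 0, "precondition: weight must be positive"
    if weight_kg <= 1.0:
        return 5.0
    if weight_kg <= 10.0:
        return 9.0
    return 20.0

# One representative test case per equivalence partition, plus boundary values.
partitions = {
    "light (0, 1]": (0.5, 5.0),
    "medium (1, 10]": (5.0, 9.0),
    "heavy (10, inf)": (25.0, 20.0),
    "boundary 1.0": (1.0, 5.0),
    "boundary 10.0": (10.0, 9.0),
}
for name, (weight, expected) in partitions.items():
    assert shipping_cost(weight) == expected, name
```

    An automatic tool of the kind the abstract envisions could derive the partitions from the contract itself and flag test suites that leave a partition or boundary uncovered.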

  17. Scale-up of mild gasification to a process development unit: MILDGAS 24-ton/day PDU design report. Final report, November 1991--July 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    From November 1991 to April 1996, Kerr-McGee Coal Corporation (K-M Coal) led a project to develop the Institute of Gas Technology (IGT) Mild Gasification (MILDGAS) process for near-term commercialization. The specific objectives of the program were to: design, construct, and operate a 24-ton/day adiabatic process development unit (PDU) to obtain process performance data suitable for further design scale-up; obtain large batches of coal-derived co-products for industrial evaluation; prepare a detailed design of a demonstration unit; and develop technical and economic plans for commercialization of the MILDGAS process. The project team for the PDU development program consisted of: K-M Coal, IGT, Bechtel Corporation, Southern Illinois University at Carbondale (SIUC), General Motors (GM), Pellet Technology Corporation (PTC), LTV Steel, Armco Steel, Reilly Industries, and Auto Research.

  18. WellStar Paulding Hospital intensive care unit case study: achieving a research-based, patient-centered design using a collaborative process.

    Science.gov (United States)

    Burns, Georgeann B; Hogue, Vicky

    2014-01-01

    This article describes the processes and tools used by WellStar Paulding Hospital to plan and design a new intensive care unit (ICU) as part of a 108-bed replacement hospital on a new site. Seeking to create a culture of safety centered around patient care, quality, and efficiency, the team used multiple external resources to increase their effectiveness as participants in the design process and to ensure that the new ICU achieves the functional performance goals identified at the beginning of planning and design. Specific focus on evidence-based design was assisted through participation in the Center for Health Design's Pebble Project process as well as the Joint Commission International Safe Health Design Learning Academy Pilot Program.

  19. ECO DESIGN IN DESIGN PROCESS

    Directory of Open Access Journals (Sweden)

    PRALEA Jeni

    2014-05-01

    Full Text Available Eco-design is a new domain, required by new trends and existing concerns worldwide, generated by the necessity of adopting new design principles. These principles require the designer to provide a friendly relationship between the concept created, the environment, and consumption. This "friendly" relationship should be valid both at present and in the future, generating new opportunities for the product, product components, or the materials from which it was made. Awareness by the designer of the importance of this new trend permits the establishment of concepts that have as their objective the protection of present values and securing the legacy of future generations. Eco-design, by its principles, is involved in the design process from the early stage of product design. The priority objective of designers is to reduce negative effects on the environment throughout the entire life cycle and after the product is taken out of use. The main aspects of eco-design concern extending product exploitation, making better use of materials, and reducing waste emissions. The design process in the "eco" domain must start by selecting the function of the concept, the materials, and the technological processes, determining the macro- and micro-geometry of the product through an analysis that involves optimizing and streamlining the product. This paper presents the design process for a cross-sports footwear concept, built on the principles of eco-design.

  20. Design of Dolos Armour Units

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Zhou

    1993-01-01

    The slender, complex types of armour units, such as Tetrapods and Dolosse, are widely used. Many of the recent failures of such rubble mound breakwaters revealed that there is an imbalance between the strength (structural integrity) of the units and the hydraulic stability (resistance to displacements) of the armour layers. The paper deals only with dolos armour and presents the first design diagrams and formulae where stresses from static, quasistatic and impact loads are implemented as well as the hydraulic stability. The dolos is treated as a multi-shape unit where the thickness can be adjusted…

  1. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser, and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). The PPU architecture will be based on state-of-the-art spaceflight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instrument suite. The approach of a common processing unit for particle instruments is very important for enabling efficient management of correlative plasma measurements, and it also facilitates interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies, giving the possibility to optimize and save spacecraft resources.

  2. The Critical Design Process

    DEFF Research Database (Denmark)

    Brunsgaard, Camilla; Knudstrup, Mary-Ann; Heiselberg, Per

    2014-01-01

    …within the Danish tradition of architecture and construction. The objective of the research presented in this paper is to compare the different design processes behind the making of passive houses in a Danish context. We evaluated the process with regard to the integrated and traditional design process. … Data analysis showed that the majority of the consortiums worked in an integrated manner, though there was room for improvement. Additionally, the paper discusses the challenges of implementing the integrated design process in practice and suggests ways of overcoming some of the barriers. In doing so…

  3. Low cost balancing unit design

    Science.gov (United States)

    Golembiovsky, Matej; Dedek, Jan; Slanina, Zdenek

    2017-06-01

    This article deals with the design of a low-cost balancing system consisting of battery balancing units, accumulator pack units, and a coordinator unit with an interface to a higher level of the battery management system (BMS). This solution allows a decentralized mode of operation, and the aim of this work is the implementation of controlling and diagnostic mechanisms in an electric scooter project realized at the Technical University of Ostrava. In today's world of electromobility and off-grid battery systems, it is important to seek the optimal balance between functionality and the economics of the BMS, the electronics that manage the secondary cells of battery packs. There were numerous sophisticated but not very practical BMS models in the past, such as centralized systems or standalone balance modules for individual cells. This article aims at the development of standalone balance modules which are able to communicate with the coordinator, adjust their parameters, and ensure their cells' safety in case of a communication failure. With the current worldwide cost-cutting trend in mind, the emphasis was put on the lowest possible price for individual components. The article is divided into two major parts: the first is the design of the power electronics, with emphasis on quality, safety (cooling), and cost; the second describes the development of a communication interface with reliability and cost in mind. The article contains numerous graphs from practical measurements. The outcome of the work and its possible future are outlined in the conclusion.
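    To illustrate the kind of decision a standalone balance module makes, here is a minimal sketch of passive balancing logic in Python: any cell sufficiently above the weakest cell gets its bleed resistor enabled. The function name and threshold are our assumptions for illustration, not values from the article.

```python
def balance_commands(cell_voltages, threshold=0.01):
    """Passive balancing decision: return a per-cell list of booleans,
    True meaning 'enable this cell's bleed resistor'. A cell is discharged
    when it sits more than `threshold` volts above the weakest cell."""
    v_min = min(cell_voltages)
    return [v - v_min > threshold for v in cell_voltages]
```

    Keeping this rule local to each module is what lets the cells stay safe when communication with the coordinator fails, as the abstract requires.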

  5. Investigating the Design Process

    DEFF Research Database (Denmark)

    Kautz, Karlheinz

    2011-01-01

    Purpose – This paper aims to explore a case of customer and user participation in an agile software development project, which produced a tailor-made information system for workplace support, as a step towards a theory of participatory design in agile software development. Design/methodology/approach – … supported a balance between flexibility and project progress and resulted in a project and a product which were considered a success by the customer and the development organization. The analysis showed that the integrative framework for user participation can also fruitfully be used in a new context … to understand what participatory design is and how, when and where it can be performed as an instance of a design process in agile development. As such the paper contributes to an analytical and a design theory of participatory design in agile development. Furthermore the paper explicates why participatory…

  6. Design and construction of coal/biomass to liquids (CBTL) process development unit (PDU) at the University of Kentucky Center for Applied Energy Research (CAER)

    Energy Technology Data Exchange (ETDEWEB)

    Placido, Andrew [Univ. of Kentucky, Lexington, KY (United States); Liu, Kunlei [Univ. of Kentucky, Lexington, KY (United States); Challman, Don [Univ. of Kentucky, Lexington, KY (United States); Andrews, Rodney [Univ. of Kentucky, Lexington, KY (United States); Jacques, David [Univ. of Kentucky, Lexington, KY (United States)

    2015-10-30

    This report describes a first phase of a project to design, construct and commission an integrated coal/biomass-to-liquids facility at a capacity of 1 bbl/day at the University of Kentucky Center for Applied Energy Research (UK-CAER) – specifically for construction of the building and upstream process units for feed handling, gasification, and gas cleaning, conditioning and compression. The deliverables from the operation of this pilot plant [when fully equipped with the downstream process units] will be, first, the liquid FT products and finished fuels that are of interest to UK-CAER's academic, government and industrial research partners. The facility will produce research quantities of FT liquids and finished fuels for subsequent fuel quality testing, performance and acceptability. Moreover, the facility is expected to be employed for a range of research and investigations related to: Feed Preparation, Characteristics and Quality; Coal and Biomass Gasification; Gas Clean-up/Conditioning; Gas Conversion by FT Synthesis; Product Work-up and Refining; Systems Analysis and Integration; and Scale-up and Demonstration. Environmental considerations – particularly how to manage and reduce carbon dioxide emissions from CBTL facilities and from use of the fuels – will be a primary research objective. Such a facility has required significant lead time for environmental review, architectural/building construction, and EPC services. UK, with DOE support, has advanced the facility in several important ways. These include: a formal EA/FONSI, and permits and approvals; construction of a building; selection of a range of technologies and vendors; and completion of the upstream process units. The results of this project are the FEED and detailed engineering studies, the alternate configurations and the as-built plant - its equipment and capabilities for future research and demonstration and its adaptability for re-purposing to meet other needs. These are described in

  7. An Integrated Design Process

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2010-01-01

    The present paper is placed in the discussion about how sustainable measures are integrated in the design process by architectural offices. It presents results from interviews with four leading Danish architectural offices working with sustainable architecture and their experiences with it, as well as the requirements they meet in terms of how to approach the design process – especially focused on the early stages like a competition. The interviews focus on their experiences with working in multidisciplinary teams and using digital tools to support their work with sustainable issues. The interviews show … the environmental measures cannot be discarded due to extra costs.

  8. Application of the Lean Office philosophy and mapping of the value stream in the process of designing the banking units of a financial company

    Directory of Open Access Journals (Sweden)

    Nelson Antônio Calsavara

    2016-09-01

    Full Text Available The purpose of this study is to conduct a critical analysis of the effects of the Lean Office on the design process of the banking units of a financial company, and of how the implementation of this philosophy may contribute to productivity by reducing implementation time. A literature review of the Toyota Production System was conducted, as well as studies on its methods, extending to lean thinking and the application of Lean philosophies in services and the office. A bibliographic and documentary survey of the Lean processes and procedures for opening bank branches was conducted. A Current State Map was developed, modeling the current operating procedures. After the identification and analysis of waste, proposals were presented for reducing deadlines and for eliminating and grouping stages, with the consequent development of the Future State Map, implementation and monitoring of stages, and measurement of the estimated time gains in operation, demonstrating an estimated 45% reduction, in days, from the start to the end of the process. The conclusion is that the implementation of the Lean Office philosophy contributed to the process.

  9. An Integrated Design Process

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2010-01-01

    as the requirements they meet in terms of how to approach the design process – especially focused on the early stages like a competition. The interviews focus on their experiences with working in multidisciplinary teams and using digital tools to support their work with sustainable issues. The interviews show...

  10. A support design process

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, J.; Scott, P.B. [Health and Safety Executive (United Kingdom)]

    2004-07-01

    A workman suffered a fatal injury due to a fall of ground from the face of a development drivage, which was supported by passive supports supplemented with roof bolts. A working party was set up to review the support process and evaluate how protection of the workmen could be improved whilst setting supports. The working party included representatives from the trade unions, the mines inspectorate and mine operators. Visits were made to several mines and discussions were held with the workmen and management at these mines. The paper describes the results of the visits and how a support design process was evolved. The process will ensure that the support system is designed to reduce the inherent hazards associated with setting supports using either conventional or mixed support systems.

  11. Process engineering and mechanical design reports. Volume III. Preliminary design and assessment of a 12,500 BPD coal-to-methanol-to-gasoline plant. [Grace C-M-G Plant, Henderson County, Kentucky; Units 26, 27, 31 through 34, 36 through 39

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, R. M.

    1982-08-01

    Various unit processes are considered, each covered as follows: the basis of design, the process selection rationale, a brief description of the process chosen and, in some cases, a risk assessment evaluation. (LTN)

  12. Design trends in low temperature gas processing

    Energy Technology Data Exchange (ETDEWEB)

    White, W.E.; Battershell, D.D.

    1966-01-01

    The following basic trends reflected in recent design of low-temperature gas processing are discussed: (1) higher recovery levels of light hydrocarbon products; (2) lower process temperatures and lighter absorption oils; (3) increased thermodynamic efficiencies; (4) automation; (5) single rather than multiple units; and (6) prefabrication and preassembly of the operating unit.

  13. Aluminium sulfate as coagulant for highly polluted cork processing wastewater: Evaluation of settleability parameters and design of a clarifier-thickener unit.

    Science.gov (United States)

    González, Teresa; Domínguez, Joaquín R; Beltrán-Heredia, Jesús; García, Héctor M; Sanchez-Lavado, F

    2007-09-01

    This is the second part of a master project on the chemistry of aluminium as coagulant in the treatment of highly polluted cork-process wastewater. The main aim of this second part was to determine the influence of the operating conditions on the system's settleability parameters. It is just as important to achieve good settleability parameters in the physico-chemical treatment of wastewaters as it is to attain a high level of decontamination, because these parameters determine the dimensions of the required equipment, and hence the cost of the installation. This part of the study therefore analyzes the influence of the different operating variables on the following settleability parameters: sediment volumetric percentage, settling velocity, sludge volume index, and total suspended solids just after mixing with the coagulant. The ranges used for the experimental variables were: coagulant dose (83-166 mg L⁻¹ of Al³⁺), coagulation mixing time (5-30 min), stirring rate (60-300 rpm), contamination level of the wastewater (Wastewater II, COD ≈ 2000 mg O₂ L⁻¹; Wastewater III, COD ≈ 3000 mg O₂ L⁻¹), and pH (5-11). The optimal conditions found for the settling process were not the same as those that had been determined for organic matter removal; in this case they were: coagulation mixing time 30 min, stirring rate 60 rpm, coagulant dose 83 mg L⁻¹ of Al³⁺, and pH 7-9. Finally, the Talmadge-Fitch method is used to apply the results to the design of a clarifier-thickener unit to treat 2 m³ h⁻¹ of wastewater. The required minimum area of the unit would be 4.11 m².
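The final sizing step in this abstract lends itself to a numerical sketch. The snippet below applies the Talmadge-Fitch area relation A = Q·t_u/H₀ to a hypothetical batch settling curve; the curve, feed and underflow concentrations are invented for illustration, and the classical tangent construction for t_u is simplified to linear interpolation on the curve.

```python
# Talmadge-Fitch minimum thickener area -- illustrative sketch (assumed data).
times = [0, 5, 10, 15, 20, 30, 45, 60]                       # settling time, min
heights = [0.40, 0.32, 0.25, 0.20, 0.17, 0.14, 0.12, 0.11]   # interface height, m

H0 = heights[0]     # initial slurry height, m
C0 = 20.0           # feed solids concentration, kg/m^3 (assumed)
Cu = 64.0           # target underflow concentration, kg/m^3 (assumed)
Q = 2.0 / 60.0      # feed rate, m^3/min (2 m^3/h, as in the study)

# Interface height corresponding to the underflow concentration (C0*H0 = Cu*Hu).
Hu = C0 * H0 / Cu

def time_at_height(h_target):
    """Time at which the settling interface reaches h_target (linear interpolation)."""
    for (t1, h1), (t2, h2) in zip(zip(times, heights), zip(times[1:], heights[1:])):
        if h2 <= h_target <= h1:
            return t1 + (h1 - h_target) / (h1 - h2) * (t2 - t1)
    raise ValueError("target height outside the settling curve")

tu = time_at_height(Hu)   # min
A_min = Q * tu / H0       # Talmadge-Fitch minimum area, m^2
print(f"t_u = {tu:.1f} min, minimum thickener area = {A_min:.2f} m^2")
```

With these invented numbers the sketch yields an area of the same order as the 4.11 m² reported in the study.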

  14. Design Process Optimization Based on Design Process Gene Mapping

    Institute of Scientific and Technical Information of China (English)

    LI Bo; TONG Shu-rong

    2011-01-01

    The idea of genetic engineering is introduced into the area of product design to improve design efficiency. A method for design process optimization based on the design process gene is proposed through analysis of the correlation between the design process gene and the characteristics of the design process. The concept of the design process gene is analyzed and categorized into five categories corresponding to the five design phases: the task specification gene, the concept design gene, the overall design gene, the detailed design gene and the processing design gene. The elements involved in each kind of design process gene, and their interactions, are analyzed, and a design process gene map is drawn to disclose the structure of each gene on the basis of its function.

  15. Learning Is the Journey: From Process Reengineering to Systemic Customer-Service Design at the United States Department of Veterans Affairs, Veterans Benefits Administration

    Science.gov (United States)

    2013-05-23

    outlined operations process. Second, design, as used in the ADM, is both a noun and a verb; it refers both to a product and the process used to... too many variables. Thus, the art of framing is about capturing variables that have a tangible quality (nouns and action verbs) – as well as... order to tackle the ill-structured problem that is transformation in the disability claims processing environment. The beauty of this model is that

  16. ON DEVELOPING CLEANER ORGANIC UNIT PROCESSES

    Science.gov (United States)

    Organic waste products, potentially harmful to the human health and the environment, are primarily produced in the synthesis stage of manufacturing processes. Many such synthetic unit processes, such as halogenation, oxidation, alkylation, nitration, and sulfonation are common to...

  17. Design Quality Indicator for Schools in the United Kingdom

    Science.gov (United States)

    PEB Exchange, 2006

    2006-01-01

    In December 2005, the United Kingdom launched a process for evaluating the design quality of primary and secondary school buildings. The Design Quality Indicator (DQI) for Schools is a tool that can assist stakeholders--teachers, parents, school governors, students, community members, local authority clients and building professionals--to achieve…

  18. The Critical Design Process

    DEFF Research Database (Denmark)

    Brunsgaard, Camilla; Knudstrup, Mary-Ann; Heiselberg, Per

    2014-01-01

    The “Comfort Houses” project is the most ambitious building project for passive single-family houses ever undertaken in Denmark. Thus far, different consortiums have designed and built 10 houses. Besides fulfilling the German passive-house standard, the goal of the project was to build the house...

  19. Decoding designers' inspiration process

    NARCIS (Netherlands)

    Gonçalves, M.

    2016-01-01

    Every great invention, innovative design or visionary art piece ever created started in the same way: with a blank canvas. However, you never begin a new project with a completely clean slate: besides memories, past experiences and general knowledge, all of us are constantly surrounded by information…

  20. Teaching Process Design through Integrated Process Synthesis

    Science.gov (United States)

    Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

    2012-01-01

    The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

  1. Materials in Participatory Design Processes

    DEFF Research Database (Denmark)

    Hansen, Nicolai Brodersen

    This dissertation presents three years of academic inquiry into the question of what role materials play in interaction design and participatory design processes. The dissertation aims at developing conceptual tools, based on Dewey's pragmatism, for understanding how materials aid design reflection....... It has been developed using a research-through-design approach in which the author has conducted practical design work in order to investigate and experiment with using materials to scaffold design inquiry. The results of the PhD work are submitted as seven separate papers, submitted to esteemed journals...... and conferences within the field of interaction design and HCI. The work is motivated both by the growing interest in materials in interaction design and HCI and the interest in design processes and collaboration within those fields. At the core of the dissertation lies an interest in the many different materials...

  2. Biorefinery plant design, engineering and process optimisation

    DEFF Research Database (Denmark)

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    Before new biorefinery systems can be implemented, or the modification of existing single product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed initially. This chapter outlines design and synthesis approaches a...

  3. Design of environmentally benign processes

    DEFF Research Database (Denmark)

    Hostrup, Martin; Harper, Peter Mathias; Gani, Rafiqul

    1999-01-01

    This paper presents a hybrid method for design of environmentally benign processes. The hybrid method integrates mathematical modelling with heuristic approaches to solving the optimisation problems related to separation process synthesis and solvent design and selection. A structured method...... of solution, which employs thermodynamic insights to reduce the complexity and size of the mathematical problem by eliminating redundant alternatives, has been developed for the hybrid method. Separation process synthesis and design problems related to the removal of a chemical species from process streams...... mixture and the second example involves the determination of environmentally benign substitute solvents for removal of a chemical species from wastewater. (C) 1999 Elsevier Science Ltd. All rights reserved....

  4. Design and testing of a process-based groundwater vulnerability assessment (P-GWAVA) system for predicting concentrations of agrichemicals in groundwater across the United States

    Science.gov (United States)

    Barbash, Jack E; Voss, Frank D.

    2016-03-29

    Efforts to assess the likelihood of groundwater contamination from surface-derived compounds have spanned more than three decades. Relatively few of these assessments, however, have involved the use of process-based simulations of contaminant transport and fate in the subsurface, or compared the predictions from such models with measured data—especially over regional to national scales. To address this need, a process-based groundwater vulnerability assessment (P-GWAVA) system was constructed to use transport-and-fate simulations to predict the concentration of any surface-derived compound at a specified depth in the vadose zone anywhere in the conterminous United States. The system was then used to simulate the concentrations of selected agrichemicals in the vadose zone beneath agricultural areas in multiple locations across the conterminous United States. The simulated concentrations were compared with measured concentrations of the compounds detected in shallow groundwater (that is, groundwater drawn from within a depth of 6.3 ± 0.5 meters [mean ± 95 percent confidence interval] below the water table) in more than 1,400 locations across the United States. The results from these comparisons were used to select the simulation approaches that led to the closest agreement between the simulated and the measured concentrations. The P-GWAVA system uses computer simulations that account for a broader range of the hydrologic, physical, biological and chemical phenomena known to control the transport and fate of solutes in the subsurface than has been accounted for by any other vulnerability assessment over regional to national scales. Such phenomena include preferential transport and the influences of temperature, soil properties, and depth on the partitioning, transport, and transformation of pesticides in the subsurface. Published methods and detailed soil property data are used to estimate a wide range of model input parameters for each site, including surface

  5. Design of Industrial Quenching Processes

    Institute of Scientific and Technical Information of China (English)

    Nikolai I. KOBASKO; George E. TOTTEN

    2004-01-01

    A method has been developed for designing industrial quench-cooling processes, in particular for determining the speed of conveyor movement with regard to the shape and sizes of the parts to be quenched, the thermal and physical properties of the material, and the cooling capacity of the quenchants. The suggested design method and databases are the basis for the complete automation of industrial quench-cooling processes, especially on continuous conveyor lines, for the purpose of producing high-strength materials. The process is controlled by infrared techniques.
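As a rough illustration of the kind of calculation such a design method automates, the sketch below sizes the conveyor speed from a lumped-capacitance estimate of the immersion time. This is a textbook heat-transfer relation, not the authors' method, and every numerical value is assumed for illustration.

```python
import math

# Conveyor-speed sizing for a quench line -- lumped-capacitance sketch.
# Valid only for small Biot number; all figures below are assumed.
rho = 7850.0        # steel density, kg/m^3
c = 490.0           # specific heat, J/(kg K)
h = 2000.0          # quenchant heat-transfer coefficient, W/(m^2 K)
D = 0.02            # part diameter, m (long cylinder)
V_over_A = D / 4.0  # volume-to-surface ratio of a long cylinder (r/2)

T0, Tq, Tend = 850.0, 40.0, 200.0   # initial, quenchant, target temperatures, deg C

tau = rho * c * V_over_A / h                        # thermal time constant, s
t_cool = tau * math.log((T0 - Tq) / (Tend - Tq))    # required immersion time, s

L_zone = 3.0                  # length of the quench zone, m (assumed)
v_conveyor = L_zone / t_cool  # conveyor speed that gives exactly t_cool in the zone, m/s
print(f"cooling time {t_cool:.1f} s -> conveyor speed {v_conveyor * 60:.2f} m/min")
```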

  6. Analog circuit design designing waveform processing circuits

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The fourth volume in the set Designing Waveform-Processing Circuits builds on the previous 3 volumes and presents a variety of analog non-amplifier circuits, including voltage references, current sources, filters, hysteresis switches and oscilloscope trigger and sweep circuitry, function generation, absolute-value circuits, and peak detectors.

  7. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs have been increasingly used for general-purpose computation in recent years. The GPU accelerated applications are found in both scientific and commercial domains. Sorting is considered as one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on the graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort were evaluated on the Compute Unified Device Architecture (CUDA platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
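Of the three algorithms compared, radix sort maps most naturally onto the GPU's data-parallel model, because each digit pass is a stable counting/distribution step over independent elements. A minimal serial Python sketch of LSD radix sort (illustrative only; the study's implementations ran under CUDA on NVIDIA hardware) is:

```python
def radix_sort(values):
    """LSD radix sort for non-negative integers (serial, illustrative)."""
    result = list(values)
    if not result:
        return result
    shift, max_val = 0, max(result)
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(256)]
        for v in result:
            buckets[(v >> shift) & 0xFF].append(v)  # stable per-digit distribution
        result = [v for bucket in buckets for v in bucket]
        shift += 8  # one base-256 digit (8 bits) per pass
    return result
```

On a GPU the per-digit distribution is typically replaced by a parallel histogram plus prefix-scan, which preserves the same stability property this serial version gets from append order.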

  8. The Integrated Design Process (IDP)

    DEFF Research Database (Denmark)

    Hansen, Hanne Tine Ring; Knudstrup, Mary-Ann

    2005-01-01

    Summary: This paper presents the Integrated Design Process (IDP) applied to sustainable architecture and available design methods and gives an example of the tools applied. The paper focuses upon the ability to integrate knowledge from engineering and architecture and let them interact with each...... other in order to solve the often very complicated problems connected to the design of sustainable buildings. Some of the aspects of the integrated design process were tested on a virtual design project in order to evaluate if the IDP can help achieve sustainable architecture. The aim was to show how...... the different parameters and products can interact, and which consequences this would have on a project. The IDP does not ensure aesthetic or sustainable solutions, but it enables the designer to control the many parameters that must be considered and integrated in the project when creating more holistic...

  9. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010, in the Pressurized Excursion Module (PEM) configuration. Because of the amount of work involved in making the HDU project successful, it has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis.
Decision processes on integration and use of all new subsystems will be defined early in the project to

  10. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.
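The quoted figures can be sanity-checked with simple power arithmetic. The 400 V operating point below is an assumed mid-range value within the 300-500 V span given in the abstract, and 97% is taken as the nominal full-power efficiency:

```python
# Back-of-the-envelope power budget for a 15 kW PPU at 97% efficiency.
P_out = 15_000.0        # total discharge output power, W
eta = 0.97              # full-power efficiency (abstract: "exceeding 97%")
P_in = P_out / eta      # required input power, W
P_loss = P_in - P_out   # power dissipated as heat inside the unit, W

V_out = 400.0           # assumed mid-range output voltage, V (300-500 V span)
I_out = P_out / V_out   # corresponding output current, A
print(f"input {P_in:.0f} W, dissipation {P_loss:.0f} W, output {I_out:.1f} A at {V_out:.0f} V")
```

Even at 97% efficiency, roughly 460 W must be removed thermally at full power, which is why thermal-vacuum characterization matters for such a unit.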

  11. Analysis and Optimization of Central Processing Unit Process Parameters

    Science.gov (United States)

    Kaja Bantha Navas, R.; Venkata Chaitana Vignan, Budi; Durganadh, Margani; Rama Krishna, Chunduri

    2017-05-01

    The rapid growth of computing has made it possible to process more data, which increases heat dissipation; the CPU in the system unit must therefore be cooled to stay within its operating temperature. This paper presents a novel approach to the optimization of the operating parameters of a Central Processing Unit, with a single response, based on the response graph method. The proposed approach consists of a series of steps capable of decreasing the uncertainty caused by engineering judgment in the Taguchi method. The orthogonal array values were taken from an ANSYS report. The method shows good convergence between the experimental and the optimum process parameters.
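The Taguchi/response-graph analysis described here can be sketched as follows. The L9 orthogonal array is the standard three-factor, three-level layout, but the temperature responses are invented for illustration; a smaller-the-better signal-to-noise ratio is used, as would apply to CPU temperature.

```python
import math

# Response-graph analysis of a Taguchi L9 array -- hypothetical CPU-cooling data.
L9 = [  # (factor A, B, C) level indices for each of the nine trials
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]
temps = [72, 70, 69, 68, 66, 71, 65, 70, 67]  # assumed measured responses, deg C

# Smaller-the-better S/N ratio per trial: -10*log10(y^2).
sn = [-10 * math.log10(y ** 2) for y in temps]

def best_level(factor):
    """Level of `factor` with the highest mean S/N (the response-graph optimum)."""
    means = {}
    for lvl in (1, 2, 3):
        vals = [s for trial, s in zip(L9, sn) if trial[factor] == lvl]
        means[lvl] = sum(vals) / len(vals)
    return max(means, key=means.get)

optimum = tuple(best_level(f) for f in range(3))
print("optimum factor levels:", optimum)
```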

  12. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transform, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  13. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  14. Graphics processing unit-assisted lossless decompression

    Science.gov (United States)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
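Rice coding, the algorithm named in the abstract, splits each non-negative residual into a unary-coded quotient and a k-bit remainder. The serial round-trip sketch below illustrates the codeword structure only; it is not the parallel GPU decompression scheme described in the patent.

```python
def rice_encode(values, k):
    """Encode non-negative ints as unary quotient + k-bit remainder (bit string)."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                      # quotient in unary, 0-terminated
        bits.append(format(r, f"0{k}b") if k else "")   # remainder, exactly k bits
    return "".join(bits)

def rice_decode(bitstream, k, count):
    """Decode `count` values from a bit string produced by rice_encode."""
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bitstream[pos] == "1":    # count the unary ones
            q += 1
            pos += 1
        pos += 1                        # skip the terminating 0
        r = int(bitstream[pos:pos + k], 2) if k else 0
        pos += k
        values.append((q << k) | r)
    return values
```

A GPU implementation must additionally locate codeword boundaries so that packets can be decoded in parallel, which is the hard part the patent addresses.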

  15. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.

  16. Process characterization and Design Space definition.

    Science.gov (United States)

    Hakemeyer, Christian; McKnight, Nathan; St John, Rick; Meier, Steven; Trexler-Schmidt, Melody; Kelley, Brian; Zettl, Frank; Puskeiler, Robert; Kleinjans, Annika; Lim, Fred; Wurth, Christine

    2016-09-01

    Quality by design (QbD) is a global regulatory initiative with the goal of enhancing pharmaceutical development through the proactive design of pharmaceutical manufacturing processes and controls to consistently deliver the intended performance of the product. The principles of pharmaceutical development relevant to QbD are described in the ICH guidance documents (ICH Q8-11). An integrated set of risk assessments and their related elements developed at Roche/Genentech were designed to provide an overview of product and process knowledge for the production of a recombinant monoclonal antibody (MAb). This chapter describes the tools used for the characterization and validation of the MAb manufacturing process under the QbD paradigm. These comprise risk assessments for the identification of potential Critical Process Parameters (pCPPs), statistically designed experimental studies, and studies assessing the linkage of the unit operations. The outcome of the studies is the classification of process parameters according to their criticality and the definition of appropriate acceptable ranges of operation. The process and product knowledge gained in these studies can lead to the approval of a Design Space. Additionally, the information gained in these studies is used to define the 'impact' which the manufacturing process can have on the variability of the CQAs, which in turn is used to define the testing and monitoring strategy.

  17. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...

  18. Human Integration Design Processes (HIDP)

    Science.gov (United States)

    Boyer, Jennifer

    2014-01-01

    The purpose of the Human Integration Design Processes (HIDP) document is to provide human-systems integration design processes, including methodologies and best practices that NASA has used to meet human systems and human rating requirements for developing crewed spacecraft. HIDP content is framed around human-centered design methodologies and processes in support of human-system integration requirements and human rating. NASA-STD-3001, Space Flight Human-System Standard, is a two-volume set of National Aeronautics and Space Administration (NASA) Agency-level standards established by the Office of the Chief Health and Medical Officer, directed at minimizing health and performance risks for flight crews in human space flight programs. Volume 1 of NASA-STD-3001, Crew Health, sets standards for fitness for duty, space flight permissible exposure limits, permissible outcome limits, levels of medical care, medical diagnosis, intervention, treatment and care, and countermeasures. Volume 2 of NASA-STD-3001, Human Factors, Habitability, and Environmental Health, focuses on human physical and cognitive capabilities and limitations and defines standards for spacecraft (including orbiters, habitats, and suits), internal environments, facilities, payloads, and related equipment, hardware, and software with which the crew interfaces during space operations. The NASA Procedural Requirements (NPR) 8705.2B, Human-Rating Requirements for Space Systems, specifies the Agency's human-rating processes, procedures, and requirements. The HIDP was written to share NASA's knowledge of processes directed toward achieving human certification of a spacecraft through implementation of human-systems integration requirements. 
Although the HIDP speaks directly to implementation of NASA-STD-3001 and NPR 8705.2B requirements, the human-centered design, evaluation, and design processes described in this document can be applied to any set of human-systems requirements and are independent of reference

  19. Green Diesel from Hydrotreated Vegetable Oil Process Design Study

    NARCIS (Netherlands)

    Hilbers, T.J.; Sprakel, L.M.J.; Enk, van den L.B.J.; Zaalberg, B.; Berg, van den H.; Ham, van der A.G.J.

    2015-01-01

    A systematic approach was applied to study the process of hydrotreating vegetable oils. During the three phases of conceptual, detailed, and final design, unit operations were designed and sized. Modeling of the process was performed with UniSim Design®. Producing green diesel and jet fuel from vegetable…

  20. Green Diesel from Hydrotreated Vegetable Oil Process Design Study

    NARCIS (Netherlands)

    Hilbers, T.J.; Sprakel, Lisette Maria Johanna; van den Enk, L.B.J.; Zaalberg, B.; van den Berg, Henderikus; van der Ham, Aloysius G.J.

    2015-01-01

    A systematic approach was applied to study the process of hydrotreating vegetable oils. During the three phases of conceptual, detailed, and final design, unit operations were designed and sized. Modeling of the process was performed with UniSim Design®. Producing green diesel and jet fuel from

  1. Design of the ERIS calibration unit

    Science.gov (United States)

    Dolci, Mauro; Valentini, Angelo; Di Rico, Gianluca; Esposito, Simone; Ferruzzi, Debora; Riccardi, Armando; Spanò, Paolo; Antichi, Jacopo

    2016-08-01

    The Enhanced Resolution Imager and Spectrograph (ERIS) is a new-generation instrument for the Cassegrain focus of the ESO UT4/VLT, aimed at performing AO-assisted imaging and medium-resolution spectroscopy in the 1-5 micron wavelength range. ERIS consists of the 1-5 micron imaging camera NIX, the 1-2.5 micron integral field spectrograph SPIFFIER (a modified version of SPIFFI, currently operating on SINFONI), the AO module and the internal Calibration Unit (ERIS CU). The purpose of this unit is to provide facilities to calibrate the scientific instruments in the 1-2.5 micron range and to perform troubleshooting and periodic maintenance tests of the AO module (e.g. NGS and LGS WFS internal calibrations and functionalities, ERIS differential flexures) in the 0.5-1 micron range. The ERIS CU must therefore be designed to provide, over the full 0.5-2.5 micron range, the following capabilities: 1) illumination of both the telescope focal plane and the telescope pupil with a high degree of uniformity; 2) artificial point-like and extended sources on the telescope focal plane, with high accuracy in both positioning and FWHM; 3) wavelength calibration; 4) high stability of these characteristics. In this paper the design of the ERIS CU is described, together with the solutions adopted to fulfill all these requirements. The construction of the ERIS CU is foreseen to start at the end of 2016.

  2. Biorefinery plant design, engineering and process optimisation

    DEFF Research Database (Denmark)

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    applicable for the planning and upgrading of intended biorefinery systems, and includes discussions on the operation of an existing lignocellulosic-based biorefinery platform. Furthermore, technical considerations and tools (i.e., process analytical tools) which could be applied to optimise the operations......Before new biorefinery systems can be implemented, or the modification of existing single product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed initially. This chapter outlines design and synthesis approaches...

  3. Design and Implementation of a Central Processing Unit Module in Airborne Equipment

    Institute of Scientific and Technical Information of China (English)

    王俊; 吕俊; 杨宁

    2014-01-01

    The design and implementation of a central processing unit module in airborne equipment is introduced in this paper. The airborne equipment receives instruction signals from the flight control system via RS422 communication; the central processing unit then performs control, data calculation and A/D conversion, and feeds the result back to the actuating mechanism, thereby implementing the expected functions of the airborne equipment. The equipment has been used on board an aircraft with good results, which shows that the design is both instructive and practical.

  4. Numerical Integration with Graphical Processing Unit for QKD Simulation

    Science.gov (United States)

    2014-03-27

    …existing and proposed Quantum Key Distribution (QKD) systems. This research investigates using graphical processing unit (GPU) technology to more… Time Pad; GPU: graphical processing unit; API: application programming interface; CUDA: Compute Unified Device Architecture; SIMD: single-instruction-stream… and can be passed by value or reference [2]. 2.3 Graphical Processing Units: Programming with a graphical processing unit (GPU) requires a different…

  5. Process Requirements for Piping Design in the Cold Box of Air Separation Unit

    Institute of Scientific and Technical Information of China (English)

    孙东升; 李超

    2015-01-01

    The principle of piping design is to meet the process requirements and to ensure the safety and economic rationality of the pipeline and related equipment. Meeting the process requirements is the most important task of piping design. The medium in the pipelines of an air separation unit cold box is saturated gas, liquid or two-phase flow, and the process imposes many detailed requirements on the piping, which pipeline designers need to heed.

  6. Knowledge and Processes in Design

    Science.gov (United States)

    1992-09-03

    …a lack of a basic scientific understanding of the design process is endangering American industry and productivity (Dertouzos, Lester, & Solow)… Dertouzos, M. L., Lester, R. K., & Solow, R. M. (1989). Made in America: Regaining the productive edge. Cambridge, MA: MIT Press. Dixon, J. R., & Duffy, M. R.… Report No. DPS-5. Peter Pirolli and Daniel Berger, University of California at Berkeley. Portions of this research were funded by the Cognitive…

  7. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

    Heat is inevitably generated in semiconductors during operation. Cooling in a computer, and in its main part, the Central Processing Unit (CPU), is crucial, allowing proper functioning without overheating, malfunctioning, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. A similarity with an electrical circuit and certain methods from electrical circuit analysis are discussed.
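
    The circuit analogy described above maps the lumped CPU + heat-sink model onto an RC circuit: dissipated power acts as a current source, thermal resistance as R, and heat capacity as C. A minimal sketch under that analogy (all parameter values are illustrative assumptions, not taken from the paper):

```python
# Lumped thermal RC model of a CPU + heat sink (illustrative values).
# dT/dt = (P - (T - T_amb) / R_th) / C_th, analogous to charging an RC circuit.

def simulate_cpu_temperature(power_w, r_th, c_th, t_amb, dt, steps):
    """Forward-Euler integration of the lumped thermal model."""
    temps = [t_amb]          # start at ambient temperature
    temp = t_amb
    for _ in range(steps):
        temp += dt * (power_w - (temp - t_amb) / r_th) / c_th
        temps.append(temp)
    return temps

# Steady state is T_amb + P * R_th, reached with time constant tau = R_th * C_th.
trace = simulate_cpu_temperature(power_w=65.0, r_th=0.5, c_th=50.0,
                                 t_amb=25.0, dt=1.0, steps=2000)
```

    The steady-state value mirrors the DC solution of the analogous circuit: with these assumed numbers, 25 + 65 x 0.5 = 57.5 degrees.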

  8. Design of 8.0Mt/a Atmospheric and Vacuum Distillation Unit

    Institute of Scientific and Technical Information of China (English)

    Li Hejie

    2003-01-01

    The design features of the 8 Mt/a atmospheric and vacuum distillation unit (Ⅲ) in Zhenhai Refining and Chemical Company are presented and various process schemes are compared. Production practice has proved that the main process design is advanced and reasonable and that the process parameters basically reached design requirements.

  9. Conceptual design of distillation-based hybrid separation processes.

    Science.gov (United States)

    Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang

    2013-01-01

    Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.

  10. Graphics Processing Unit Assisted Thermographic Compositing

    Science.gov (United States)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
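
    The data-parallel pattern the abstract describes (the same computation applied to every element, with no cross-element dependency) can be illustrated without a GPU: per-pixel reductions over a stack of thermographic frames are exactly the kind of operation where each output pixel could be one GPU thread. A hedged NumPy sketch (the frame-stack shape is an assumption for illustration):

```python
import numpy as np

# Stack of thermographic frames: (n_frames, height, width).
frames = np.random.rand(64, 256, 256).astype(np.float32)

# Each output pixel depends only on the corresponding input pixels;
# there is no cross-element dependency, so every pixel is independent work.
mean_frame = frames.mean(axis=0)                     # temporal average per pixel
contrast = frames.max(axis=0) - frames.min(axis=0)   # per-pixel range
```

    On a GPU the same reductions would be expressed as one thread (or one small tile) per output pixel, which is what yields the cluster-level throughput mentioned above.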

  11. Supporting chemical process design under uncertainty

    OpenAIRE

    Wechsung, A.; Oldenburg, J.; Yu, J.; Polt, A.

    2010-01-01

    A major challenge in chemical process design is to make design decisions based on partly incomplete or imperfect design input data. Still, process engineers are expected to design safe, dependable and cost-efficient processes under these conditions. The complexity of typical process models limits intuitive engineering estimates to judge the impact of uncertain parameters on the proposed design. In this work, an approach to quantify the effect of uncertainty on a process design in order to enh...

  12. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by lack of efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder Central, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. Third order Runge-Kutta scheme was used for integration in the t…
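
    The third-order Runge-Kutta time stepping mentioned above is, in schemes commonly paired with WENO reconstruction, the strong-stability-preserving (SSP) RK3 method in Shu-Osher form. A minimal sketch on a scalar toy equation (the pairing with SSP-RK3 specifically is an assumption; the paper only states "third order Runge-Kutta"):

```python
import math

def ssp_rk3_step(u, dt, rhs):
    """One strong-stability-preserving third-order Runge-Kutta step
    (Shu-Osher form), as commonly combined with WENO spatial schemes."""
    u1 = u + dt * rhs(u)                          # first Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))    # convex combination, stage 2
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Toy check on du/dt = -u, whose exact solution at t = 1 is exp(-1).
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(u, dt, lambda x: -x)
```

    The convex (positive-coefficient) combination of Euler stages is what preserves the non-oscillatory property of the spatial scheme, which matters for the shock-capturing reconstructions named above.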

  13. The vacuum system for technological unit development and design

    Science.gov (United States)

    Zhukeshov, A. M.; Gabdullina, A. T.; Amrenova, A. U.; Giniyatova, Sh G.; Kaibar, A.; Sundetov, A.; Fermakhan, K.

    2015-11-01

    The paper presents results of the development of a plasma technological unit based on a vacuum-arc accelerator and an automated system. In previous years, the authors investigated the operation of a pulsed plasma accelerator and developed unique technologies for the hardening of materials. The principles of plasma formation in the pulsed plasma accelerator were taken as the basis of the developed unit. Operation of the pulsed arc accelerator was investigated at different charge parameters. The developed vacuum system is designed for the production of high-tech plasma units in the fields of nanomaterials, mechanical and power engineering, and production with high added value. Unlike integrated solutions, the system is modular, which allows low cost, high reliability and simple maintenance. The use of robots to modernize the technological process is discussed.

  14. DESIGNING FEATURES OF POWER OPTICAL UNITS FOR TECHNOLOGICAL EQUIPMENT

    Directory of Open Access Journals (Sweden)

    M. Y. Afanasiev

    2016-03-01

    This paper considers the design of an optical unit for transmitting power laser radiation through an optical fiber. The aim of this work is to design a simple unit with minimized reflection losses. The radiation source in the optical unit described below is an ultraviolet laser with diode pumping. We present the general functioning scheme and design features of the three main parts: the laser beam deflection system, the laser beam dump and the optical unit control system. The described laser beam deflection system is composed of a moving flat mirror and a spherical scattering mirror. A comparative analysis of the production technology for such mirrors was carried out and, as a result, the decision was made to produce both mirrors of 99.99 % pure molybdenum without coating. The moving mirror deflects laser emission from the source into a fiber, or deflects it onto the spherical mirror and into the laser beam dump; switching from one position to the other occurs almost immediately. It is shown that the scattering mirror is necessary; otherwise, the absorbing surface of the beam dump wears out irregularly. The laser beam dump is an open conical cavity, in which a conical element with its spire turned toward the emission source is placed. A special microgeometry of the internal surface of the beam dump is suggested for better absorption. The optical unit control system consists of the laser beam deflection system, a laser temperature sensor, a deflection system solenoid temperature sensor, and a deflection mirror position sensor. The algorithm for processing the signals coming from the sensors to the controller is described. The optical unit will be used in special technological equipment.

  15. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude as compared to implementations for traditional central processing units are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.
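
    The Fourier split operator method referred to above alternates between applying the potential term in position space and the kinetic term in momentum space via FFTs; those FFTs are the step that maps so well onto GPUs. A minimal CPU sketch for the 1D time-dependent Schrödinger equation (units with hbar = m = 1 and the grid parameters are assumptions for illustration):

```python
import numpy as np

def split_step(psi, x, dt, potential, steps):
    """Second-order Strang splitting: half potential, full kinetic, half potential."""
    n = x.size
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # momentum grid
    half_v = np.exp(-0.5j * dt * potential(x))        # half-step potential phase
    kinetic = np.exp(-0.5j * dt * k ** 2)             # full-step kinetic phase
    for _ in range(steps):
        psi = half_v * psi
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))  # kinetic term in k-space
        psi = half_v * psi
    return psi

x = np.linspace(-20, 20, 1024, endpoint=False)
psi0 = np.exp(-x ** 2 / 2) / np.pi ** 0.25            # Gaussian initial state
psi = split_step(psi0, x, dt=0.005, potential=lambda x: 0.5 * x ** 2, steps=400)
```

    Every factor applied is a pure phase, so the norm of the wave function is conserved; a GPU implementation replaces `np.fft` with a device FFT (e.g. cuFFT) and keeps the element-wise phase multiplications as trivial kernels.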

  16. Guidelines for engineering design for process safety

    National Research Council Canada - National Science Library

    2012-01-01

    Key areas to be enhanced in the new edition include inherently safer design, specifically concepts for the design of inherently safer unit operations, and Safety Instrumented Systems and Layer of Protection Analysis…

  17. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respec...

  18. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  19. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
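
    The per-node kernel sums in this scheme are independent of one another, which is why each equally-spaced node can be assigned to its own GPU scalar processor. A vectorized NumPy sketch of the same 2-D Gaussian KDE (bandwidth, sample size and grid size are illustrative assumptions, not values from the paper):

```python
import numpy as np

def gaussian_kde_2d(particles, nodes, bandwidth):
    """Evaluate a 2-D Gaussian kernel density estimate at each node.

    particles: (n, 2) sample positions; nodes: (m, 2) evaluation points.
    Each node's sum is independent -- the data-parallel structure a GPU exploits.
    """
    diff = nodes[:, None, :] - particles[None, :, :]   # (m, n, 2)
    sq_dist = np.sum(diff ** 2, axis=-1)               # (m, n)
    norm = 1.0 / (2.0 * np.pi * bandwidth ** 2 * len(particles))
    return norm * np.exp(-0.5 * sq_dist / bandwidth ** 2).sum(axis=1)

rng = np.random.default_rng(0)
particles = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=2000)
gx, gy = np.meshgrid(np.linspace(-3, 3, 32), np.linspace(-3, 3, 32))
nodes = np.column_stack([gx.ravel(), gy.ravel()])
density = gaussian_kde_2d(particles, nodes, bandwidth=0.4)
```

    In the GPU version each node's loop over particles becomes one thread; the speedups quoted above come from running thousands of such node sums concurrently.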

  20. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPUs for the synchronous low-level trigger, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  1. Managing Constraint Generators in Retail Design Processes

    DEFF Research Database (Denmark)

    Münster, Mia Borch; Haug, Anders

    Retail design concepts are complex designs meeting functional and aesthetic demands. During a design process, a retail designer has to consider various constraint generators such as stakeholder interests, physical limitations and restrictions. Obviously the architectural site, legislators…

  2. Conceptual Chemical Process Design for Sustainability.

    Science.gov (United States)

    This chapter examines the sustainable design of chemical processes, with a focus on conceptual design, hierarchical and short-cut methods, and analyses of process sustainability for alternatives. The chapter describes a methodology for incorporating process sustainability analyse...

  3. Energy Efficient Iris Recognition With Graphics Processing Units

    National Research Council Canada - National Science Library

    Rakvic, Ryan; Broussard, Randy; Ngo, Hau

    2016-01-01

    In the past few years, however, this growth has slowed for central processing units (CPUs). Instead, there has been a shift to multicore computing, specifically with general-purpose graphics processing units (GPUs…

  4. Study of the ship design process model for collaborative design

    Science.gov (United States)

    He, Ze; Qiu, Chang-Hua; Wang, Neng-Jian

    2005-09-01

    The ship design process model is the basis for developing the ship collaborative design system under a network environment. According to the characteristics of ship design, a method for dividing the ship design process into three layers is put forward, that is, the project layer, the design task layer and the design activity layer; then the formalized definitions of the ship design process model, the decomposing principles of the ship design process and the architecture of the ship collaborative design (SDPM) system are presented. This method simplifies the activity network, makes the optimization and adjustment of the design plan convenient, and also makes the design process easier to control and change. Finally, the architecture of the ship collaborative design system is discussed.

  5. Study of the ship design process model for collaborative design

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The ship design process model is the basis for developing the ship collaborative design system under a network environment. According to the characteristics of ship design, a method for dividing the ship design process into three layers is put forward, that is, the project layer, the design task layer and the design activity layer; then the formalized definitions of the ship design process model, the decomposing principles of the ship design process and the architecture of the ship collaborative design (SDPM) system are presented. This method simplifies the activity network, makes the optimization and adjustment of the design plan convenient, and also makes the design process easier to control and change. Finally, the architecture of the ship collaborative design system is discussed.

  6. Business Process Redesign: Design the Improved Process

    Science.gov (United States)

    1993-09-01

    …C. Multivoting… D. Electronic Voting Technology… E. Paired… Process Improvement Process (PIP): diagram of each activity (A1-A4)… Appendix D: Products and Vendors Which Support Electronic Voting… requirements. D. Electronic Voting Technology: Nunamaker [1992] suggests that traditional voting usually happens at the end of a discussion, to close…

  7. Architecture and Design of Medical Processor Units for Medical Networks

    Directory of Open Access Journals (Sweden)

    Syed V. Ahamed

    2010-11-01

    This paper introduces analogical and deductive methodologies for the design of medical processor units (MPUs). From the study of the evolution of numerous earlier processors, we derive the basis for the architecture of MPUs. These specialized processors perform unique medical functions encoded as medical operational codes (mopcs). From a pragmatic perspective, MPUs function very close to CPUs. Both processors have unique operation codes that command the hardware to perform a distinct chain of sub-processes upon operands and generate a specific result unique to the opcode and the operand(s). In medical environments, the MPU decodes the mopcs, executes a series of medical sub-processes, and sends out secondary commands to the medical machine. Whereas operands in a typical computer system are numerical and logical entities, the operands in a medical machine are objects such as patients, blood samples, tissues, operating rooms, medical staff, medical bills, and patient payments. We follow the functional overlap between the two processors and evolve the design of medical computer systems and networks.

  8. Phenomena-based Process Synthesis and Design to achieve Process Intensification

    DEFF Research Database (Denmark)

    Lutze, Philip; Gani, Rafiqul; Woodley, John

    2011-01-01

    In order to improve processes by incorporating process intensification, and to allow them to go beyond pre-defined unit operations, the process has to be viewed at a lower level of aggregation, namely the phenomena scale. In this contribution, an approach for aggregating processes through phenomena… level. This phenomena-based synthesis/design methodology is tested through a case study.

  9. Human-centered environment design in intensive care unit

    NARCIS (Netherlands)

    Li, Y.; Albayrak, A.; Goossens, R.H.M.; Xiao, D.; Jakimowicz, J.J.

    2013-01-01

    Because of the high risk and instability of patients in the Intensive Care Unit (ICU), the design of an ICU is very difficult. ICU design, auxiliary building design, lighting design, noise control and other aspects can also enhance its management. In this paper, we compare ICU design in China and Holland ba…

  10. Expression regulation of design process gene in product design

    DEFF Research Database (Denmark)

    Fang, Lusheng; Li, Bo; Tong, Shurong

    2011-01-01

    To improve design process efficiency, this paper proposes the principle and methodology by which the design process gene controls the characteristics of the design process, under the framework of design process reuse and optimization based on the design process gene. First, the concept of the design process gene is proposed and analyzed, as well as its three categories, i.e., the operator gene, the structural gene and the regulator gene. Second, the trigger mechanism by which design objectives and constraints trigger the operator gene is constructed. Third, the expression principle of the structural gene is analyzed with the example of the design management gene. Last, the regulation mode in which the regulator gene regulates the expression of the structural gene is established and illustrated by taking the design process management gene as an example. © (2011) Trans Tech Publications.

  11. Design Thinking in Elementary Students' Collaborative Lamp Designing Process

    Science.gov (United States)

    Kangas, Kaiju; Seitamaa-Hakkarainen, Pirita; Hakkarainen, Kai

    2013-01-01

    Design and Technology education is potentially a rich environment for successful learning, if the management of the whole design process is emphasised, and students' design thinking is promoted. The aim of the present study was to unfold the collaborative design process of one team of elementary students, in order to understand their multimodal…

  12. Shuttle Kit Freezer Refrigeration Unit Conceptual Design

    Science.gov (United States)

    Copeland, R. J.

    1975-01-01

    A refrigerated food/medical-sample storage compartment provided as a kit for the Space Shuttle orbiter is examined. To maintain -10 F in the freezer kit, an active refrigeration unit is required, and an air-cooled Stirling-cycle refrigerator was selected. The freezer kit contains two subsystems: the refrigeration unit and the storage volume. The freezer must provide two basic capabilities in one unit. One requirement is to store 215 lbs of food, which is consumed over a 30-day period by 7 people. The other requirement is to store 128.3 lbs of medical samples consisting of both urine and feces. The unit can be mounted on the lower deck of the shuttle cabin and will occupy four standard payload module compartments on the forward bulkhead. The freezer contains four storage compartments.

  13. describing a collaborative clothing design process between ...

    African Journals Online (AJOL)

    ISSN 0378-5254, Journal of Family Ecology and Consumer Sciences, Vol 43, 2015. Designing success: describing a collaborative clothing design process between apprentice designers and expert design… decision-making. Thinking Skills and…

  14. Analysis, synthesis and design of chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Turton, R. [West Virginia Univ., Morgantown, WV (United States); Bailie, R.C.; Whiting, W.B.

    1998-12-31

    The book illustrates key concepts through a running example from the real world: the manufacture of benzene; covers design, economic considerations, troubleshooting and health/environmental safety; and includes exclusive software for estimating chemical manufacturing equipment capital costs. This book will help chemical engineers optimize the efficiency of production processes by providing both a philosophical framework and detailed information about chemical process design. Design is the focal point of chemical engineering practice. This book helps engineers and senior-level students hone their design skills through process design rather than simply plant design. It introduces all the basics of process simulation. Learn how to size equipment, optimize flowsheets, evaluate the economics of projects, and plan the operation of processes. Learn how to use Process Flow Diagrams, choose the operating conditions for a process, and evaluate the performance of existing processes and equipment. Finally, understand how chemical process design impacts health, safety, the environment and the community.

  15. Formal analysis of design process dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design

  16. Formal analysis of design process dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design prope

  17. Study on Professional Process of Bra Design

    Institute of Scientific and Technical Information of China (English)

    李明菊; 徐朝晖

    2001-01-01

    The process of bra design in the underwear industry is studied. Several important aspects of the process were identified: sizing, fabric selection, pattern development and grading, the use of CAD systems, and fitting and wear trials. Although the design process relies heavily on the expertise and experience of designers, modern technology such as CAD can facilitate and optimize the design process, and the fitting process on live models is essential for underwear design. The differences between domestic underwear companies and major foreign ones mainly lie in the lack of dress forms specially used for underwear design, the lack of CAD/CAM (or failure to make full use of them), and, most of all, the lack of professional bra designers or even skillful pattern designers. The prospects and a future model of the bra design process are also elaborated in this paper.

  18. Hafnium transistor process design for neural interfacing.

    Science.gov (United States)

    Parent, David W; Basham, Eric J

    2009-01-01

    A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology comprises 1-D analytical equations for threshold voltage specification and doping profiles, and 1-D MIS Technology Computer Aided Design (TCAD) to design a process implementing a specific threshold voltage, which minimized simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide films (HfO) were grown and characterized for dielectric constant and fixed oxide charge at various annealing temperatures, two important design variables in threshold voltage design.
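
    In the long-channel approximation, the kind of 1-D threshold-voltage relation used for such specification is the standard textbook expression V_T = V_FB + 2*phi_F + sqrt(2*q*eps_Si*N_A*(2*phi_F)) / C_ox, with phi_F = (kT/q) ln(N_A/n_i) and C_ox = eps_ox/t_ox. A sketch of that relation (all numerical values are illustrative assumptions, not values from the paper):

```python
import math

# Physical constants (SI units)
Q = 1.602e-19           # elementary charge [C]
KT_OVER_Q = 0.02585     # thermal voltage at 300 K [V]
EPS0 = 8.854e-12        # vacuum permittivity [F/m]
N_I = 1.0e16            # intrinsic carrier concentration of Si at 300 K [m^-3]
EPS_SI = 11.7           # relative permittivity of silicon

def threshold_voltage(n_a, t_ox, eps_ox_rel, v_fb):
    """Long-channel NMOS threshold voltage (textbook model, illustrative only).

    n_a: substrate acceptor doping [m^-3]; t_ox: gate dielectric thickness [m];
    eps_ox_rel: relative permittivity of the gate dielectric;
    v_fb: flat-band voltage [V], which absorbs the fixed-oxide-charge effect.
    """
    phi_f = KT_OVER_Q * math.log(n_a / N_I)            # Fermi potential
    c_ox = eps_ox_rel * EPS0 / t_ox                    # dielectric capacitance/area
    q_dep = math.sqrt(2.0 * Q * EPS_SI * EPS0 * n_a * 2.0 * phi_f)
    return v_fb + 2.0 * phi_f + q_dep / c_ox

# A high-k dielectric such as hafnium oxide (eps_r ~ 20, an assumed value) raises
# C_ox, shrinking the depletion-charge term relative to SiO2 (eps_r ~ 3.9).
vt_hfo2 = threshold_voltage(n_a=1e23, t_ox=5e-9, eps_ox_rel=20.0, v_fb=-0.8)
vt_sio2 = threshold_voltage(n_a=1e23, t_ox=5e-9, eps_ox_rel=3.9, v_fb=-0.8)
```

    The comparison shows the design lever the abstract points at: for the same doping and thickness, the high-k film yields a lower threshold voltage, and doping profile and flat-band (fixed-charge) terms give the remaining degrees of freedom.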

  19. Managing Analysis Models in the Design Process

    Science.gov (United States)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  20. Ergonomics approaches to sociotechnical design processes

    DEFF Research Database (Denmark)

    Broberg, Ole

    2003-01-01

    A five-year design process of a continuous process wok has been studied with the aim of elucidating the conditions for integrating work environment aspects. The design process was seen as a network building activity and as a social shaping process of the artefact. A work environment log is sugges...

  1. Designing future learning. A posthumanist approach to researching design processes

    DEFF Research Database (Denmark)

    Juelskjær, Malou

    I investigate how a design process (leading up to the design of a new education building) enacts, transforms and highlights tacit everyday practices and experiences in an education setting, thereby becoming an art of managing. I apply a post-humanist performative perspective, highlighting entangled agencies rather than focusing on human agency. I focus on the design process rather than the designer. The design process accelerated and performed past and future experiences of schooling, learning and teaching. This called for analytical attention to the agential forces of not only the material but also… and temporalities matter in design processes. Furthermore, the analysis emphasises how design translates affective economies and that attention to those affective economies is vital for the result of the design process.

  2. Process Variations and Probabilistic Integrated Circuit Design

    CERN Document Server

    Haase, Joachim

    2012-01-01

    Uncertainty in key parameters within a chip and between different chips in the deep sub-micron era plays a more and more important role. As a result, manufacturing process spreads need to be considered during the design process. A quantitative methodology is needed to ensure faultless functionality, despite existing process variations within given bounds, during product development. This book presents the technological, physical, and mathematical fundamentals for a design paradigm shift, from a deterministic process to a probability-orientated design process for microelectronic circuits. Readers will learn to evaluate the different sources of variations in the design flow in order to establish different design variants, while applying appropriate methods and tools to evaluate and optimize their design. Trains IC designers to recognize problems caused by parameter variations during manufacturing and to choose the best methods available to mitigate these issues during the design process; Offers both qual...
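    The probability-orientated design flow the book describes can be sketched, at its simplest, as a Monte Carlo yield estimate over assumed parameter spreads. The example below is illustrative only (the RC delay model, the 5% spreads, and the spec limit are assumptions, not taken from the book):

```python
import random

random.seed(0)

# Monte Carlo sketch of probability-orientated design: estimate the yield of
# an RC delay specification under Gaussian process spreads (illustrative values).
R_NOM, C_NOM = 1e3, 1e-12          # 1 kOhm, 1 pF nominal
SIGMA = 0.05                        # 5 % relative spread on each parameter
SPEC = 1.15e-9                      # delay spec: tau = R * C must stay below this

def sample_delay():
    """Draw one process corner and return the resulting RC delay."""
    r = random.gauss(R_NOM, SIGMA * R_NOM)
    c = random.gauss(C_NOM, SIGMA * C_NOM)
    return r * c

n = 100_000
yield_est = sum(sample_delay() < SPEC for _ in range(n)) / n
print(f"estimated yield: {yield_est:.3f}")
```

With a spec 15% above nominal and roughly 7% combined spread, most samples pass; tightening SPEC shows how quickly yield collapses, which is the trade-off such a flow quantifies.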

  3. Separation process design for isolation and purification of natural products

    DEFF Research Database (Denmark)

    Malwade, Chandrakant R.

    selection of separation techniques and operating conditions. The key factor in designing separation processes with multiple unit operations is to determine the synergy between them which in turn demands molecular level understanding of process streams. Therefore, the methodology is fortified with process......, thereby providing process information crucial for determining synergistic effects between different unit operations. In this work, the formulated methodology has been used to isolate and purify artemisinin, an antimalarial drug, from dried leaves of the plant Artemisia annua. A process flow sheet...... is generated consisting of maceration, flash column chromatography and crystallization unit operations for extraction, partial purification and final purification of artemisinin, respectively. PAT framework is used extensively to characterize the process streams at molecular level and the generated process...

  4. Design and Research of a Recycling Unit for Mandrel Cooling Water in the Forging Process

    Institute of Scientific and Technical Information of China (English)

    曹英强

    2012-01-01

    A recycling unit for the cooling water used by the mandrel during forging has been designed. The unit comprises a water tank, a water supply system, a collection system, a cooling-recycle filter system, and the associated electrical control system. By recovering, filtering, and reusing the mandrel cooling water, the unit avoids wasting cooling water, which helps protect the environment and reduces production cost. With wider adoption of the unit, the water required by the forging machine can be saved in large quantities.

  5. Gaps in the Design Process

    Energy Technology Data Exchange (ETDEWEB)

    Veers, Paul

    2016-10-04

    The design of offshore wind plants is a relatively new field. The move into U.S. waters will bring unique environmental conditions, as well as expectations from the authorities responsible for managing the development. Wind turbines' assumed design conditions are required to be checked against the site conditions of the plant. There are still some outstanding issues in how we can assure that the designs of both the turbine and the foundation are appropriate for the site and will carry an acceptable level of risk for the particular installation.

  6. Launch Vehicle Design Process Characterization Enables Design/Project Tool

    Science.gov (United States)

    Blair, J. C.; Ryan, R. S.; Schutzenhofer, L. A.; Robinson, Nancy (Technical Monitor)

    2001-01-01

    The objectives of the project described in this viewgraph presentation included the following: (1) Provide an overview characterization of the launch vehicle design process; and (2) Delineate design/project tool to identify, document, and track pertinent data.

  7. Graphic Design in Libraries: A Conceptual Process

    Science.gov (United States)

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  9. Rates of reaction and process design data for the Hydrocarb Process

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, M.; Kobayashi, Atsushi [Brookhaven National Lab., Upton, NY (United States); Tung, Yuanki [Hydrocarb Corp., New York, NY (United States)

    1992-08-01

    In support of studies for developing the coprocessing of fossil fuels with biomass by the Hydrocarb Process, experimental and process design data are reported. The experimental work includes the hydropyrolysis of biomass and the thermal decomposition of methane in a tubular reactor. The rates of reaction and conversion were obtained at temperature and pressure conditions pertaining to a Hydrocarb Process design. A Process Simulation Computer Model was used to design the process and obtain complete energy and mass balances. Multiple feedstocks, including biomass with natural gas and biomass with coal, were evaluated. Additional feedstocks, including green waste, sewage sludge and digester gas, were also evaluated for a pilot plant unit.
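    The overall stoichiometry of the methane decomposition step, CH4 → C + 2H2, lends itself to a quick mass-balance check of the kind such a simulation model must close. The sketch below is a back-of-the-envelope illustration with assumed feed and conversion figures, not the Process Simulation Computer Model used in the report:

```python
# Minimal mass-balance sketch for methane thermal decomposition,
# CH4 -> C + 2 H2, at an assumed fractional conversion (illustrative only).

M_CH4, M_C, M_H2 = 16.043, 12.011, 2.016  # molar masses, kg/kmol

def decompose(ch4_kmol: float, conversion: float):
    """Return (unreacted CH4, carbon, hydrogen) in kmol for a CH4 feed."""
    reacted = ch4_kmol * conversion
    return ch4_kmol - reacted, reacted, 2.0 * reacted

unreacted, carbon, hydrogen = decompose(100.0, 0.90)

# Mass must balance: total mass out equals the mass of CH4 fed.
mass_in = 100.0 * M_CH4
mass_out = unreacted * M_CH4 + carbon * M_C + hydrogen * M_H2
print(f"mass in {mass_in:.1f} kg, mass out {mass_out:.1f} kg")
```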

  10. Sensitivity of Process Design due to Uncertainties in Property Estimates

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent;

    2012-01-01

    The objective of this paper is to present a systematic methodology for performing analysis of sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) list of properties with critical importance on design; b) acceptable levels...... of accuracy for different thermo-physical property prediction models; and c) design variables versus properties relationships. The application of the methodology is illustrated through a case study of an extractive distillation process and sensitivity analysis of designs of various unit operations found...
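    The kind of sensitivity analysis described can be illustrated with a one-line finite-difference estimate. The sketch below is hypothetical (the exchanger sizing equation A = Q/(U·ΔTlm) and all numbers are illustrative assumptions, not taken from the paper): it shows how uncertainty in a property estimate, here the overall heat-transfer coefficient U, propagates into a design variable, the required area A.

```python
# Finite-difference sensitivity of a design variable (exchanger area A)
# to a property estimate (overall heat-transfer coefficient U).
# Illustrative numbers only.

def required_area(q_kw: float, u_kw_m2k: float, dt_lm: float) -> float:
    """A = Q / (U * dT_lm), the basic exchanger sizing equation."""
    return q_kw / (u_kw_m2k * dt_lm)

Q, U, dT = 500.0, 0.8, 25.0           # duty, coefficient, log-mean dT
A_nom = required_area(Q, U, dT)       # nominal design

eps = 0.01                            # 1 % perturbation in U
dA_dU = (required_area(Q, U * (1 + eps), dT) - A_nom) / (U * eps)

# Relative (logarithmic) sensitivity: here a 10 % error in U maps to
# roughly a 10 % error in A, flagging U as a critical property.
rel_sens = dA_dU * U / A_nom
print(f"A = {A_nom:.1f} m^2, d(ln A)/d(ln U) = {rel_sens:.3f}")
```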

  11. Representing the Learning Design of Units of Learning

    Directory of Open Access Journals (Sweden)

    Bill Olivier

    2004-07-01

    Full Text Available In order to capture current educational practices in eLearning courses, more advanced ‘learning design’ capabilities are needed than are provided by the open eLearning specifications hitherto available. Specifically, these fall short in terms of multi-role workflows, collaborative peer-interaction, personalization and support for learning services. We present a new specification that both extends and integrates current specifications to support the portable representation of units of learning (e.g. lessons, learning events that have advanced learning designs. This is the Learning Design specification. It enables the creation of a complete, abstract and portable description of the pedagogical approach taken in a course, which can then be realized by a conforming system. It can model multi-role teaching-learning processes and supports personalization of learning routes. The underlying generic pedagogical modelling language has been translated into a specification (a standard developed and agreed upon by domain and industry experts that was developed in the context of IMS, one of the major bodies involved in the development of interoperability specifications in the field of eLearning. The IMS Learning Design specification is discussed in this article in the context of its current status, its limitations and its future development.

  12. NASA System Engineering Design Process

    Science.gov (United States)

    Roman, Jose

    2011-01-01

    This slide presentation reviews NASA's use of systems engineering for the complete life cycle of a project. Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. Each phase of a NASA project is terminated with a Key decision point (KDP), which is supported by major reviews.

  13. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
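    The mesh-to-particle resampling described above can be illustrated with a minimal 1-D linear (cloud-in-cell) interpolation. The NumPy sketch below is a simplified single-threaded illustration of the operation, not the authors' GPU or multicore implementation:

```python
import numpy as np

def mesh_to_particle(field: np.ndarray, h: float, xp: np.ndarray) -> np.ndarray:
    """Linearly interpolate a 1-D mesh field (spacing h, node i at x = i*h)
    onto particle positions xp using cloud-in-cell weights."""
    i = np.floor(xp / h).astype(int)   # left mesh node of each particle
    w = xp / h - i                     # fractional distance past that node
    return (1.0 - w) * field[i] + w * field[i + 1]

h = 0.1
field = np.arange(6, dtype=float) * h  # field(x) = x at the nodes
xp = np.array([0.05, 0.32, 0.4])
vals = mesh_to_particle(field, h, xp)
print(vals)                            # a linear field is reproduced exactly
```

The memory-bandwidth limit the abstract mentions shows up here as the gather `field[i]`/`field[i + 1]`: every particle touches scattered mesh locations, which is exactly what GPU and multicore implementations must organize carefully.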

  14. Integrated Process Design and Control of Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2015-01-01

    In this work, integrated design and control of reactive distillation processes is presented. Simple graphical design methods that are similar in concept to non-reactive distillation processes are used, such as reactive McCabe-Thiele method and driving force approach. The methods are based...... of this approach, it is shown that designing the reactive distillation process at the maximum driving force results in an optimal design in terms of controllability and operability. It is verified that the reactive distillation design option is less sensitive to the disturbances in the feed at the highest driving...

  15. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.

    Science.gov (United States)

    Menges, Achim

    2012-03-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.
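    The exploratory evolutionary computation mentioned above can be sketched as a minimal generational loop. The example below is a generic illustration with a toy fitness function (an all-ones bit string stands in for a design goal), not the morphogenetic design system of the paper; it shows the selection, crossover and mutation skeleton that evolutionary design methods build on.

```python
import random

random.seed(1)
TARGET = [1] * 20                              # toy design goal: all-ones genome

def fitness(genome):                           # number of genes matching the goal
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):                 # flip each gene with small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):                           # one-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)        # rank by fitness
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]                         # truncation selection (elitism)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(gen, fitness(best))
```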

  16. The cognition in the design process

    Directory of Open Access Journals (Sweden)

    Tiago Barros Pontes e Silva

    2016-03-01

    Full Text Available The purpose of this document is to present an approach to understanding the design process from the framework of cognitive psychology. It is anchored in models of cognitive architecture and in problem-solving approaches, in order to suggest a metacognition practice for designers. Design is presented as a process of problem solving, including common heuristics, its analysis-and-synthesis nature, and contributions from cognitive psychology on metacognition processes, creativity and evaluation.

  17. Designing with video focusing the user-centred design process

    CERN Document Server

    Ylirisku, Salu Pekka

    2007-01-01

    Digital video for user-centered co-design is an emerging field of design, gaining increasing interest in both industry and academia. It merges the techniques and approaches of design ethnography, participatory design, interaction analysis, scenario-based design, and usability studies. This book covers the complete user-centered design project. It illustrates in detail how digital video can be utilized throughout the design process, from early user studies to making sense of video content and envisioning the future with video scenarios to provoking change with video artifacts. The text includes

  18. Adaptive subsystem for computer-aided design synthesis of REA units on minicomputers

    Directory of Open Access Journals (Sweden)

    Yu. F. Zin'kovskii

    1986-04-01

    Full Text Available Questions of constructing an adaptive CAD subsystem for integrated electronics units are considered, based on the principles of adaptation and self-tuning of the software and mathematical support to the specific problem being solved.

  19. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent...

  20. Architecture and Design of Medical Processor Units for Medical Networks

    CERN Document Server

    Ahamed, Syed V; 10.5121/ijcnc.2010.2602

    2011-01-01

    This paper introduces analogical and deductive methodologies for the design of medical processor units (MPUs). From the study of the evolution of numerous earlier processors, we derive the basis for the architecture of MPUs. These specialized processors perform unique medical functions encoded as medical operational codes (mopcs). From a pragmatic perspective, MPUs function very close to CPUs. Both processors have unique operation codes that command the hardware to perform a distinct chain of subprocesses upon operands and generate a specific result unique to the opcode and the operand(s). In medical environments, the MPU decodes the mopcs, executes a series of medical sub-processes, and sends out secondary commands to the medical machine. Whereas operands in a typical computer system are numerical and logical entities, the operands in a medical machine are objects such as patients, blood samples, tissues, operating rooms, medical staff, medical bills, patient payments, etc. We follow the functional overlap betw...

  1. Total Ship Design Process Modeling

    Science.gov (United States)

    2012-04-30

    Microsoft Project® or Primavera®, and perform process simulations that can investigate risk, cost, and schedule trade-offs. Prior efforts to capture... planning in the face of disruption, delay, and late-changing requirements. ADePT is interfaced with Primavera, the AEC industry favorite program

  2. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  3. Aesthetic design process: Descriptive design research and ways forward

    OpenAIRE

    Jagtap, Santosh; Jagtap, Sachin

    2015-01-01

    Consumer response to designed products has a profound effect on how products are interpreted, approached and used. Product design is crucial in determining this consumer response. Research in this field has been centered on studying the relationship between product features and subjective responses of users and consumers to those features. The subject of aesthetic or styling design process has been relatively neglected despite the important role of this process in fulfilling intended consumer...

  4. Design of the Secondary Optical Elements for Concentrated Photovoltaic Units with Fresnel Lenses

    Directory of Open Access Journals (Sweden)

    Yi-Cheng Chen

    2015-10-01

    Full Text Available The goal of the present study was to determine the optimum parameters of secondary optical elements (SOEs for concentrated photovoltaic (CPV units with flat Fresnel lenses. Three types of SOEs are under consideration in the design process, including a kaleidoscope with equal optical path design (KOD, a kaleidoscope with flat top surface (KFTS, and an open-truncated tetrahedral pyramid with specular walls (SP. The function of using a SOE with a Fresnel lens in a CPV unit is to achieve high optical efficiency, low sensitivity to sun tracking error, and improved uniformity of the irradiance distribution on the solar cell. A ray tracing technique was developed to simulate the optical characteristics of the CPV unit with various design parameters of each type of SOE. Finally, an optimum KOD-type SOE was determined by a parametric design process. The resulting optical performance of the CPV unit with the optimum SOE was evaluated in both single-wavelength and broadband simulations of the solar spectrum.
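    The ray-tracing technique used to evaluate the SOEs repeats one elementary operation at every optical interface: Snell refraction. The sketch below is a generic illustration, not the authors' simulation code; the PMMA refractive index of about 1.49 is an assumed typical value for Fresnel-lens material.

```python
import math

def refract(theta_i_deg: float, n1: float, n2: float):
    """Snell's law at a flat interface: return the refraction angle in
    degrees, or None when total internal reflection occurs."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None                   # total internal reflection
    return math.degrees(math.asin(s))

# Ray from air (n = 1.0) into PMMA (n ~ 1.49), a typical Fresnel-lens material.
t = refract(30.0, 1.0, 1.49)
print(f"refracted at {t:.2f} deg")
```

A full SOE tracer chains this step surface by surface and tallies which rays reach the cell, which is how efficiency and irradiance uniformity are estimated.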

  5. Practicing universal design to actual hand tool design process.

    Science.gov (United States)

    Lin, Kai-Chieh; Wu, Chih-Fu

    2015-09-01

    UD evaluation principles are difficult to implement in product design. This study proposes a methodology for implementing UD in the design process through user participation. The original UD principles and user experience are used to develop the evaluation items. Differences between product types were considered. Factor analysis and Quantification Theory Type I were used to eliminate evaluation items considered inappropriate and to examine the relationship between evaluation items and product design factors. Product design specifications were established for verification. The results showed that converting user evaluation into crucial design verification factors, via a generalized evaluation scale based on product attributes, and applying the design factors in product design can improve users' UD evaluation. The design process of this study is expected to contribute to user-centered UD application.

  6. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  7. Target value design: applications to newborn intensive care units.

    Science.gov (United States)

    Rybkowski, Zofia K; Shepley, Mardelle McCuskey; Ballard, H Glenn

    2012-01-01

    There is a need for greater understanding of the health impact of various design elements in neonatal intensive care units (NICUs) as well as cost-benefit information to make informed decisions about the long-term value of design decisions. This is particularly evident when design teams are considering the transition from open-bay NICUs to single-family-room (SFR) units. This paper introduces the guiding principles behind target value design (TVD)-a price-led design methodology that is gaining acceptance in healthcare facility design within the Lean construction methodology. The paper also discusses the role that set-based design plays in TVD and its application to NICUs.

  8. Improving the design process by integrating design analysis

    OpenAIRE

    Eriksson, Martin; Burman, Åke

    2005-01-01

    A common denominator in most design literature is the goal of improving methods and techniques for the design process, thus contributing to increased efficiency of the design activities. It is a striking fact that the majority of the improvements suggested focus solely on qualitative methods and techniques, thereby neglecting to recognize the improvement potential inherent in quantitative methods and techniques. At the Division of Machine Design at Lund University, a number of ...

  9. THEORETICAL FRAMES FOR DESIGNING REVERSE LOGISTICS PROCESSES

    OpenAIRE

    Grabara, Janusz K.; Sebastian Kot

    2009-01-01

    Logistics processes of return flow became more and more important in present business practice. Because of better customer satisfaction, environmental and financial aspects many enterprises deal with reverse logistics performance. The paper is a literature review focused on the design principles of reverse logistics processes. Keywords: reverse logistics, designing.

  10. Facilitating Teamwork in the Design Process

    DEFF Research Database (Denmark)

    Bang, Anne Louise; Nissen, Kirsten

    2009-01-01

    By approaching the Repertory Grid as an exploratory design game and drawing on insight in diagrammatic reasoning we argue that this approach is useful in supporting team work in the design process. In this paper we draw on two courses inviting textile design students to contribute to the developm...

  11. Design study report. Volume 2: Electronic unit

    Science.gov (United States)

    1973-01-01

    The recording system discussed is required to record and reproduce wideband data from either of the two primary Earth Resources Technology Satellite sensors: the Return Beam Vidicon (RBV) camera or the Multi-Spectral Scanner (MSS). The camera input is an analog signal with a bandwidth from dc to 3.5 MHz; this signal is accommodated through FM recording techniques which provide a recorder signal-to-noise ratio in excess of 39 dB, black-to-white signal/rms noise, over the specified bandwidth. The MSS provides, as its initial output, 26 narrowband channels. These channels are multiplexed prior to transmission, or recording, into a single 15 Megabit/second digital data stream. Within the recorder, the 15 Megabit/second NRZL signal is processed through the same FM electronics as the RBV signal, but the basic FM standards are modified to provide an internal, 10.5 MHz baseband response with a signal-to-noise ratio of about 25 dB. Following FM demodulation, however, the MSS signal is digitally re-shaped and re-clocked so that good bit stability and signal-to-noise exist at the recorder output.

  12. PROPOSAL OF SPATIAL OPTIMIZATION OF PRODUCTION PROCESS IN PROCESS DESIGNER

    Directory of Open Access Journals (Sweden)

    Peter Malega

    2015-03-01

    Full Text Available This contribution is focused on optimizing the use of space in the production process using the software Process Designer. The aim of this contribution is to suggest possible improvements to the existing layout of the selected production process. The production process was analysed in terms of inputs, outputs and course of actions. Nowadays there are many software solutions aimed at optimizing the use of space. One of these software products is Process Designer, which belongs to the Tecnomatix product line. This software is primarily aimed at production planning. With Process Designer it is possible to design the production layout and subsequently to analyse the production or change it according to the current needs of the company.

  13. A Stellar Reference Unit Design Study for SIRTF

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Liebe, Carl Christian

    1996-01-01

    A design study for a stellar reference unit, or star tracker, for SIRTF was conducted in FY96 in conjunction with the Tracking Sensors Group of the Avionic Equipment Section of JPL. The resulting design was derived from the Oersted, autonomous, Advanced Stellar Compass, star tracker. The projecte...... of star tracker integration with the cryogenic telescope structure....

  15. Sustainable Process Design of Lignocellulose based Biofuel

    DEFF Research Database (Denmark)

    Mangnimit, Saranya; Malakul, Pomthong; Gani, Rafiqul

    the determination of sustainable process options, if they exist. The paper will highlight an improved alternative process design compared to a base case (published) design in terms of production cost, waste, energy usage and environmental impacts, criteria that are associated with sustainable process design...... a combustion processing step, carbon dioxide and other important greenhouse gases are released. This is considered non-renewable and non-sustainable energy and may be one of the major causes of global warming; climate change concerns coupled with high oil prices are therefore driving efforts to increase...

  16. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  17. Variant Designing in the Preliminary Small Ship Design Process

    Directory of Open Access Journals (Sweden)

    Karczewski Artur

    2017-06-01

    Full Text Available Ship designing is a complex process, as the ship itself is a complex, technical multi-level object which operates in the air/water boundary environment and is exposed to the action of many different external and internal factors resulting from the adopted technical solutions, type of operation, and environmental conditions. A traditional ship design process consists of a series of subsequent multistage iterations, which gradually increase the level of design identification. The paper presents problems related to the design of a small untypical vessel with the aid of a variant methodology making use of optimisation algorithms. A computer-aided design methodology has been developed which does not need permanent reference to already-built real ships and empirical-statistical relations. Possibilities are indicated for integrating the early design stages and for designing the hull shape and parameters in parallel.

  18. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on the operation of the FCC unit was studied to identify the cause of the deteriorated product distribution associated with the increasingly heavy feedstock for the ARGG unit. In order to maximize the economic benefits of the ARGG unit, a string of measures, including the modification of catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature, once-through operation, and optimization of the catalyst regeneration technique, was adopted to adapt the ARGG unit to processing heavy feedstock with carbon residue averaging 7%. The heavy oil processing technology has brought about apparent economic benefits.

  19. Meta-model Based Model Organization and Transformation of Design Pattern Units in MDA

    Institute of Scientific and Technical Information of China (English)

    Chang-chun YANG; Zi-yi ZHAO; Jing Sun

    2010-01-01

    To apply design patterns, which are various in kind and constantly changing, in MDA from idea to application, one approach must solve the problem of pattern disappearance that occurs during pattern instantiation, guarantee the independence of patterns, and at the same time apply the process to multiple design patterns. To solve these two problems, a modeling method for design pattern units based on meta-models is adopted, i.e., the basic operations are divided into atoms in the meta-model tier and the atoms are then combined into design-pattern-unit meta-models free of business logic. After one process of conversion, the purpose of composing various pattern-unit meta-models and separating business logic from pattern logic is achieved.

  20. Property Based Process and Product Synthesis and Design

    DEFF Research Database (Denmark)

    Eden, Mario Richard

    2003-01-01

    This thesis describes the development of a general framework for solving process and product design problems. Targeting the desired performance of the system in a systematic manner relieves the iterative nature of conventional design techniques. Furthermore, conventional component based methods...... roles a property model plays at different stages of the solution to a design problem, it is discovered that by decoupling the constitutive equations, that make up the property model, from the balance and constraint equations of the process or product model, a significant reduction in problem complexity...... in terms of the constitutive (synthesis/design) variables instead of the process variables, thus providing the synthesis/design targets. The second reverse problem (reverse property prediction) solves the constitutive equations to identify unit operations, operating conditions and/or products by matching...

  1. Integrated Process Design and Control of Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2015-01-01

    In this work, integrated process design and control of reactive distillation processes is presented. Simple graphical design methods that are similar in concept to non-reactive distillation processes are used, such as reactive McCabe-Thiele method and driving force approach. The methods are based...... on the element concept, which is used to translate a system of compounds into elements. The operation of the reactive distillation column at the highest driving force and other candidate points is analyzed through analytical solution as well as rigorous open-loop and closed-loop simulations. By application...... of this approach, it is shown that designing the reactive distillation process at the maximum driving force results in an optimal design in terms of controllability and operability. It is verified that the reactive distillation design option is less sensitive to the disturbances in the feed at the highest driving...
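    The driving-force approach referred to here can be reproduced in a few lines for a binary (or element-based binary) mixture with constant relative volatility alpha: the driving force is F(x) = y − x = αx/(1 + (α − 1)x) − x, and the design target is its maximum, which lies at x* = 1/(1 + √α). A minimal sketch under the constant-α assumption (α = 2.0 is an illustrative value):

```python
# Driving force F(x) = y - x for a binary separation with constant
# relative volatility alpha; the maximum of F locates the design target.

def driving_force(x: float, alpha: float) -> float:
    y = alpha * x / (1.0 + (alpha - 1.0) * x)   # equilibrium vapor composition
    return y - x

alpha = 2.0
xs = [i / 1000 for i in range(1001)]            # grid over liquid composition
x_max = max(xs, key=lambda x: driving_force(x, alpha))

# Analytical maximum for constant alpha: x* = 1 / (1 + sqrt(alpha))
print(x_max, driving_force(x_max, alpha))
```

Designing at x* is what the abstract's claim rests on: small disturbances around the maximum change F least, so operation there is least sensitive to feed upsets.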

  2. Integrated Process Design, Control and Analysis of Intensified Chemical Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil

    approach is to tackle process design and controllability issues simultaneously, in the early stages of process design. This simultaneous synthesis approach provides optimal/near optimal operation and more efficient control of conventional (non-reactive binary distillation columns) as well as complex...... chemical processes; for example, intensified processes such as reactive distillation. Most importantly, it identifies and eliminates potentially promising design alternatives that may have controllability problems later. To date, a number of methodologies have been proposed and applied on various problems...... design of the process as well as the controller structure. Through analytical, steady-state and closed-loop dynamic analysis it is verified that the control structure, disturbance rejection and energy requirement of the reactive distillation column is better than any other operation point...

  3. Systematic sustainable process design and analysis of biodiesel processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Ismail, Muhammad Imran; Babi, Deenesh Kavi

    2013-01-01

    technology. Second, the evaluation of this superstructure for systematic screening to obtain an appropriate base case design. This is done by first reducing the search space using a sustainability analysis, which provides key indicators for process bottlenecks of different flowsheet configurations...... and then by further reducing the search space by using economic evaluation and life cycle assessment. Third, the determination of sustainable design with/without process intensification using a phenomena-based synthesis/design method. A detailed step by step application of the framework is highlighted through...... process intensification opportunities. This work focuses on three main aspects that have been incorporated into a systematic computer-aided framework for sustainable process design. First, the creation of a generic superstructure, which consists of all possible process alternatives based on available...

  4. Designing reactive distillation processes with improved efficiency

    NARCIS (Netherlands)

    Almeida-Rivera, C.P.

    2005-01-01

    In this dissertation a life-span inspired perspective is taken on the conceptual design of grassroots reactive distillation processes. Attention was paid to the economic performance of the process and to potential losses of valuable resources over the process life span. The research was cast in a se

  5. Designing future learning. A posthumanist approach to researching design processes

    DEFF Research Database (Denmark)

    Juelskjær, Malou

I investigate how a design process (leading up to the design of a new education building) enacts, transforms and highlights tacit everyday practices and experiences in an education setting, whereby it becomes an art of managing. I apply a post-humanist performative perspective, highlighting entangled...... agencies rather than focusing on human agency. I focus on the design process rather than the designer. The design process accelerated and performed past and future experiences of schooling, learning, teaching. This called for analytical attention to agential forces of not only the material but also...... the spatio-temporal. The concept of spacetimemattering from the work of Karen Barad (2007) highlights the performativity, the continuous coming into being through entanglement and differentiation, of space, time, matter and meaning. I draw on this thinking in order to re-consider how multiple spatialities...

  6. Hidden realities inside PBL design processes

    DEFF Research Database (Denmark)

    Pihl, Ole Verner

    2015-01-01

Design Process, but is a group-based architecture and design education better than that which is individually based? How does PBL affect space, form, and creative processes? Hans Kiib, professor and one of the founders of the Department of Architecture and Design in Aalborg, describes his intentions...... within the group work, as it is closer related to the actual PBL process”. Is the Integrated Design Process (Knudstrup 2004) and is Kolb (1975) still current and valid? Can we still use these methodologies when we must create “learning for an unknown future,” as Ronald Barnett (2004) claims that we...... investigates the creative processes of the collective and the individual and clarifies some of the hidden realities behind the PBL-based creative processes, both through an inquiry with the students and a more methodological and theoretical approach. The paper also explores how to integrate artistic...

  7. Multidisciplinary systems engineering architecting the design process

    CERN Document Server

    Crowder, James A; Demijohn, Russell

    2016-01-01

This book presents Systems Engineering from a modern, multidisciplinary engineering approach, providing the understanding that all aspects of systems design (systems, software, test, security, maintenance, and the full life-cycle) must be factored into any large-scale system design up front, not later. It lays out a step-by-step approach to systems-of-systems architectural design, describing in detail the documentation flow throughout the systems engineering design process. It provides a straightforward look at the entire systems engineering process, with realistic case studies, examples, and design problems that enable students to gain a firm grasp of the fundamentals of modern systems engineering. Included is a comprehensive design problem that weaves throughout the entire textbook, concluding with a complete top-level systems architecture for a real-world design problem.

  8. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We have found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding the design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts to adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy
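The central quantity in this thesis, the entropy produced when heat crosses a finite temperature difference, can be illustrated with a short numeric sketch. The temperatures and duty below are illustrative values, not taken from the thesis:

```python
def entropy_production(q_watts, t_hot, t_cold):
    """Entropy production rate (W/K) when heat flows at rate q_watts
    from a reservoir at t_hot to one at t_cold (both in kelvin)."""
    if t_hot <= 0 or t_cold <= 0:
        raise ValueError("temperatures must be positive (kelvin)")
    return q_watts * (1.0 / t_cold - 1.0 / t_hot)

# Transferring 1 kW across a large temperature gap produces far more
# entropy than the same duty across a small gap -- the motivation for
# distributing heat exchange along the trays of a diabatic column.
large_gap = entropy_production(1000.0, 350.0, 300.0)  # ~0.476 W/K
small_gap = entropy_production(1000.0, 310.0, 300.0)  # ~0.108 W/K
assert large_gap > small_gap > 0
```

The design freedom exploited in the thesis is essentially where along the column this irreversibility is incurred, for a fixed total heat-transfer area.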

  9. Strategies for Stabilizing Nitrogenous Compounds in ECLSS Wastewater: Top-Down System Design and Unit Operation Selection with Focus on Bio-Regenerative Processes for Short and Long Term Scenarios

    Science.gov (United States)

    Lunn, Griffin M.

    2011-01-01

Water recycling and eventual nutrient recovery are crucial for surviving in or past low earth orbit. New approaches and system architecture considerations need to be addressed to meet current and future system requirements. This paper proposes a flexible system architecture that breaks down pretreatment steps into discrete areas where multiple unit operations can be considered. An overview focusing on the urea and ammonia conversion steps allows an analysis of each process's strengths and weaknesses and its synergy with upstream and downstream processing. Process technologies covered include chemical pretreatment, biological urea hydrolysis, chemical urea hydrolysis, combined nitrification-denitrification, nitrate nitrification, anammox denitrification, and regenerative ammonia absorption through struvite formation. Biological processes are considered mainly for their ability both to maximize water recovery and to produce nutrients for future plant systems. Unit operations can be considered against traditional equivalent system mass requirements in the near term, or for what they can provide downstream in the form of usable chemicals or nutrients for the long-term closed-loop ecological control and life support system. Optimally, this would allow a system to meet the former while supporting the latter without major modification.

  10. Environmental Challenges in the Design Process

    DEFF Research Database (Denmark)

    Petersen, Mads Dines; Knudstrup, Mary-Ann

    2016-01-01

The present article is based on qualitative interviews with eight offices involved in the early conceptual stages of the design process. It investigates what experiences they have with their design processes, especially in relation to addressing environmental issues. The data from the interviews are analyzed...... through a coding scheme that focuses on what experiences they have in the early stages of the design process with the brief, the environmental concerns and the challenges they meet in this work. From the interviews it is seen that the direction today is to have an increased focus on a multidisciplinary...

  11. An Integrated Course and Design Project in Chemical Process Design.

    Science.gov (United States)

    Rockstraw, David A.; And Others

    1997-01-01

    Describes a chemical engineering course curriculum on process design, analysis, and simulation. Includes information regarding the sequencing of engineering design classes and the location of the classes within the degree program at New Mexico State University. Details of course content are provided. (DDR)

  12. Optimization criteria for the design of orbital replacement units (ORUs)

    Science.gov (United States)

    Schulze, Manfred W.

A reduction of the life cycle costs of spacecraft or Space Station elements can be achieved by a modular build-up which allows in-orbit replacement, maintenance, and servicing of functional units named Orbital Replacement Units (ORUs). The criteria for an optimal ORU design are presented. Requirements involving the user spacecraft configuration, the servicing vehicle, and handling by astronauts and by the remote manipulator system are considered.

  13. Parametric design studies of toroidal magnetic energy storage units

    Science.gov (United States)

    Herring, J. Stephen

Superconducting magnetic energy storage (SMES) units have a number of advantages as storage devices. Electrical current is the input, output, and stored medium, allowing for completely solid-state energy conversion. The magnets themselves have no moving parts. The round-trip efficiency is higher than those of batteries, compressed air, or pumped hydro. Output power can be very high, allowing complete discharge of the unit within a few seconds. Finally, the unit can be designed for a very large number of cycles, limited basically by fatigue in the structural components. A small systems code was written to produce and evaluate self-consistent designs for toroidal superconducting energy storage units. The units can use either low-temperature or high-temperature superconductors. The coils have a D shape in which the conductor and its stabilizer/structure are loaded only in tension, and the centering forces are borne by a bucking cylinder. The coils are convectively cooled from a cryogenic reservoir in the bore of the coils. The coils are suspended in a cylindrical metal shell which protects the magnet during rail, automotive, or shipboard use. It is important to note that the storage unit does not rely on its surroundings for structural support, other than for normal gravity and inertial loads. Designs produced by the systems code are presented for toroidal energy storage units. A wide range of parameters has been considered, resulting in units storing from 1 MJ to 72 GJ. Maximum fields range from 5 T to 20 T. The masses and volumes of the coils, bucking cylinder, coolant, insulation, and outer shell are calculated. For unattended use, the allowable operating time using only the boiloff of the cryogenic fluid for refrigeration is calculated. For larger units, the coils were divided into modules suitable for normal truck or rail transport.
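The quoted 1 MJ to 72 GJ range can be sanity-checked from the magnetic energy density, which scales with the square of the field. This is a hedged back-of-the-envelope sketch that assumes a uniform field filling the bore, whereas the field of a real toroid falls off as 1/r, so it overestimates the stored energy:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def stored_energy_joules(b_tesla, volume_m3):
    """Rough SMES sizing: energy = B^2 / (2*mu0) * field volume,
    assuming a uniform field (an upper-bound estimate for a toroid)."""
    return b_tesla**2 / (2 * MU_0) * volume_m3

# A 5 T field filling 1 m^3 stores roughly 10 MJ; raising the field to
# the 20 T upper end quoted in the abstract multiplies that by 16.
e5 = stored_energy_joules(5.0, 1.0)
e20 = stored_energy_joules(20.0, 1.0)
assert abs(e20 / e5 - 16.0) < 1e-9
```

The quadratic dependence on B is why the abstract's maximum-field parameter dominates the trade between coil mass and stored energy.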

  14. Engineering design: A cognitive process approach

    Science.gov (United States)

    Strimel, Greg Joseph

    The intent of this dissertation was to identify the cognitive processes used by advanced pre-engineering students to solve complex engineering design problems. Students in technology and engineering education classrooms are often taught to use an ideal engineering design process that has been generated mostly by educators and curriculum developers. However, the review of literature showed that it is unclear as to how advanced pre-engineering students cognitively navigate solving a complex and multifaceted problem from beginning to end. Additionally, it was unclear how a student thinks and acts throughout their design process and how this affects the viability of their solution. Therefore, Research Objective 1 was to identify the fundamental cognitive processes students use to design, construct, and evaluate operational solutions to engineering design problems. Research Objective 2 was to determine identifiers within student cognitive processes for monitoring aptitude to successfully design, construct, and evaluate technological solutions. Lastly, Research Objective 3 was to create a conceptual technological and engineering problem-solving model integrating student cognitive processes for the improved development of problem-solving abilities. The methodology of this study included multiple forms of data collection. The participants were first given a survey to determine their prior experience with engineering and to provide a description of the subjects being studied. The participants were then presented an engineering design challenge to solve individually. While they completed the challenge, the participants verbalized their thoughts using an established "think aloud" method. These verbalizations were captured along with participant observational recordings using point-of-view camera technology. Additionally, the participant design journals, design artifacts, solution effectiveness data, and teacher evaluations were collected for analysis to help achieve the

  15. Design and Implementation of Fixed Point Arithmetic Unit

    Directory of Open Access Journals (Sweden)

    S Ramanathan

    2016-06-01

Full Text Available This paper aims at the implementation of a Fixed Point Arithmetic Unit. A real number is represented in Qn.m format, where n is the number of bits to the left of the binary point and m is the number of bits to the right of the binary point. The Fixed Point Arithmetic Unit was designed using Verilog HDL and incorporates an adder, a multiplier and a subtractor. We carried out the simulations in ModelSim and Cadence IUS, used Cadence RTL Compiler for synthesis, and used Cadence SoC Encounter for physical design, targeting 180 nm technology for the ASIC implementation. From the synthesis results it is found that our design consumes 1.524 mW of power and requires an area of 20823.26 μm².
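The Qn.m representation described in the abstract can be sketched in software. The following Python model is an illustration of the number format only, not the paper's Verilog design: values are scaled by 2^m, addition works directly on the scaled integers, and a multiply produces 2m fractional bits and must be shifted back down to stay in format:

```python
def to_fixed(x, m):
    """Quantize a real number to an integer in Qn.m format (m fractional bits)."""
    return int(round(x * (1 << m)))

def from_fixed(v, m):
    """Recover the real value represented by the fixed-point integer v."""
    return v / (1 << m)

def fixed_mul(a, b, m):
    """Fixed-point multiply: the raw product carries 2m fractional bits,
    so shift right by m to return to Qn.m."""
    return (a * b) >> m

m = 8  # 8 fractional bits, e.g. a Q8.8-style layout
a, b = to_fixed(1.5, m), to_fixed(2.25, m)   # 384 and 576
assert from_fixed(a + b, m) == 3.75          # addition needs no rescaling
assert from_fixed(fixed_mul(a, b, m), m) == 3.375
```

In the hardware version, n bounds the integer range and overflow must be handled explicitly; this sketch uses Python's unbounded integers and ignores saturation.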

  16. Design variables and constraints in fashion store design processes

    DEFF Research Database (Denmark)

    Haug, Anders; Borch Münster, Mia

    2015-01-01

    a set of subsystems, while considering their mutual interdependencies. Research limitations/implications: – The proposed framework may be used as a point of departure and a frame of reference for future research into fashion store design. Practical implications: – The paper may support retail designers......Purpose: – Several frameworks of retail store environment variables exist, but as shown by this paper, they are not particularly well-suited for supporting fashion store design processes. Thus, in order to provide an improved understanding of fashion store design, the purpose of this paper...... is to identify the most important store design variables, organise these variables into categories, understand the design constraints between categories, and determine the most influential stakeholders. Design/methodology/approach: – Based on a discussion of existing literature, the paper defines a framework...

  17. Parallelization of heterogeneous reactor calculations on a graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Malofeev, V. M., E-mail: vm-malofeev@mail.ru; Pal’shin, V. A. [National Research Center Kurchatov Institute (Russian Federation)

    2016-12-15

    Parallelization is applied to the neutron calculations performed by the heterogeneous method on a graphics processing unit. The parallel algorithm of the modified TREC code is described. The efficiency of the parallel algorithm is evaluated.

  18. Hidden realities inside PBL design processes

    DEFF Research Database (Denmark)

    Pihl, Ole Verner

    2015-01-01

Design Process, but is a group-based architecture and design education better than that which is individually based? How does PBL affect space, form, and creative processes? Hans Kiib, professor and one of the founders of the Department of Architecture and Design in Aalborg, describes his intentions...... within the group work, as it is closer related to the actual PBL process”. Is the Integrated Design Process (Knudstrup 2004) and is Kolb (1975) still current and valid? Can we still use these methodologies when we must create “learning for an unknown future,” as Ronald Barnett (2004) claims that we...... are passing from a complex world into one based on super complexity? Could Gaston Bachelard (1958), who writes in his book The Poetic of Space "that poets and artists are born phenomenologists," help architecture and design students in their journey to find his/her own professional expression? This paper...

  19. Chemical Process Design: An Integrated Teaching Approach.

    Science.gov (United States)

    Debelak, Kenneth A.; Roth, John A.

    1982-01-01

    Reviews a one-semester senior plant design/laboratory course, focusing on course structure, student projects, laboratory assignments, and course evaluation. Includes discussion of laboratory exercises related to process waste water and sludge. (SK)

  20. Molecular Thermodynamics for Chemical Process Design

    Science.gov (United States)

    Prausnitz, J. M.

    1976-01-01

    Discusses that aspect of thermodynamics which is particularly important in chemical process design: the calculation of the equilibrium properties of fluid mixtures, especially as required in phase-separation operations. (MLH)

  1. Diffusion tensor fiber tracking on graphics processing units.

    Science.gov (United States)

    Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo

    2008-10-01

    Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.
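The abstract does not state how the variation index is defined. One plausible formulation (hypothetical, for illustration only, and not necessarily the paper's definition) is the mean distance between matched streamline points from the GPU and CPU runs, normalized by the reference streamline's mean step length:

```python
import math

def variation_index(cpu_pts, gpu_pts):
    """Illustrative variation index: mean point-to-point distance between
    matched streamlines, normalized by the mean step length of the CPU
    (reference) streamline. Assumes equal-length, point-matched tracks."""
    assert len(cpu_pts) == len(gpu_pts) >= 2
    mismatch = sum(math.dist(p, q) for p, q in zip(cpu_pts, gpu_pts)) / len(cpu_pts)
    steps = [math.dist(cpu_pts[i], cpu_pts[i + 1]) for i in range(len(cpu_pts) - 1)]
    return mismatch / (sum(steps) / len(steps))

# A GPU track offset by 0.01 units from a CPU track with unit steps
# yields an index of 0.01, i.e. a deviation of 1% of a step.
cpu = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
gpu = [(0.0, 0.01, 0.0), (1.0, 0.01, 0.0), (2.0, 0.01, 0.0)]
assert variation_index(cpu, gpu) < 0.05
```

Any metric of this shape captures the paper's point: limited GPU floating-point accuracy perturbs tracked fibers slightly, and a scalar index lets one check the perturbation stays acceptably small.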

  2. Sensor Network Design for Nonlinear Processes

    Institute of Scientific and Technical Information of China (English)

    李博; 陈丙珍

    2003-01-01

    This paper presents a method to design a cost-optimal nonredundant sensor network to observe all variables in a general nonlinear process. A mixed integer linear programming model was used to minimize the cost with data classification to check the observability of all unmeasured variables. This work is a starting point for designing sensor networks for general nonlinear processes based on various criteria, such as reliability and accuracy.
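The cost-minimization with an observability constraint described above can be illustrated with a small stand-in. Instead of the paper's mixed-integer linear program, this sketch brute-forces all sensor subsets; the `observable` predicate is a placeholder for the data-classification check, and the flows and costs are invented for illustration:

```python
from itertools import combinations

def min_cost_sensor_set(variables, costs, observable):
    """Exhaustive stand-in for the MILP in the abstract: return the
    (cost, subset) of measured variables of minimum total cost such
    that observable(subset) holds, or None if no subset qualifies."""
    best = None
    for r in range(len(variables) + 1):
        for subset in combinations(variables, r):
            chosen = set(subset)
            if observable(chosen):
                cost = sum(costs[v] for v in chosen)
                if best is None or cost < best[0]:
                    best = (cost, chosen)
    return best

# Toy mass balance F1 = F2 + F3: measuring any two flows makes the
# third observable by difference (sensor costs are illustrative).
costs = {"F1": 5, "F2": 3, "F3": 4}
obs = lambda measured: len(measured) >= 2
assert min_cost_sensor_set(list(costs), costs, obs) == (7, {"F2", "F3"})
```

A real MILP formulation replaces the enumeration with binary selection variables and encodes observability as linear constraints, which is what makes the approach scale beyond toy networks.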

  3. Product quality driven food process design

    OpenAIRE

    Hadiyanto, M.

    2007-01-01

Consumers evaluate food products on their quality, and thus product quality is a main target in industrial food production. In the last decade there has been a remarkable increase in the food industry's interest in putting food product quality at the centre of innovation. However, quality itself is seldom considered as a starting point for the design of production systems. The objective of this thesis is to advance food process innovation by procedures for food process design which start from the ...

  4. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    Science.gov (United States)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

The design and implementation of interface units for high-speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high-speed communications have emerged. The approach taken to the design of a high-speed network interface unit was to implement packet-processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.
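The abstract gives no internals of the buffer management unit, but the bookkeeping such a unit performs can be sketched in software as a bounded ring buffer. This is a hypothetical illustration of the data structure, not the VLSI design from the paper:

```python
class RingBuffer:
    """Software sketch of buffer-management bookkeeping: fixed storage,
    head/tail indices that wrap around, and an occupancy count that
    distinguishes the full and empty states."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = self.tail = self.count = 0

    def put(self, item):
        """Enqueue one item; a full buffer refuses it, the software
        analogue of hardware asserting backpressure."""
        if self.count == len(self.buf):
            return False
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)
        self.count += 1
        return True

    def get(self):
        """Dequeue the oldest item, or None if the buffer is empty."""
        if self.count == 0:
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return item

rb = RingBuffer(2)
assert rb.put("pkt1") and rb.put("pkt2") and not rb.put("pkt3")  # full at 2
assert rb.get() == "pkt1" and rb.put("pkt3") and rb.get() == "pkt2"
```

In hardware the same head, tail, and count registers are updated in one clock cycle, and the full/empty comparisons become flag outputs toward the MAC and host interfaces.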

  5. The Engineering Process in Construction & Design

    Science.gov (United States)

    Stoner, Melissa A.; Stuby, Kristin T.; Szczepanski, Susan

    2013-01-01

    Recent research suggests that high-impact activities in science and math classes promote positive attitudinal shifts in students. By implementing high-impact activities, such as designing a school and a skate park, mathematical thinking can be linked to the engineering design process. This hands-on approach, when possible, to demonstrate or…

  6. SOLVING GLOBAL PROBLEMS USING COLLABORATIVE DESIGN PROCESSES

    DEFF Research Database (Denmark)

    Lenau, Torben Anker; Mejborn, Christina Okai

    2011-01-01

In this paper we argue that the use of collaborative design processes is a powerful means of bringing together different stakeholders and generating ideas in complex design situations. The collaborative design process was used in a workshop with international participants where the goal was to propose...... forward proposed solutions for how to design, brand and make business models for how to solve aspects of the sanitation problem. The workshop showed that it was possible to work freely with such a taboo topic and that in particular the use of visualisation tools, i.e. drawing posters and building simple...... physical models, strongly enhanced mutual understanding and exchange of ideas. Furthermore, the introduction of biological solution analogies also proved fruitful for the generation of new ideas for product design.

  7. Command decoder unit. [performance tests of data processing terminals and data converters for space shuttle orbiters

    Science.gov (United States)

    1976-01-01

The design and testing of laboratory hardware (a command decoder unit) used in evaluating space shuttle instrumentation, data processing, and ground check-out operations are described. The hardware was a modification of another similar instrumentation system. A data bus coupler was designed and tested to interface the equipment to a central bus controller (computer). A serial digital data transfer mechanism was also designed. Redundant power supplies and overhead modules were provided to minimize the probability of a single component failure causing a catastrophic failure. The command decoder unit is packaged in a modular configuration to allow maximum user flexibility in configuring a system. Test procedures and special test equipment for use in testing the hardware are described. Results indicate that the unit will allow NASA to evaluate future software systems for use in space shuttles. The units were delivered to NASA and appear to be adequately performing their intended function. Engineering sketches and photographs of the command decoder unit are included.

  8. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    Shumm, D.; Turetken, O.; Kokash, N.; Elgammal, A.; Leymann, F.; Heuvel, J. van den

    2010-01-01

Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as the Sarbanes-Oxley Act or ISO 17799.

  9. Design for embedded image processing on FPGAs

    CERN Document Server

    Bailey, Donald G

    2011-01-01

    "Introductory material will consider the problem of embedded image processing, and how some of the issues may be solved using parallel hardware solutions. Field programmable gate arrays (FPGAs) are introduced as a technology that provides flexible, fine-grained hardware that can readily exploit parallelism within many image processing algorithms. A brief review of FPGA programming languages provides the link between a software mindset normally associated with image processing algorithms, and the hardware mindset required for efficient utilization of a parallel hardware design. The bulk of the book will focus on the design process, and in particular how designing an FPGA implementation differs from a conventional software implementation. Particular attention is given to the techniques for mapping an algorithm onto an FPGA implementation, considering timing, memory bandwidth and resource constraints, and efficient hardware computational techniques. Extensive coverage will be given of a range of image processing...

  10. Industrial best practices of conceptual process design

    NARCIS (Netherlands)

    Harmsen, G.J.

    2004-01-01

    The chemical process industry aims particularly at energy, capital expenditure and variable feedstock cost savings due to fierce global competition, the Kyoto Protocol and requirements for sustainable development. Increasingly conceptual process design methods are used in the industry to achieve the

  11. Biocatalytic Process Design and Reaction Engineering

    Directory of Open Access Journals (Sweden)

    R. Wohlgemuth

    2017-07-01

    Full Text Available Biocatalytic processes occurring in nature provide a wealth of inspiration for manufacturing processes with high molecular economy. The molecular and engineering aspects of bioprocesses converting available raw materials into valuable products are therefore of much industrial interest. Modular reaction platforms and straightforward working paths, from the fundamental understanding of biocatalytic systems in nature to the design and reaction engineering of novel biocatalytic processes, have been important for shortening development times. Building on broadly applicable reaction platforms and tools for designing biocatalytic processes and their reaction engineering are key success factors. Process integration and intensification aspects are illustrated with biocatalytic processes to numerous small-molecular weight compounds, which have been prepared by novel and highly selective routes, for applications in the life sciences and biomedical sciences.

  12. Integrating ergonomic knowledge into engineering design processes

    DEFF Research Database (Denmark)

    Hall-Andersen, Lene Bjerg

Integrating ergonomic knowledge into engineering design processes has been shown to contribute to healthy and effective designs of workplaces. However, it is also well recognized that, in practice, ergonomists often have difficulties gaining access to and impacting engineering design processes....... This PhD dissertation takes its point of departure in a recent development in Denmark in which many larger engineering consultancies chose to establish ergonomic departments in-house. In the ergonomic profession, this development was seen as a major opportunity to gain access to early design phases....... The present study contributes new perspectives on possibilities and barriers for integrating ergonomic knowledge in design by exploring the integration activities under new conditions. A case study in an engineering consultancy in Denmark was carried out. A total of 23 persons were interviewed...

  13. Planar Inlet Design and Analysis Process (PINDAP)

    Science.gov (United States)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.

  14. Occupational safety in the fusion design process

    Energy Technology Data Exchange (ETDEWEB)

    Moshonas, K. E-mail: kmoshonas@sympatico.ca; Langman, V.J

    2001-04-01

The radiological hazards associated with the operation and maintenance of fusion machines are cause for safety and regulatory concern. Current experience in the nuclear industry and at operating tokamaks confirms that a high level of occupational safety can be achieved through an effective planning process. For fusion facilities with increased hazard levels, resulting from the introduction of large quantities of tritium and higher neutron flux and fluence, a process must be implemented during the design phase to address both worker safety and regulatory requirements. Such a process has been developed and was used for the radiological occupational safety assessment of the International Thermonuclear Experimental Reactor (ITER). The purpose of this paper is to describe the approach used, including the implementation of the as low as reasonably achievable (ALARA) principle for individual and collective doses in an evolving design, and the demonstration of adequate radiological occupational safety during the design process.

  15. Optimization Approaches for Designing Quantum Reversible Arithmetic Logic Unit

    Science.gov (United States)

    Haghparast, Majid; Bolhassani, Ali

    2016-03-01

    Reversible logic has emerged in recent years as a promising alternative for low-power design and quantum computation because of its ability to reduce power dissipation, an important concern in low-power VLSI and ULSI design. Many contributions have been made in the literature towards reversible implementations of arithmetic and logical structures; however, few efforts have been directed towards efficient approaches for designing a reversible Arithmetic Logic Unit (ALU). In this study, three efficient approaches are presented and their use in the design of reversible ALUs is demonstrated. Three new designs of a reversible one-digit arithmetic logic unit for quantum arithmetic are presented. The paper provides explicit constructions of reversible ALUs effecting basic arithmetic operations with respect to the minimization of cost metrics. Architectures are proposed in which each block is realized using elementary quantum logic gates. Reversible implementations of the proposed designs are then analyzed and evaluated. The results demonstrate that the proposed designs are cost-effective compared with existing counterparts. All designs are evaluated at the nanometric scale.
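    As an illustration of the reversibility and quantum-cost bookkeeping such designs are judged by, here is a minimal Python sketch (not the authors' ALU designs; the per-gate costs follow the common convention in the reversible-logic literature and are assumed here):

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT): flips target c iff both controls are 1."""
    return a, b, c ^ (a & b)

def fredkin(a, b, c):
    """Fredkin (CSWAP): swaps b and c iff control a is 1."""
    return (a, c, b) if a else (a, b, c)

# Reversibility: each gate is a bijection on the 8 possible input states.
states = list(product((0, 1), repeat=3))
assert len({toffoli(*s) for s in states}) == 8
assert len({fredkin(*s) for s in states}) == 8

# Both gates are self-inverse, so a circuit can be run backwards for free.
for s in states:
    assert toffoli(*toffoli(*s)) == s
    assert fredkin(*fredkin(*s)) == s

# Quantum cost of a gate cascade, using the common (assumed) convention
# that a Toffoli/Fredkin costs 5 elementary gates and a CNOT/NOT costs 1.
COST = {"toffoli": 5, "fredkin": 5, "cnot": 1, "not": 1}
circuit = ["cnot", "toffoli", "not"]
print(sum(COST[g] for g in circuit))  # -> 7
```

    Minimizing exactly this kind of cost sum over equivalent circuits is what "minimization of cost metrics" refers to.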

  16. Design of the Acoustic Signal Receiving Unit of Acoustic Telemetry While Drilling

    Directory of Open Access Journals (Sweden)

    Li Zhigang

    2016-01-01

    The signal receiving unit is one of the core units of an acoustic telemetry system. A new type of acoustic signal receiving unit is designed to solve the problems of existing devices. The unit as a whole is a short joint. It can receive all the acoustic signals transmitted along the drill string without losing any signal, and it introduces no additional vibration or interference. In addition, the structure of the amplitude transformer is designed to amplify the signal amplitude and improve receiving efficiency. The design of the wireless communication module allows the whole device to be used during normal drilling while the drill string is rotating, so it does not interfere with the normal drilling operation.

  17. Adaptive-optics Optical Coherence Tomography Processing Using a Graphics Processing Unit*

    Science.gov (United States)

    Shafer, Brandon A.; Kriske, Jeffery E.; Kocaoglu, Omer P.; Turner, Timothy L.; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T.

    2015-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super-high-resolution technology closer to clinical viability. PMID:25570838

  18. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    Science.gov (United States)

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super-high-resolution technology closer to clinical viability.
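    The per-A-scan spectral transform that such GPU pipelines parallelize can be sketched in plain Python (an illustrative toy, not the authors' AOOCT pipeline; a real implementation would run batched FFTs on the GPU, one thread block per A-scan):

```python
import cmath, math

def a_scan_depth(samples):
    """Recover the dominant reflector's depth (as a frequency bin) from one
    spectral-domain OCT A-scan via a discrete Fourier transform, the step
    GPU pipelines repeat across thousands of A-scans per frame."""
    n = len(samples)
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    return max(range(1, n // 2), key=spectrum.__getitem__)  # skip DC bin

# Synthetic interferogram: DC offset plus a fringe at bin 7 (one reflector).
n = 64
scan = [1.0 + math.cos(2 * math.pi * 7 * t / n) for t in range(n)]
print(a_scan_depth(scan))  # -> 7
```

    Because every A-scan is independent, the work maps naturally onto the GPU's parallel threads, which is where the real-time speedup comes from.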

  19. On the Process of Software Design

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2008-01-01

    Software design is a complex undertaking. This study delineates and analyses three major constituents of this complexity: the formative element entailed in articulating and reaching closure on a design, the progress imperative entailed in making estimates and tracking status, and the collaboration...... challenge entailed in learning within and across projects. Empirical data from two small to medium-size projects illustrate how practicing software designers struggle with the complexity induced by these constituents and suggest implications for user-centred design. These implications concern collaborative...... grounding, long-loop learning, and the need for a more managed design process while acknowledging that methods are not an alternative to the project knowledge created, negotiated, and refined by designers. Specifically, insufficient collaborative grounding will cause project knowledge to gradually...

  20. PORFLOW Simulations Supporting Saltstone Disposal Unit Design Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Taylor, G. A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-12-10

    SRNL was requested by SRR to perform PORFLOW simulations to support potential cost-saving design modifications to future Saltstone Disposal Units in Z-Area (SRR-CWDA-2015-00120). The design sensitivity cases are defined in a modeling input specification document SRR-CWDA-2015-00133 Rev. 1. A high-level description of PORFLOW modeling and interpretation of results are provided in SRR-CWDA-2015-00169. The present report focuses on underlying technical issues and details of PORFLOW modeling not addressed by the input specification and results interpretation documents. Design checking of PORFLOW modeling is documented in SRNL-L3200-2015-00146.

  1. Investigation of a design performance measurement tool for improving collaborative design during a design process

    OpenAIRE

    Yin, Yuanyuan

    2009-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. With rapid growth of global competition, the design process is becoming more and more complex due largely to cross-functional team collaboration, dynamic design processes, and unpredictable design outcomes. Thus, it is becoming progressively more difficult to support and improve design activities effectively during a design process, especially from a collaboration perspective. Although a grea...

  2. Unit Operations for the Food Industry: Equilibrium Processes & Mechanical Operations

    OpenAIRE

    Guiné, Raquel

    2013-01-01

    Unit operations are an area of engineering that is at once fascinating and essential for industry in general and the food industry in particular. This book was prepared to serve academic and practical perspectives simultaneously. It is organized into two parts: unit operations based on equilibrium processes, and mechanical operations. Each topic starts with a presentation of the fundamental concepts and principles, followed by a discussion of ...

  3. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction peculiarities is carried out, and an application of the method of finding the power of a linear logical transformation for removing characteristic words of a dictionary entry is given. Analysis and perspectives of the results of the study are provided.

  4. Improved simulation design factors for unconventional crude vacuum units : cracked gas make and stripping section performance

    Energy Technology Data Exchange (ETDEWEB)

    Remesat, D. [Koch-Glitsch Canada LP, Calgary, AB (Canada)

    2008-10-15

    Operating data for unconventional heavy oil vacuum crude units were reviewed in order to optimize the design of vacuum columns. Operational data from heavy crude vacuum units operating with stripping and velocity were used to investigate the application of a proven vacuum distillation tower simulation topology designed for use with heavy oil and bitumen upgrader feeds. Design factors included a characterization of the crude oils or bitumens processed in the facility; the selection of thermodynamic models; and the non-equilibrium simulation topology. Amounts of generated cracked gas were calculated, and entrainment and stripping section performance was evaluated. Heater designs for ensuring the even distribution of heat flux were discussed. Data sets from vacuum units processing crude oils demonstrated that the amount of offgas flow increased as the transfer line temperature increased. The resulting instability caused increased coke generation and light hydrocarbon formation. Results also indicated that overhead vacuum ejector design and size as well as heat transfer capabilities of quench and pumparound zones must be considered when designing vacuum column units. Steam stripping lowered hydrocarbon partial pressure to allow materials to boil at lower temperatures. It was concluded that setting appropriate entrainment values will ensure the accuracy of sensitivity analyses for transfer line designs, inlet feed devices, and wash bed configurations. 9 refs., figs.
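    The stated effect of steam stripping, lowering the hydrocarbon partial pressure so material boils at a lower temperature, follows from Dalton's law and can be checked with a tiny sketch (the column pressure and steam fractions are assumed values, not from the paper):

```python
# By Dalton's law, the hydrocarbon partial pressure is its vapor mole
# fraction times the total pressure, so injected steam dilutes the vapor.
total_p_kpa = 8.0  # assumed vacuum-column operating pressure, kPa

for steam_mole_frac in (0.0, 0.5):
    hc_partial_kpa = (1.0 - steam_mole_frac) * total_p_kpa
    print(steam_mole_frac, hc_partial_kpa)  # 0.0 -> 8.0, 0.5 -> 4.0
```

    A cut boils once its vapor pressure reaches the hydrocarbon partial pressure; halving that target pressure lets the same cut boil at a lower temperature, which is the point of the stripping section.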

  5. Capturing Creativity in Collaborative Design Processes

    DEFF Research Database (Denmark)

    Pedersen, J. U.; Onarheim, Balder

    2015-01-01

    This paper is concerned with the question of how we can capture creativity in collaborative design processes consisting of two or more individuals collaborating in the process of producing innovative outputs. Traditionally, methods for detecting creativity are focused on the cognitive and mental...... processes of the solitary individual. A new framework for studying and capturing creativity, which goes beyond individual cognitive processes by examining the applied creative process of individuals in context, is proposed. We apply a context sensitive framework that embraces the creative collaborative...... process and present the process in a visual overview with the use of a visual language of symbols. The framework, entitled C3, Capturing Creativity in Context, is presented and subsequently evaluated based on a pilot study utilizing C3. Here it was found that the framework was particularly useful...

  6. Systematic Sustainable Process Design and Analysis of Biodiesel Processes

    Directory of Open Access Journals (Sweden)

    Seyed Soheil Mansouri

    2013-09-01

    Biodiesel is a promising fuel alternative compared to traditional diesel obtained from conventional sources such as fossil fuel. Many flowsheet alternatives exist for the production of biodiesel and therefore it is necessary to evaluate these alternatives using defined criteria and also from process intensification opportunities. This work focuses on three main aspects that have been incorporated into a systematic computer-aided framework for sustainable process design. First, the creation of a generic superstructure, which consists of all possible process alternatives based on available technology. Second, the evaluation of this superstructure for systematic screening to obtain an appropriate base case design. This is done by first reducing the search space using a sustainability analysis, which provides key indicators for process bottlenecks of different flowsheet configurations and then by further reducing the search space by using economic evaluation and life cycle assessment. Third, the determination of sustainable design with/without process intensification using a phenomena-based synthesis/design method. A detailed step by step application of the framework is highlighted through a biodiesel production case study.
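    The staged screening of the superstructure can be mimicked with a toy sketch (the alternative names, indicators, and scores below are invented for illustration and are not from the case study):

```python
# Hypothetical flowsheet alternatives with invented screening attributes.
alternatives = {
    "alkali-catalyzed": {"bottleneck_free": True,  "cost": 0.62, "lca": 0.40},
    "acid-catalyzed":   {"bottleneck_free": True,  "cost": 0.75, "lca": 0.55},
    "supercritical":    {"bottleneck_free": False, "cost": 0.90, "lca": 0.35},
}

# Stage 1: sustainability analysis drops alternatives with process bottlenecks.
feasible = {k: v for k, v in alternatives.items() if v["bottleneck_free"]}

# Stage 2: rank survivors by combined economic and life-cycle scores
# (lower is better) to pick the base case design.
base_case = min(feasible, key=lambda k: feasible[k]["cost"] + feasible[k]["lca"])
print(base_case)  # -> alkali-catalyzed
```

    The real framework uses quantitative sustainability indicators, economic evaluation, and LCA at these stages; the sketch only shows the search-space-reduction pattern.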

  7. Design To Manufacturing Process:Bumpy Road?

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Integration between design and manufacturing is one of the topics that normally draws a lot of discussion in the product development and PLM space. Supporting this process is becoming more and more important in a modern enterprise manufacturing organization. You can ask me why? Let me put it simply: this is one of the most important processes that can drive cost optimization in companies. Everything a company makes needs to be first designed and later manufactured. If this process breaks, nothing can help.

  8. SIMULATION IN THERMAL DESIGN FOR ELECTRONIC CONTROL UNIT OF ELECTRONIC UNIT PUMP

    Institute of Scientific and Technical Information of China (English)

    XU Quankui; ZHU Keqing; ZHUO Bin; MAO Xiaojian; WANG Junxi

    2008-01-01

    The high working junction temperature of a power component is the most common reason for its failure, so thermal design is of vital importance in electronic control unit (ECU) design. By means of circuit simulation, the thermal design of the ECU for an electronic unit pump (EUP) fuel system is carried out. The power dissipation model of each power component in the ECU is created and simulated. Based on the simulation results, the factors that affect the power dissipation of components are analyzed, and ways of reducing the power dissipation of power components are derived. The power dissipation of power components at different engine states is calculated and analyzed. The maximal power dissipation of each power component over all possible engine states is also determined from these simulations. A cooling system is designed based on these studies. Tests show that the maximum total power dissipation of the ECU drops from 43.2 W to 33.84 W after these simulations and optimizations. These applications of simulation in the thermal design of an ECU can greatly increase the quality of the design, save design cost and shorten design time.
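    The basic junction-temperature check behind this kind of thermal design can be sketched as follows (all component values are hypothetical, not taken from the paper):

```python
# Hypothetical power MOSFET in an ECU driver stage (all values assumed).
p_dissipated_w = 2.5      # conduction + switching losses, W
t_ambient_c = 85.0        # under-hood ambient temperature, deg C
t_junction_max_c = 150.0  # device junction-temperature rating, deg C

# Steady-state check: Tj = Ta + P * Rth (junction-to-ambient).
for r_th_k_per_w in (40.0, 20.0):  # bare package vs. with a heat sink
    t_junction = t_ambient_c + p_dissipated_w * r_th_k_per_w
    print(r_th_k_per_w, t_junction, t_junction <= t_junction_max_c)
# 40 K/W gives 185 C (fails the rating); 20 K/W gives 135 C (passes).
```

    This is why the paper attacks the problem from both sides: lowering the dissipated power P and adding a cooling system that lowers the effective Rth.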

  9. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  10. Sedimentation process and design of settling systems

    CERN Document Server

    De, Alak

    2017-01-01

    This book is designed to serve as a comprehensive source of information on sedimentation processes and the design of settling systems, especially as applied to such systems in civil and environmental engineering. The book begins with an introduction to sedimentation as a whole and goes on to cover the development and details of various settling theories. It traces the chronological development of the comprehensive knowledge of settling studies and the design of settling systems from 1889. A new concept, the 'Velocity Profile Theorem', a tool for settling problem analysis, has been applied to the analysis of the phenomenon of short circuiting. A complete theory of tube settling has been developed and its application to the computation of residual solids from assorted solids has been demonstrated. Experimental verification of the tube settling theory is also presented. A field-oriented, compatible design and operation methodology for settling systems has been developed from the detailed...

  11. Process design for Al backside contacts

    Energy Technology Data Exchange (ETDEWEB)

    Chalfoun, L.L.; Kimerling, L.C. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1995-08-01

    It is known that properly alloyed aluminum backside contacts can improve silicon solar cell efficiency. To use this knowledge to fullest advantage, we have studied the gettering process that occurs during contact formation and the microstructure of the contact and backside junction region. With an understanding of the alloying step, optimized fabrication processes can be designed. To study gettering, single crystal silicon wafers were coated with aluminum on both sides and subjected to heat treatments. Results are described.

  12. Integration of process design and controller design for chemical processes using model-based methodology

    DEFF Research Database (Denmark)

    Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan; Gani, Rafiqul

    2010-01-01

    In this paper, a novel systematic model-based methodology for performing integrated process design and controller design (IPDC) for chemical processes is presented. The methodology uses a decomposition method to solve the IPDC typically formulated as a mathematical programming (optimization...... with constraints) problem. Accordingly the optimization problem is decomposed into four sub-problems: (i) pre-analysis, (ii) design analysis, (iii) controller design analysis, and (iv) final selection and verification, which are relatively easier to solve. The methodology makes use of thermodynamic-process...... insights and the reverse design approach to arrive at the final process design–controller design decisions. The developed methodology is illustrated through the design of: (a) a single reactor, (b) a single separator, and (c) a reactor–separator-recycle system and shown to provide effective solutions...

  13. Model-Based Integrated Process Design and Controller Design of Chemical Processes

    DEFF Research Database (Denmark)

    Abd Hamid, Mohd Kamaruddin Bin

    This thesis describes the development and application of a new systematic modelbased methodology for performing integrated process design and controller design (IPDC) of chemical processes. The new methodology is simple to apply, easy to visualize and efficient to solve. Here, the IPDC problem...... and verification. Using thermodynamic and process insights, a bounded search space is first identified. This feasible solution space is further reduced to satisfy the process design and controller design constraints in sub-problems 2 and 3, respectively, until in the final sub-problem all feasible candidates...... are ordered according to the defined performance criteria (objective function). The final selected design is then verified through rigorous simulation. In the pre-analysis sub-problem, the concepts of attainable region and driving force are used to locate the optimal process-controller design solution...

  14. High Power Silicon Carbide (SiC) Power Processing Unit Development

    Science.gov (United States)

    Scheidegger, Robert J.; Santiago, Walter; Bozak, Karin E.; Pinero, Luis R.; Birchenough, Arthur G.

    2015-01-01

    NASA GRC successfully designed, built and tested a technology-push power processing unit for electric propulsion applications that utilizes high voltage silicon carbide (SiC) technology. The development specifically addresses the need for high power electronics to enable electric propulsion systems in the 100s of kilowatts. This unit demonstrated how high voltage combined with superior semiconductor components resulted in exceptional converter performance.

  15. Batch process. Optimum designing and operation of a batch process; Bacchi purosesu

    Energy Technology Data Exchange (ETDEWEB)

    Hasebe, S. [Kyoto Univ. (Japan). Faculty of Engineering

    1997-09-05

    Since the control of a batch process is dynamic, the process must be handled differently from a continuous process in its design, operation and control. This paper describes the characteristics of a batch process and the problems to be solved from those three points of view. A major problem of a batch process is the difficulty of design. In a batch process, the amount of product that can be manufactured per unit time by each apparatus differs from that of the whole plant formed by combining the apparatuses, so time and apparatus capacity are wasted in some cases. The actual design of a batch process also involves factors that are not captured in the formulation of mathematical programming problems, such as the seasonal fluctuation of demand for products, the possibility of future apparatus expansion, the ease of controlling the process, and the shipment of products during consecutive holidays and periodic maintenance. Regarding the optimum operation and control of a batch process, the formation of a dynamic optimum operation pattern and the verification of the sequence control system are described. 9 refs., 4 figs.
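    The gap between per-apparatus and whole-plant production rates can be shown with a small sketch (stage times and batch size are assumed values for illustration):

```python
# Time for one batch to pass through each apparatus, hours (assumed values).
stage_times = {"reactor": 4.0, "crystallizer": 6.0, "dryer": 3.0}
batch_size_t = 2.0  # tonnes per batch

# Each apparatus on its own could sustain batch_size / its own cycle time...
unit_rates = {u: batch_size_t / t for u, t in stage_times.items()}

# ...but with overlapping batches the whole plant is paced by the slowest
# stage, so faster apparatuses sit idle part of the time.
plant_rate = batch_size_t / max(stage_times.values())

print(unit_rates["dryer"], plant_rate)  # dryer alone: 2/3 t/h; plant: 1/3 t/h
```

    The idle capacity of the reactor and dryer here is exactly the "wasted time and apparatus capacity" the abstract mentions.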

  16. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Background: The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results: This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high-end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion: MUMmerGPU is a low-cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
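    The many-queries-against-one-reference matching pattern can be sketched with a simple k-mer index standing in for MUMmerGPU's suffix tree (illustrative only; the sequences below are made up, and the real program builds a suffix tree on the GPU with CUDA):

```python
from collections import defaultdict

def build_index(reference, k):
    """Index every k-mer of the reference; this dict plays the role of
    the suffix tree MUMmerGPU stores on the graphics card."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def align_reads(reads, index, k):
    """Exact-match each read's leading k-mer against the reference.
    On the GPU, each read is handled by its own thread, which is where
    the reported speedup over a serial CPU loop comes from."""
    return {r: index.get(r[:k], []) for r in reads}

ref = "AGCTTAGCTAGGCTA"  # toy reference sequence
idx = build_index(ref, 4)
print(align_reads(["AGCT", "GGCT", "TTTT"], idx, 4))
# -> {'AGCT': [0, 5], 'GGCT': [10], 'TTTT': []}
```

    Building the index once and querying it many times in parallel is the core structure; the suffix tree additionally supports matches of any length, not just fixed k.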

  17. Designer networks for time series processing

    DEFF Research Database (Denmark)

    Svarer, C; Hansen, Lars Kai; Larsen, Jan

    1993-01-01

    The conventional tapped-delay neural net may be analyzed using statistical methods and the results of such analysis can be applied to model optimization. The authors review and extend efforts to demonstrate the power of this strategy within time series processing. They attempt to design compact...

  18. Dynamic Process Simulation for Analysis and Design.

    Science.gov (United States)

    Nuttall, Herbert E., Jr.; Himmelblau, David M.

    A computer program for the simulation of complex continuous process in real-time in an interactive mode is described. The program is user oriented, flexible, and provides both numerical and graphic output. The program has been used in classroom teaching and computer aided design. Typical input and output are illustrated for a sample problem to…

  19. Flexible Processing and the Design of Grammar

    Science.gov (United States)

    Sag, Ivan A.; Wasow, Thomas

    2015-01-01

    We explore the consequences of letting the incremental and integrative nature of language processing inform the design of competence grammar. What emerges is a view of grammar as a system of local monotonic constraints that provide a direct characterization of the signs (the form-meaning correspondences) of a given language. This…

  20. Design and simulation of an activated sludge unit associated to a continuous reactor to remove heavy metals

    Energy Technology Data Exchange (ETDEWEB)

    D'Avila, J.S.; Nascimento, R.R. [Ambientec Consultoria Ltda., Aracaju, SE (Brazil)

    1993-12-31

    A software package was developed to design and simulate an activated sludge unit associated with a new technology for removing heavy metals from wastewater. In this process, a continuous high-efficiency biphasic reactor operates using particles of activated peat in conjunction with the sludge unit. The results obtained may be useful for increasing the efficiency or reducing the design and operational costs of an activated sludge unit. (author). 5 refs., 2 tabs.

  1. Assessment and Development of Engineering Design Processes

    DEFF Research Database (Denmark)

    Ulrikkeholm, Jeppe Bjerrum

    Many engineering companies are currently facing a significant challenge as they are experiencing increasing demands from their customers for delivery of customised products that have almost the same delivery time, price and quality as mass-produced products. In order to comply with this development, the engineering companies need to have efficient engineering design processes in place, so they can design customised product variants faster and more efficiently. It is however not an easy task to model and develop such processes. To conduct engineering design is often a highly iterative, ill-defined and complex... The thesis at hand is based on six scientific articles. Three of the articles are written and presented at scientific conferences whereas the remaining three are submitted to scientific journals. The results of the six papers constitute the main contribution...

  2. Interface design in the process industries

    Science.gov (United States)

    Beaverstock, M. C.; Stassen, H. G.; Williamson, R. A.

    1977-01-01

    Every operator runs his plant in accord with his own mental model of the process. In this sense, one characteristic of an ideal man-machine interface is that it be in harmony with that model. With this theme in mind, the paper first reviews the functions of the process operator and compares them with human operators involved in control situations previously studied outside the industrial environment (pilots, air traffic controllers, helmsmen, etc.). A brief history of the operator interface in the process industry and the traditional methodology employed in its design is then presented. Finally, a much more fundamental approach utilizing a model definition of the human operator's behavior is presented.

  3. Design of the optical fiber unit for industrial equipment

    Science.gov (United States)

    Fedosov, Yuri V.; Romanova, Galina E.; Afanasev, Maxim Ya.

    2017-05-01

    Optical systems with UV lasers are widely used in various areas of manufacturing for processing materials. To provide high operation accuracy, equipment with a high-power UV laser source requires a complicated optical and mechanical unit with electronic control blocks. In order to develop successful and stable equipment and to increase accuracy, the development of each part (optical, mechanical, and electronic) requires solving many complex engineering problems. In this article the special features of the development of an optical unit with a high-power UV laser source are considered, along with some problems and solutions.

  4. Architectural design of heterogeneous metallic nanocrystals--principles and processes.

    Science.gov (United States)

    Yu, Yue; Zhang, Qingbo; Yao, Qiaofeng; Xie, Jianping; Lee, Jim Yang

    2014-12-16

    CONSPECTUS: Heterogeneous metal nanocrystals (HMNCs) are a natural extension of simple metal nanocrystals (NCs), but as a research topic, they have been much less explored until recently. HMNCs are formed by integrating metal NCs of different compositions into a common entity, similar to the way atoms are bonded to form molecules. HMNCs can be built to exhibit an unprecedented architectural diversity and complexity by programming the arrangement of the NC building blocks ("unit NCs"). The architectural engineering of HMNCs involves the design and fabrication of the architecture-determining elements (ADEs), i.e., unit NCs with precise control of shape and size, and their relative positions in the design. Similar to molecular engineering, where structural diversity is used to create more property variations for application explorations, the architectural engineering of HMNCs can similarly increase the utility of metal NCs by offering a suite of properties to support multifunctionality in applications. The architectural engineering of HMNCs calls for processes and operations that can execute the design. Some enabling technologies already exist in the form of classical micro- and macroscale fabrication techniques, such as masking and etching. These processes, when used singly or in combination, are fully capable of fabricating nanoscopic objects. What is needed is a detailed understanding of the engineering control of ADEs and the translation of these principles into actual processes. For simplicity of execution, these processes should be integrated into a common reaction system and yet retain independence of control. The key to architectural diversity is therefore the independent controllability of each ADE in the design blueprint. The right chemical tools must be applied under the right circumstances in order to achieve the desired outcome. In this Account, after a short illustration of the infinite possibility of combining different ADEs to create HMNC design

  5. Electronic Unit Pump Diesel Engine Control Unit Design for Integrated Powertrain System

    Institute of Scientific and Technical Information of China (English)

    LIU Bo-lan; ZHAO Chang-lu; ZHANG Fu-jun; HUANG Ying

    2005-01-01

    The performance of the electronic unit pump (EUP) diesel engine, which will be used in an integrated powertrain with multiple controllable parameters, is studied through both theoretical analysis and experimental research. On this basis, a control unit for fuel quantity and injection timing in the crankshaft domain is designed, and engine experiments have been carried out. For the constant-speed, camshaft-driven EUP system, the fuel quantity increases as the supply angle goes up, while the injection timing has no effect. The control precision can reach 1°CA. The full injection timing MAP and engine peak performance curves were obtained successfully.

  6. Using Loop Heat Pipes to Minimize Survival Heater Power for NASA's Evolutionary Xenon Thruster Power Processing Units

    Science.gov (United States)

    Choi, Michael K.

    2017-01-01

    A thermal design concept of using propylene loop heat pipes to minimize survival heater power for NASA's Evolutionary Xenon Thruster power processing units is presented. It reduces the survival heater power from 183 W to 35 W per power processing unit. The reduction is 81%.
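    The quoted 81% figure follows directly from the two power levels:

```python
# Survival heater power per power processing unit, from the abstract.
before_w, after_w = 183.0, 35.0

reduction = (before_w - after_w) / before_w
print(f"{reduction:.1%}")  # -> 80.9%, the abstract's ~81%
```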

  7. Determinants of profitability of smallholder palm oil processing units ...

    African Journals Online (AJOL)

    ... of profitability of smallholder palm oil processing units in Ogun state, Nigeria. ... as well as their geographical spread covering the entire land space of the state. ... The F-ratio value is statistically significant (P<0.01) implying that the model is ...

  8. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate...... the performance of an implementation and demonstrate that while there are some implementational pitfalls, a careful implementation can result in impressive improvements....

  9. Utilizing Graphics Processing Units for Network Anomaly Detection

    Science.gov (United States)

    2012-09-13

    matching system using deterministic finite automata and extended finite automata resulting in a speedup of 9x over the CPU implementation [SGO09]. Kovach ...pages 14–18, 2009. [Kov10] Nicholas S. Kovach . Accelerating malware detection via a graphics processing unit, 2010. http://www.dtic.mil/dtic/tr

  10. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2010-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the Graphics Processing Unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expansion...

  11. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2014-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the graphics processing unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expansion...
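
    The Fourier cosine (COS) expansion underlying both records can be sketched in a few lines on the CPU. The sketch below prices a single European call under Black-Scholes dynamics; the function name, the truncation interval [a, b], and the term count N are our illustrative choices and say nothing about the authors' GPU implementation:

```python
import numpy as np

def cos_call_price(S0, K, T, r, sigma, N=256, L=10.0):
    """Price a European call with the Fourier cosine (COS) expansion
    under Black-Scholes dynamics (illustrative CPU sketch)."""
    x = np.log(S0 / K)                                  # log-moneyness
    mu = (r - 0.5 * sigma ** 2) * T                     # drift of y = ln(S_T/K)
    var = sigma ** 2 * T
    a = x + mu - L * np.sqrt(var)                       # series truncation range
    b = x + mu + L * np.sqrt(var)
    u = np.arange(N) * np.pi / (b - a)
    # Characteristic function of y = ln(S_T/K) given x
    phi = np.exp(1j * u * (x + mu) - 0.5 * var * u ** 2)
    # Cosine coefficients of the call payoff K*(e^y - 1)^+ on [0, b]
    c, d = 0.0, b
    chi = (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
           + u * (np.sin(u * (d - a)) * np.exp(d)
                  - np.sin(u * (c - a)) * np.exp(c))) / (1.0 + u ** 2)
    psi = np.empty(N)
    psi[0] = d - c
    psi[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
    V = 2.0 / (b - a) * K * (chi - psi)
    terms = np.real(phi * np.exp(-1j * u * a)) * V
    terms[0] *= 0.5                                     # first series term is halved
    return np.exp(-r * T) * terms.sum()
```

    The N series terms are mutually independent, which is precisely what makes the method attractive for the GPU parallelization these records report; pricing for multiple strikes reuses the same machinery with different K.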

  12. Fast Pyrolysis Process Development Unit for Validating Bench Scale Data

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab., Center for Sustainable Environmental Technologies, Bioeconomy Inst.]; Jones, Samuel T. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab., Center for Sustainable Environmental Technologies, Bioeconomy Inst.]

    2010-03-31

    The purpose of this project was to prepare and operate a fast pyrolysis process development unit (PDU) that can validate experimental data generated at the bench scale. In order to do this, a biomass preparation system, a modular fast pyrolysis fluidized bed reactor, modular gas clean-up systems, and modular bio-oil recovery systems were designed and constructed. Instrumentation for centralized data collection and process control were integrated. The bio-oil analysis laboratory was upgraded with the addition of analytical equipment needed to measure C, H, O, N, S, P, K, and Cl. To provide a consistent material for processing through the fluidized bed fast pyrolysis reactor, the existing biomass preparation capabilities of the ISU facility needed to be upgraded. A stationary grinder was installed to reduce biomass from bale form to 5-10 cm lengths. A 25 kg/hr rotary kiln drier was installed. It has the ability to lower moisture content to the desired level of less than 20% wt. An existing forage chopper was upgraded with new screens. It is used to reduce biomass to the desired particle size of 2-25 mm fiber length. To complete the material handling between these pieces of equipment, a bucket elevator and two belt conveyors must be installed. The bucket elevator has been installed. The conveyors are being procured using other funding sources. Fast pyrolysis bio-oil, char and non-condensable gases were produced from an 8 kg/hr fluidized bed reactor. The bio-oil was collected in a fractionating bio-oil collection system that produced multiple fractions of bio-oil. This bio-oil was fractionated through two separate, but equally important, mechanisms within the collection system. The aerosols and vapors were selectively collected by utilizing laminar flow conditions to prevent aerosol collection and electrostatic precipitators to collect the aerosols. The vapors were successfully collected through a selective condensation process. 
The combination of these two mechanisms

  13. Designing Sustainable Urban Social Housing in the United Arab Emirates

    Directory of Open Access Journals (Sweden)

    Khaled Galal Ahmed

    2017-08-01

    Full Text Available The United Arab Emirates is experiencing a challenging turn towards sustainable social housing. Conventional neighborhood planning and design principles are being replaced by those leading to more sustainable urban forms. To trace this challenging move, the research has investigated the degree of consideration of sustainable urban design principles in two social housing neighborhoods in Al Ain City in Abu Dhabi Emirate, UAE. The first represents a conventional urban form based on the neighborhood theory; the other represents the new sustainable design. The ultimate aim is to define the obstacles hindering the full achievement of a sustainable urban form in this housing type. To undertake research investigations, a matrix of the design principles of sustainable urban forms has been initiated in order to facilitate the assessment of the urban forms of the two selected urban communities. Some qualitatively measurable design elements have been defined for each of these principles. The results of the analysis of the shift from ‘conventional’ to ‘sustainable’ case studies have revealed some aspects that would prevent the attainment of fully sustainable urban forms in newly designed social housing neighborhoods. Finally, the research concludes by recommending some fundamental actions to help meet these challenges in future design.

  14. Optimal design of upstream processes in biotransformation technologies.

    Science.gov (United States)

    Dheskali, Endrit; Michailidi, Katerina; de Castro, Aline Machado; Koutinas, Apostolis A; Kookos, Ioannis K

    2017-01-01

    In this work a mathematical programming model for the optimal design of the bioreaction section of biotechnological processes is presented. Equations for the estimation of the equipment cost derived from a recent publication by the US National Renewable Energy Laboratory (NREL) are also summarized. The cost-optimal design of process units and the optimal scheduling of their operation can be obtained using the proposed formulation, which has been implemented in software available from the journal web page or the corresponding author. The proposed optimization model can be used to quantify the effects of decisions taken at lab scale on the economics of the industrial-scale process. It is of paramount importance to note that this can be achieved at an early stage of the development of a biotechnological project. Two case studies are presented that demonstrate the usefulness and potential of the proposed methodology.
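
    Equipment cost correlations of the kind the record draws from NREL are typically capacity power laws. As an illustration only (the 0.6 exponent is the generic "six-tenths rule" of cost engineering, and the base figures are invented, not values taken from the paper or from NREL):

```python
def scaled_cost(base_cost, base_size, size, exponent=0.6):
    """Capacity-exponent ('six-tenths rule') equipment cost scaling:
    C = C0 * (S / S0) ** n, with n typically near 0.6."""
    return base_cost * (size / base_size) ** exponent

# Illustrative: doubling a vessel's capacity raises its cost by ~52%, not 100%,
# which is why scale-up decisions change process economics so strongly.
print(scaled_cost(500_000, 100, 200) / 500_000)
```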

  15. The design of a nanolithographic process

    Science.gov (United States)

    Johannes, Matthew Steven

    This research delineates the design of a nanolithographic process for nanometer scale surface patterning. The process involves the combination of serial atomic force microscope (AFM) based nanolithography with the parallel patterning capabilities of soft lithography. The union of these two techniques provides for a unique approach to nanoscale patterning that establishes a research knowledge base and tools for future research and prototyping. To successfully design this process a number of separate research investigations were undertaken. A custom 3-axis AFM with feedback control on three positioning axes of nanometer precision was designed in order to execute nanolithographic research. This AFM system integrates a computer aided design/computer aided manufacturing (CAD/CAM) environment to allow for the direct synthesis of nanostructures and patterns using a virtual design interface. This AFM instrument was leveraged primarily to study anodization nanolithography (ANL), a nanoscale patterning technique used to generate local surface oxide layers on metals and semiconductors. Defining research focused on the automated generation of complex oxide nanoscale patterns as directed by CAD/CAM design as well as the implementation of tip-sample current feedback control during ANL to increase oxide uniformity. Concurrently, research was conducted concerning soft lithography, primarily in microcontact printing (muCP), and pertinent experimental and analytic techniques and procedures were investigated. Due to the masking abilities of the resulting oxide patterns from ANL, the results of AFM based patterning experiments are coupled with micromachining techniques to create higher aspect ratio structures at the nanoscale. These relief structures are used as master pattern molds for polymeric stamp formation to reproduce the original in a parallel fashion using muCP stamp formation and patterning. This new method of master fabrication provides for a useful alternative to

  16. ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS - THE THERMAL DESORPTION UNIT - APPLICATIONS ANALYSIS REPORT

    Science.gov (United States)

    ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...

  17. Catalyzed steam gasification of biomass. Phase 3: Biomass Process Development Unit (PDU) construction and initial operation

    Science.gov (United States)

    Healey, J. J.; Hooverman, R. H.

    1981-12-01

    The design and construction of the process development unit (PDU) are described in detail, examining each system and component in order. Siting, the chip handling system, the reactor feed system, the reactor, the screw conveyor, the ash dump system, the PDU support equipment, control and information management, and shakedown runs are described.

  18. Design of launch systems using continuous improvement process

    Science.gov (United States)

    Brown, Richard W.

    1995-01-01

    The purpose of this paper is to identify a systematic process for improving ground operations for future launch systems. This approach is based on the Total Quality Management (TQM) continuous improvement process. While the continuous improvement process is normally identified with making incremental changes to an existing system, it can be used on new systems if they use past experience as a knowledge base. In the case of the Reusable Launch Vehicle (RLV), the Space Shuttle operations provide many lessons. The TQM methodology used for this paper is borrowed from the United States Air Force 'Quality Air Force' Program. There is a general overview of the continuous improvement process, with concentration on the formulation phase. During this phase critical analyses are conducted to determine the strategy and goals for the remaining development process. These analyses include analyzing the mission from the customer's point of view, developing an operations concept for the future, assessing current capabilities, and determining the gap to be closed between current capabilities and future needs and requirements. A brief analysis of the RLV, relative to the Space Shuttle, is used to illustrate the concept. Using the continuous improvement design concept has many advantages. These include a customer-oriented process, which will develop a more marketable product, and better integration of operations and systems during the design phase. However, the use of TQM techniques will require changes, including more discipline in the design process and more emphasis on data gathering for operational systems. The benefits will far outweigh the additional effort.

  19. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    Science.gov (United States)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and is the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the problems of reproducibility of the AEC are smaller than those of the processor units, because almost all processors fall outside the acceptable variation limits, which can degrade mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  20. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data on PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  1. Formal design specification of a Processor Interface Unit

    Science.gov (United States)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1992-01-01

    This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory-interface, bus-interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is an undisputed fact, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.

  2. Design and Simulation of an Absorption Diffusion Solar Refrigeration Unit

    Directory of Open Access Journals (Sweden)

    B. Chaouachi

    2007-01-01

    Full Text Available The purpose of this study was the design and simulation of an absorption-diffusion refrigerator for domestic use, with solar energy as the source. The design takes into account the climatic conditions and the unit cost arising from technical constraints imposed by the technology of the various components of the installation, such as the solar generator, the condenser, the absorber and the evaporator. Mass and energy conservation equations were developed for each component of the cycle and solved numerically. The results obtained showed that the newly designed single-pressure ammonia absorption cycle is well suited to cold production by means of solar energy, and that with a simple flat-plate collector a power of the order of 900 watts, sufficient for domestic use, can be reached.

  3. Systematic parametric design/calculation of the piston rod unit

    Science.gov (United States)

    Kacani, V.

    2015-08-01

    In this article a modern and economical method for the strength calculation of the piston rod unit and its components under different operating conditions is presented. For this purpose, commercial FEA software is linked with the company's own calculation tools. Parametric user input is followed by automatic pre- and post-processing. Afterwards, the strength calculation is carried out at all critical points of the piston rod connection, assisted by an extra module based on general standards and special codes for reciprocating compressors. In this way most arrangements of the piston rod unit, as well as the special geometries of the single components (piston, piston rod and piston nut), can easily be considered. The modeling of the notches, especially on the piston rod, piston and piston nut, is covered in detail.

  4. Synthesis and Design of Processing Networks

    DEFF Research Database (Denmark)

    Quaglia, Alberto; Sarup, Bent; Sin, Gürkan

    2012-01-01

    In this contribution, we propose an integrated business and engineering framework for synthesis and design of processing networks under uncertainty. In our framework, an adapted formulation of the transhipment problem is integrated with a superstructure, leading to a Stochastic Mixed Integer Non...... Linear Program (sMINLP), which is solved to determine simultaneously the optimal strategic and tactical decisions with respect to the processing network, the material flows, raw material and product portfolio. The framework allows time-effective and robust formulation, solution and analysis of largescale...... synthesis problems in presence of uncertainty parameters, contributing to broaden the range of application of stochastic programming and optimization to real industrial problems. The framework is applied to an industrial case study based on soybean processing, to identify the optimal processing network...

  6. Design of Separation Processes with Ionic Liquids

    DEFF Research Database (Denmark)

    2015-01-01

    A systematic methodology for screening and designing Ionic Liquid (IL)-based separation processes is proposed and demonstrated using several case studies of both aqueous and non-aqueous systems, for instance, ethanol + water, ethanol + hexane, benzene + hexane, and toluene + methylcyclohexane...... The best four ILs for these mixtures are [mmim][dmp], [emim][bti], [emim][etso4] and [hmim][tcb], respectively. All of them were used as entrainers in extractive distillation. A process simulation of each system was carried out and showed both a lower energy requirement and lower solvent usage as compared......

  7. Point process models for household distributions within small areal units

    Directory of Open Access Journals (Sweden)

    Zack W. Almquist

    2012-06-01

    Full Text Available Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
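
    The simplest instance of such a model is a homogeneous spatial Poisson process: draw a Poisson count of households for the areal unit, then place them uniformly at random. The sketch below illustrates only this baseline (the paper's models are richer, e.g. covariate-driven); the function and parameter names are ours:

```python
import math
import random

def simulate_households(rate_per_km2, width_km, height_km, seed=42):
    """Simulate household locations in one rectangular areal unit as a
    homogeneous spatial Poisson process: Poisson count, uniform placement."""
    rng = random.Random(seed)
    lam = rate_per_km2 * width_km * height_km   # expected number of households
    # Knuth's inversion method for a Poisson draw (keeps the sketch dependency-free)
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    n = k - 1
    return [(rng.uniform(0, width_km), rng.uniform(0, height_km)) for _ in range(n)]
```

    Conditioning the count on the published aggregate for the areal unit, instead of drawing it, is the natural next refinement.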

  8. Hygienic Design in the Food Processing Industry

    DEFF Research Database (Denmark)

    Hilbert, Lisbeth Rischel; Hjelm, M.

    2001-01-01

    Bacterial adhesion and biofilm formation are of major concern in food production and processing industry. In 1998 a Danish co-operation programme under the title Centre for Hygienic Design was funded to combine the skills of universities, research institutes and industry to focus on the following...... with cleaning chemicals and cleaning procedures • Optimising design of production equipment • Development of environmentally friendly cleaning procedures for removal of biofilm The partners include food production/processing companies and producers of equipment for the food industry, cleaning chemicals...... approach is to focus on surface material hygienic lifetime. Test of this is made in an industrial test loop run by biotechnology researchers in co-operation with materials producers and a food producer to compare biofilm formation, cleanability and deterioration of different rubber and plastic materials...

  9. Change in requirements during the design process

    DEFF Research Database (Denmark)

    Sudin, Mohd Nizam Bin; Ahmed-Kristensen, Saeema

    2011-01-01

    on a pre-defined coding scheme. The results of the study shows that change in requirements were initiated by internal stakeholders through analysis and evaluation activities during the design process, meanwhile external stakeholders were requested changes during the meeting with consultant. All......Specification is an integral part of the product development process. Frequently, more than a single version of a specification is produced due to changes in requirements. These changes are often necessary to ensure the scope of the design problem is as clear as possible. However, the negative...... effects of such changes include an increase in lead-time and cost. Thus, support to mitigate change in requirements is essential. A thorough understanding of the nature of changes in requirements is essential before a method or tool to mitigate these changes can be proposed. Therefore, a case study...

  10. Conceptual Design of Industrial Process Displays

    DEFF Research Database (Denmark)

    Pedersen, C.R.; Lind, Morten

    1999-01-01

    by a simple example from a plant with batch processes. Later the method is applied to develop a supervisory display for a condenser system in a nuclear power plant. The differences between the continuous plant domain of power production and the batch processes from the example are analysed and broad...... categories of display types are proposed. The problems involved in specification and invention of a supervisory display are analysed and conclusions from these problems are made. It is concluded that the design method proposed provides a framework for the progress of the display design and is useful in pin-pointing the actual problems. The method was useful in reducing the number of existing displays that could fulfil the requirements of the supervision task. The method provided at the same time a framework for dealing with the problems involved in inventing new displays based on structured analysis. However...

  11. Change in requirements during the design process

    DEFF Research Database (Denmark)

    Sudin, Mohd Nizam Bin; Ahmed-Kristensen, Saeema

    2011-01-01

    Specification is an integral part of the product development process. Frequently, more than a single version of a specification is produced due to changes in requirements. These changes are often necessary to ensure the scope of the design problem is as clear as possible. However, the negative...... effects of such changes include an increase in lead-time and cost. Thus, support to mitigate change in requirements is essential. A thorough understanding of the nature of changes in requirements is essential before a method or tool to mitigate these changes can be proposed. Therefore, a case study...... approach was employed to understand the nature of change in requirements during the design process - particularly concerning the initiation, discovery, and motivation of these changes. Semi-structured interviews were adopted as the data collection method. The interviews were transcribed and analysed based...

  12. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom-tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
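
    The access pattern behind a CUDA parallel reduction is a stride-halving tree: each step combines pairs in parallel, so n inputs collapse in about log2(n) steps. A serial Python sketch of that same pattern (ours, to illustrate the algorithm, not the authors' kernels):

```python
def tree_reduce(values, op=lambda a, b: a + b):
    """Pairwise (tree) reduction mirroring a GPU kernel's stride-halving loop.
    Each pass over the data corresponds to one synchronized kernel step in
    which all pair combinations would run in parallel on the GPU."""
    data = list(values)
    n = len(data)
    stride = 1
    while stride < n:
        # Combine element i with its partner stride positions away.
        for i in range(0, n - stride, 2 * stride):
            data[i] = op(data[i], data[i + stride])
        stride *= 2
    return data[0]
```

    As the record's benchmarks of different binary operators suggest, any associative `op` works here, which is what makes the reduction a reusable building block.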

  13. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Antz, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-11-30

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the results, we observe that even for our most basic asynchronous relaxation scheme, despite its lower convergence rate compared to the Gauss-Seidel relaxation (as we expected), the asynchronous iteration running on GPUs is still able to provide solution approximations of certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability and the good fit of the asynchronous schemes for the highly parallel GPU architectures. Further, enhancing the most basic asynchronous approach with hybrid schemes – using multiple iterations within the ”subdomain” handled by a GPU thread block and Jacobi-like asynchronous updates across the ”boundaries”, subject to tuning various parameters – we manage not only to recover the loss of global convergence but often to accelerate convergence by up to a factor of two (compared to the effective but difficult-to-parallelize Gauss-Seidel type of schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of the asynchronous methods not only as a stand-alone numerical solver for linear systems of equations fulfilling certain convergence conditions but, more importantly, as a smoother in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance of and need for asynchronous methods, such as the ones described in this work, is expected to grow.
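
    The synchronous Jacobi sweep is the building block that the paper relaxes into block-asynchronous GPU updates: every component is updated from the previous iterate, so all updates are independent and parallelizable. A minimal serial sketch (ours; the matrix and iteration count are illustrative, and convergence requires, e.g., strict diagonal dominance):

```python
def jacobi(A, b, iters=100):
    """Synchronous Jacobi relaxation for A x = b.
    Each sweep builds the new iterate entirely from the old one, which is
    why all n component updates could run concurrently on a GPU."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Note: the comprehension reads the *old* x throughout (Jacobi);
        # updating in place instead would give Gauss-Seidel.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

    Dropping the sweep-level synchronization, so that some components read already-updated neighbors and others read stale ones, gives exactly the asynchronous behavior whose convergence trade-off the record quantifies.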

  14. User-Centered Design (UCD) Process Description

    Science.gov (United States)

    2014-12-01

    mockups and prototypes. CONCLUSIONS AND RECOMMENDATIONS UCD provides guidance for improving total system performance by considering the real-world... Artifacts from the UCD process will focus and guide the hardware and software integration efforts and will support systems engineering goals to achieve... against essential story scenarios, eventually leading to the development of high-fidelity mockups and prototypes. Figure 1. User-centered design (UCD

  15. The Processes Involved in Designing Software.

    Science.gov (United States)

    1980-08-01

    body of relevant knowledge. There has been a limited amount of research on the process of design or on problems that are difficult enough to require the... refinement of those subproblems. Our results are therefore potentially limited to similar straightforward problems. In tasks for which the... They first break the problem into its major constituents, thus forming a solution model. During each iteration, subproblems from the previous cycle are

  16. Modeling and design of a combined transverse and axial flow threshing unit for rice harvesters

    Directory of Open Access Journals (Sweden)

    Zhong Tang

    2014-11-01

    Full Text Available The thorough investigation of both grain threshing and grain separating processes is a crucial consideration for effective structural design and variable optimization of the tangential flow threshing cylinder and longitudinal axial flow threshing cylinder composite unit (TLFC unit) of small and medium-sized (SME) combine harvesters. The objective of this paper was to obtain the structural variables of a TLFC unit by theoretical modeling and experimentation on a tangential flow threshing cylinder unit (TFC unit) and a longitudinal axial flow threshing cylinder unit (LFC unit). Threshing and separation equations for five types of threshing teeth (knife bar, trapezoidal tooth, spike tooth, rasp bar, and rectangular bar) were obtained using probability theory. Results demonstrate that the threshing and separation capacity of the knife bar TFC unit was stronger than that of the other threshing teeth. The length of the LFC unit was divided into four sections, with helical blades on the first section (0-0.17 m), the spike tooth on the second section (0.17-1.48 m), the trapezoidal tooth on the third section (1.48-2.91 m), and the discharge plate on the fourth section (2.91-3.35 m). Test results showed an un-threshed grain rate of 0.243%, an un-separated grain rate of 0.346%, and a broken grain rate of 0.184%. As evidenced by these results, threshing and separation performance is significantly improved by analyzing and optimizing the structure and variables of a TLFC unit. The results of this research can be used to successfully design the TLFC unit of small and medium-sized (SME) combine harvesters.

  17. MULTI-WORLD MECHANISM FOR MODELING EVOLUTIONARY DESIGN PROCESS FROM CONCEPTUAL DESIGN TO DETAILED DESIGN

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A multi-world mechanism is developed for modeling the evolutionary design process from conceptual design to detailed design. In this mechanism, the evolutionary design database is represented by a sequence of worlds corresponding to the design descriptions at different design stages. In each world, only the differences with its ancestor world are recorded. When the design descriptions in one world are changed, these changes are then propagated to its descendant worlds automatically. A case study is conducted to show the effectiveness of this evolutionary design database model.
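
    A minimal sketch of the delta-recording idea (hypothetical names; the paper's actual data model is not reproduced here): each world stores only its differences from its ancestor, so a lookup walks the ancestor chain, and a change made in an ancestor world is automatically visible in every descendant that has not overridden it.

```python
class World:
    """One design stage; records only differences from its ancestor world."""

    def __init__(self, parent=None):
        self.parent = parent
        self.diff = {}  # attributes changed in this world only

    def set(self, key, value):
        self.diff[key] = value

    def get(self, key):
        # Walk up the ancestor chain until the attribute is found.
        w = self
        while w is not None:
            if key in w.diff:
                return w.diff[key]
            w = w.parent
        raise KeyError(key)


conceptual = World()
conceptual.set("shaft_diameter", 20)

detailed = World(parent=conceptual)   # detailed design stage
detailed.set("material", "steel")     # refinement recorded as a difference

conceptual.set("shaft_diameter", 25)  # change propagates to descendants
```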

  18. Efficient Design in a DC to DC Converter Unit

    Science.gov (United States)

    Bruemmer, Joel E.; Williams, Fitch R.; Schmitz, Gregory V.

    2002-01-01

    Space Flight hardware requires high power conversion efficiencies due to limited power availability and the weight penalties of cooling systems. The International Space Station (ISS) Electric Power System (EPS) DC-DC Converter Unit (DDCU) power converter is no exception. This paper explores the design methods and tradeoffs that were utilized to accomplish high efficiency in the DDCU. An isolating DC to DC converter was selected for the ISS power system because of requirements for separate primary and secondary grounds and for a well-regulated secondary output voltage derived from a widely varying input voltage. A flyback-current-fed push-pull topology, or improved Weinberg circuit, was chosen for this converter because of its potential for high efficiency and reliability. To enhance efficiency, a non-dissipative snubber circuit for the very-low-Rds-on Field Effect Transistors (FETs) was utilized, redistributing the energy that could be wasted during the switching cycle of the power FETs. A unique, low-impedance connection system was utilized to improve contact resistance over a bolted connection. For improved consistency in performance and to lower internal wiring inductance and losses, a planar bus system is employed. All of these choices contributed to the design of a 6.25 kW regulated DC to DC converter that is 95 percent efficient. The methodology used in the design of this DC to DC Converter Unit may be directly applicable to other systems that require a conservative approach to efficient power conversion and distribution.
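
    The headline numbers imply a concrete thermal budget. A back-of-the-envelope check, using only the figures quoted above:

```python
P_out = 6250.0     # W, rated output of the DDCU (6.25 kW)
efficiency = 0.95  # 95 percent, as reported

P_in = P_out / efficiency  # input power drawn from the source
P_loss = P_in - P_out      # power dissipated as heat in the converter

# Roughly 329 W must be removed by the (mass-penalized) cooling system;
# each additional point of efficiency at this power level saves tens of watts.
```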

  19. Chip Design Process Optimization Based on Design Quality Assessment

    Science.gov (United States)

    Häusler, Stefan; Blaschke, Jana; Sebeke, Christian; Rosenstiel, Wolfgang; Hahn, Axel

    2010-06-01

    Nowadays, the managing of product development projects is increasingly challenging. Especially the IC design of ASICs with both analog and digital components (mixed-signal design) is becoming more and more complex, while the time-to-market window narrows at the same time. Still, high quality standards must be fulfilled. Projects and their status are becoming less transparent due to this complexity. This makes the planning and execution of projects rather difficult. Therefore, there is a need for efficient project control. A main challenge is the objective evaluation of the current development status. Are all requirements successfully verified? Are all intermediate goals achieved? Companies often develop special solutions that are not reusable in other projects. This makes the quality measurement process itself less efficient and produces too much overhead. The method proposed in this paper is a contribution to solve these issues. It is applied at a German design house for analog mixed-signal IC design. This paper presents the results of a case study and introduces an optimized project scheduling on the basis of quality assessment results.

  20. 3 CFR - Designation of Officers of the United States Section, International Boundary and Water Commission...

    Science.gov (United States)

    2010-01-01

    ..., International Boundary and Water Commission, United States and Mexico To Act as the Commissioner of the United... Designation of Officers of the United States Section, International Boundary and Water Commission, United... of the United States Section, International Boundary and Water Commission, United......

  1. Moving bed biofilm reactor technology: process applications, design, and performance.

    Science.gov (United States)

    McQuarrie, James P; Boltz, Joshua P

    2011-06-01

    The moving bed biofilm reactor (MBBR) can operate as a 2-phase (anoxic) or 3-phase (aerobic) system with buoyant free-moving plastic biofilm carriers. These systems can be used for municipal and industrial wastewater treatment, aquaculture, and potable water denitrification, in roughing, secondary, tertiary, and sidestream applications. The system includes a submerged biofilm reactor and a liquid-solids separation unit. The MBBR process benefits include the following: (1) capacity to meet treatment objectives similar to activated sludge systems with respect to carbon oxidation and nitrogen removal, but with a smaller tank volume than a clarifier-coupled activated sludge system; (2) biomass retention is clarifier-independent, and solids loading to the liquid-solids separation unit is reduced significantly when compared with activated sludge systems; (3) the MBBR is a continuous-flow process that does not require a special operational cycle for biofilm thickness (L_F) control (e.g., biologically active filter backwashing); and (4) liquid-solids separation can be achieved with a variety of processes, including conventional and compact high-rate processes. Information related to system design is fragmented and poorly documented. This paper seeks to address this issue by summarizing state-of-the-art MBBR design procedures and providing the reader with an overview of some commercially available systems and their components.

  2. DESIGN OF INSTRUCTION LIST (IL PROCESSOR FOR PROCESS CONTROL

    Directory of Open Access Journals (Sweden)

    Mrs. Shilpa Rudrawar

    2012-06-01

    Full Text Available A Programmable Logic Controller (PLC) is a device that allows an electro-mechanical engineer to automate a mechanical process in an efficient manner. Safety-critical high-speed applications require quick response. In order to improve the speed of executing PLC instructions, an IL processor is investigated. A hierarchical approach has been used so that basic units can be modeled using behavioral programming. These basic units are combined using structural programming. A hardwired control approach is used to design the control unit. The proposed IL (Instruction List) processor works upon our developed IL instructions, which are compatible with the IL programming language according to the norm IEC 61131-3. This can accelerate instruction execution and ultimately improve real-time performance compared to the traditional sequential execution of a PLC program, giving quick response in such safety-critical high-speed applications. The design is implemented on an FPGA for verification purposes. To validate the advantages of the proposed design, two ladder programs are compiled to the instruction set of the proposed IL processor as well as in the IL programming language.
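
    To make the execution model concrete, here is a tiny software interpreter for a subset of IEC 61131-3 IL (a sketch for illustration only; the paper's processor executes such instructions in hardware). IL keeps a single "current result" accumulator that each instruction reads or updates.

```python
def run_il(program, variables):
    """Interpret a small subset of IEC 61131-3 Instruction List.
    'cr' is the current result (accumulator) that IL instructions operate on."""
    v = dict(variables)
    cr = False
    for op, arg in program:
        if op == "LD":        # load operand into the current result
            cr = v[arg]
        elif op == "AND":
            cr = cr and v[arg]
        elif op == "ANDN":    # AND with the negated operand
            cr = cr and not v[arg]
        elif op == "OR":
            cr = cr or v[arg]
        elif op == "ST":      # store the current result into the operand
            v[arg] = cr
        else:
            raise ValueError(f"unsupported IL instruction: {op}")
    return v


# Classic motor seal-in rung: (start OR motor) AND NOT stop -> motor
seal_in = [("LD", "start"), ("OR", "motor"), ("ANDN", "stop"), ("ST", "motor")]
```

    The hardwired processor described above evaluates one such instruction per cycle instead of interpreting it in software, which is where the speedup comes from.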

  3. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using the ground radar is used to test the performance of the proposed algorithm. The performance of the proposed algorithm is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is over 100 times less than that using the conventional central processing unit (CPU).
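
    The two phases of the hybrid filter can be sketched in a one-dimensional toy form (an illustrative simplification; the paper uses full orbital dynamics and a Gaussian mixture rather than the single moment-matched Gaussian below): particles carry the uncertainty between radar passes, and a Kalman-style update absorbs each measurement when it arrives.

```python
import random


def propagate(particles, f, q_std):
    """Monte Carlo propagation: push each particle through the dynamics f
    and add process noise. This is the step the GPU parallelizes."""
    return [f(x) + random.gauss(0.0, q_std) for x in particles]


def kalman_update(particles, z, r_var):
    """Moment-match the particle cloud to a single Gaussian, then apply
    a scalar Kalman update with measurement z and measurement variance r_var."""
    n = len(particles)
    m = sum(particles) / n
    p = sum((x - m) ** 2 for x in particles) / n
    k = p / (p + r_var)        # Kalman gain
    m_new = m + k * (z - m)    # updated mean
    p_new = (1 - k) * p        # updated variance
    return m_new, p_new
```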

  4. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods on a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
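
    The abstraction is a device that heuristically minimizes an Ising Hamiltonian, E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j over spins s_i in {-1, +1}. A brute-force reference solver of the kind such benchmarks compare against can be sketched as follows (hypothetical names, practical only for tiny instances):

```python
from itertools import product


def ising_energy(spins, h, J):
    """Energy of a spin configuration under local fields h and couplings J.
    J maps index pairs (i, j) to coupling strengths."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e


def brute_force_ground_state(n, h, J):
    """Exhaustively search all 2^n configurations; exponential, so only
    usable as a reference for small benchmark instances."""
    return min(product([-1, 1], repeat=n), key=lambda s: ising_energy(s, h, J))
```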

  5. Design and test hardware for a solar array switching unit

    Science.gov (United States)

    Patil, A. R.; Cho, B. H.; Sable, D.; Lee, F. C.

    1992-01-01

    This paper describes the control of a pulse width modulated (PWM) type sequential shunt switching unit (SSU) for spacecraft applications. It is found that the solar cell output capacitance has a significant impact on SSU design. Shorting of this cell capacitance by the PWM switch causes input current surges. These surges are minimized by the use of a series filter inductor. The system with a filter is analyzed for ripple and the control to output-voltage transfer function. Stable closed loop design considerations are discussed. The results are supported by modeling and measurements of loop gain and of closed-loop bus impedance on test hardware for NASA's 120 V Earth Observation System (EOS). The analysis and modeling are also applicable to NASA's 160 V Space Station power system.

  6. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme for a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  7. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    The PE (Portable Executable) format is an updated version of the Common Object File Format (COFF) [Mic06], documented by Microsoft (MSDN, November 2006, Revision 4.1).

  8. An Architecture of Deterministic Quantum Central Processing Unit

    OpenAIRE

    Xue, Fei; Chen, Zeng-Bing; Shi, Mingjun; Zhou, Xianyi; Du, Jiangfeng; Han, Rongdian

    2002-01-01

    We present an architecture of QCPU(Quantum Central Processing Unit), based on the discrete quantum gate set, that can be programmed to approximate any n-qubit computation in a deterministic fashion. It can be built efficiently to implement computations with any required accuracy. QCPU makes it possible to implement universal quantum computation with a fixed, general purpose hardware. Thus the complexity of the quantum computation can be put into the software rather than the hardware.

  9. BitTorrent Processing Unit: An Outlook on the BPU

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic, graphics, and input/output processing were all handled by the CPU (Central Processing Unit). As processing became increasingly specialized, however, NVIDIA in 1999 was the first to split graphics processing off on its own, proposing the concept of the GPU (Graphics Processing Unit). Eight years on, the GPU has become the mainstay of graphics processing, familiar to every gamer. Recently, two Taiwanese companies proposed the concept of a BPU (BitTorrent Processing Unit). Below, we take a look at this very new concept product.

  10. Reliability Methods for Shield Design Process

    Science.gov (United States)

    Tripathi, R. K.; Wilson, J. W.

    2002-01-01

    Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phases of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.

  11. Saving Material with Systematic Process Designs

    Science.gov (United States)

    Kerausch, M.

    2011-08-01

    Global competition is forcing the stamping industry to further increase quality, to shorten time-to-market and to reduce total cost. Continuous balancing between these classical time-cost-quality targets throughout the product development cycle is required to ensure future economic success. In today's industrial practice, die layout standards are typically assumed to implicitly ensure the balancing of company-specific time-cost-quality targets. Although die layout standards are a very successful approach, there are two methodical disadvantages. First, the capabilities for tool design have to be continuously adapted to technological innovations, e.g. to take advantage of the full forming capability of new materials. Secondly, the great variety of die design aspects has to be reduced to a generic rule or guideline, e.g. binder shape, draw-in conditions or the use of drawbeads. Therefore, it is important not to overlook cost or quality opportunities when applying die design standards. This paper describes a systematic workflow with a focus on minimizing material consumption. The starting point of the investigation is a full process plan for a typical structural part, in which all requirements defined by a set of die design standards with industrial relevance are fulfilled. In a first step, binder and addendum geometry is systematically checked for material saving potentials. In a second step, blank shape and draw-in are adjusted to meet thinning, wrinkling and springback targets for a minimum blank solution. Finally, the identified die layout is validated with respect to production robustness versus splits, wrinkles and springback. For all three steps the applied methodology is based on finite element simulation combined with a stochastic variation of input variables. With the proposed workflow, a well-balanced (time-cost-quality) production process assuring minimal material consumption can be achieved.

  12. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
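
    The per-pixel independence that makes this operation a good GPU fit is easy to see in a pure-Python reference version (a sketch only; the paper's CUDA kernels map threads to pixels and, in the improved scheme, stage tiles in shared memory):

```python
def laplacian_sharpen(img, alpha=1.0):
    """Sharpen a grayscale image (list of lists) with a 4-neighbour Laplacian.
    Every output pixel depends only on its own small neighbourhood, so each
    one can be computed by an independent GPU thread."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            out[y][x] = img[y][x] - alpha * lap  # subtracting enhances edges
    return out
```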

  13. Sustainable Process Design of Lignocellulose based Biofuel

    DEFF Research Database (Denmark)

    Mangnimit, Saranya; Malakul, Pomthong; Gani, Rafiqul

    the production and use of alternative and sustainable energy sources as rapidly as possible. Biofuel is a type of alternative energy that can be produced from many sources including sugar substances (such as sugarcane juice and molasses), starchy materials (such as corn and cassava), and lignocellulosic...... available, and are also non-food crops. In this respect, cassava rhizome has several characteristics that make it a potential feedstock for fuel ethanol production. It has a high content of cellulose and hemicelluloses. The objective of this paper is to present a study focused on the sustainable process...... design of bioethanol production from cassava rhizome using various computer-aided tools through a systematic and efficient work-flow. The study includes process simulation, sustainability analysis, economic evaluation and life cycle assessment (LCA) according to a well-defined workflow that guarantees...

  14. Innovative machine designs for radiation processing

    Science.gov (United States)

    Vroom, David

    2007-12-01

    In the 1990s, Raychem Corporation established a program to investigate the commercialization of several promising applications involving the combined use of its core competencies in materials science, radiation chemistry and e-beam radiation technology. The applications investigated included those that would extend Raychem's well-known heat-recoverable polymer and wire and cable product lines, as well as new potential applications such as remediation of contaminated aqueous streams. A central part of the program was the development of new accelerator technology designed to improve quality, lower processing costs and efficiently process conformable materials such as liquids. A major emphasis of this new irradiation technology was to treat the accelerator and product handling systems as one integrated system, not as two complementary systems.

  15. Sustainable process design & analysis of hybrid separations

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan; Befort, Bridgette; Garg, Nipun

    2016-01-01

    Distillation is an energy intensive operation in chemical process industries. There are around 40,000 distillation columns in operation in the US, requiring approximately 40% of the total energy consumption in US chemical process industries. However, analysis of separations by distillation has...... shown that more than 50% of energy is spent in purifying the last 5-10% of the distillate product. Membrane modules on the other hand can achieve high purity separations at lower energy costs, but if the flux is high, it requires large membrane area. A hybrid scheme where distillation and membrane...... modules are combined such that each operates at its highest efficiency, has the potential for significant energy reduction without significant increase of capital costs. This paper presents a method for sustainable design of hybrid distillation-membrane schemes with guaranteed reduction of energy...

  16. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451

  17. USE OF LEAN PRODUCTION INSTRUMENTS IN DESIGNING THE EDUCATIONAL PROCESS

    Directory of Open Access Journals (Sweden)

    Elietta P. Burnasheva

    2016-03-01

    Full Text Available Introduction: the concept of lean production seeks not a reduction of costs but the complete elimination of losses that do not add value to the product or service. In any system, in all processes – from production and assembly to hospitality, education, health, transport and social services – there are hidden losses. Teaching itself is a kind of production process in which a certain “product” (the student) acquires added value (knowledge and skills), which is why it has become topical in educational institutions to establish a working group on the introduction of lean production into the learning process. The article presents the factors that are to be taken into account while designing the educational process based on lean production principles. Materials and Methods: methods of analysis of the existing system of vocational training in higher school, monitoring of the results of educational practice, and modeling and experimental work in the process of analytical work were used. Results: an important direction for eliminating losses in the educational process is the development of interlinked curricula, allowing repeated study of a number of didactic units to be avoided in the organization of continuous training in the system “Vocational education – Higher education”. In order to eliminate the possibility of an incompetent graduate, one should focus on the organisation of objective final control. Losses in education are also caused by the mismatch between labour market demand and the spectrum of areas of training in educational institutions. Discussion and Conclusions: the lean production possibilities are defined as instrumental in ensuring the organisation of “the process of lean learning”: by applying lean production instruments such as the designing of the educational process, the prevention of “faulty work” while training students, the attuning of the training system to employers’ requests, and the visualisation of the education

  18. Designer cell signal processing circuits for biotechnology.

    Science.gov (United States)

    Bradley, Robert W; Wang, Baojun

    2015-12-25

    Microorganisms are able to respond effectively to diverse signals from their environment and internal metabolism owing to their inherent sophisticated information processing capacity. A central aim of synthetic biology is to control and reprogramme the signal processing pathways within living cells so as to realise repurposed, beneficial applications ranging from disease diagnosis and environmental sensing to chemical bioproduction. To date most examples of synthetic biological signal processing have been built based on digital information flow, though analogue computing is being developed to cope with more complex operations and larger sets of variables. Great progress has been made in expanding the categories of characterised biological components that can be used for cellular signal manipulation, thereby allowing synthetic biologists to more rationally programme increasingly complex behaviours into living cells. Here we present a current overview of the components and strategies that exist for designer cell signal processing and decision making, discuss how these have been implemented in prototype systems for therapeutic, environmental, and industrial biotechnological applications, and examine emerging challenges in this promising field.

  19. Antenna Design Considerations for the Advanced Extravehicular Mobility Unit

    Science.gov (United States)

    Bakula, Casey J.; Theofylaktos, Onoufrios

    2015-01-01

    NASA is designing an Advanced Extravehicular Mobility Unit (AEMU) to support future manned missions beyond low-Earth orbit (LEO). A key component of the AEMU is the communications assembly that allows for the wireless transfer of voice, video, and suit telemetry. The Extravehicular Mobility Unit (EMU) currently used on the International Space Station (ISS) contains a radio system with a single omni-directional resonant cavity antenna operating slightly above 400 MHz, capable of transmitting and receiving data at a rate of about 125 kbps. Recent wireless communications architectures are calling for the inclusion of commercial wireless standards such as 802.11 that operate in higher frequency bands at much higher data rates. The current AEMU radio design supports a 400 MHz band for low-rate mission-critical data and a high-rate band based on commercial wireless local area network (WLAN) technology to support video, communication with non-extravehicular activity (EVA) assets such as wireless sensors and robotic assistants, and a redundant path for mission-critical EVA data. This paper recommends the replacement of the existing EMU antenna with a new antenna that maintains the performance characteristics of the current antenna but with lower weight and volume footprints. NASA has funded several firms to develop such an antenna over the past few years, and the most promising designs are variations on the basic patch antenna. This antenna technology at UHF is considered by the authors to be mature and ready for infusion into NASA AEMU technology development programs.

  20. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th

  1. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute $u\\overline{u}\\to n\\gamma$ ($n=2$ to 8) processes in $pp$ collisions at $\\sqrt{s} = 14$TeV by transferring the MadGraph generated HELAS amplitudes (FORTRAN) into newly developed HEGET ({\\bf H}ELAS {\\bf E}valuation with {\\bf G}PU {\\bf E}nhanced {\\bf T}echnology) codes written in CUDA, a C-platform developed by NVIDIA for general purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  2. Design of a didactic unit: the energy; Diseno de una unidad didactica: la energia

    Energy Technology Data Exchange (ETDEWEB)

    Meneses V, J.A.; Caballero S, C. [Facultad de Educacion, Universidad de Burgos (Venezuela)

    2003-07-01

    In order to design didactic units a model is proposed which includes the following items: justify the subject of study, carry out a didactic approach and scientific analysis, specify the main principles, spell out the teaching materials and their sequence, define the teaching process and the activities programme, and finally to agree on the criteria and assessment strategies involved. An example of a lesson about the energy concept is shown. (Author)

  3. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB, (v1.4), contains data from more than 940 sessions, which represents more...... than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product- and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text...... reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research....

  4. Enhanced teaching and student learning through a simulator-based course in chemical unit operations design

    Science.gov (United States)

    Ghasem, Nayef

    2016-07-01

    This paper illustrates a teaching technique used in computer applications in chemical engineering employed for designing various unit operation processes, where the students learn about unit operations by designing them. The aim of the course is not to teach design, but rather to teach the fundamentals and the function of unit operation processes through simulators. A case study presenting the teaching method was evaluated using student surveys and faculty assessments, which were designed to measure the quality and effectiveness of the teaching method. The results of the questionnaire conclusively demonstrate that this method is an extremely efficient way of teaching a simulator-based course. In addition to that, this teaching method can easily be generalised and used in other courses. A student's final mark is determined by a combination of in-class assessments conducted based on cooperative and peer learning, progress tests and a final exam. Results revealed that peer learning can improve the overall quality of student learning and enhance student understanding.

  5. Design of the magnetic diagnostics unit onboard LISA Pathfinder

    CERN Document Server

    Diaz-Aguiló, Marc; Ramos-Castro, Juan; Lobo, Alberto; García-Berro, Enrique

    2012-01-01

    LISA (Laser Interferometer Space Antenna) is a joint mission of ESA and NASA which aims to be the first space-borne gravitational wave observatory. Due to the high complexity and technological challenges that LISA will face, ESA decided to launch a technological demonstrator, LISA Pathfinder. The payload of LISA Pathfinder is the so-called LISA Technology Package, which will be the highest-sensitivity geodesic explorer flown to date. The LISA Technology Package is designed to measure relative accelerations between two test masses in nominal free fall (geodesic motion). The magnetic, thermal and radiation disturbances affecting the payload are monitored and dealt with by the diagnostics subsystem. The diagnostics subsystem consists of several modules, one of which is the magnetic diagnostics unit. Its main function is the assessment of differential acceleration noise between the test masses due to magnetic effects. To do so, it has to determine the magnetic characteristics of the test masses, namely their magne...

  6. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
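The abstract's combination of compressed sparse row (CSR) storage and Jacobi preconditioning can be sketched compactly. The following pure-Python illustration is not the MODFLOW UPCG code; it only shows the CSR data layout and a Jacobi-preconditioned CG loop on a toy 2×2 symmetric positive definite system:

```python
# Hedged sketch (not the UPCG solver): Jacobi-preconditioned conjugate
# gradient over a CSR-stored matrix, the structure the abstract describes.

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR-stored matrix A."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

def jacobi_pcg(data, indices, indptr, b, tol=1e-10, maxiter=100):
    """Solve A x = b (A symmetric positive definite), Jacobi preconditioner."""
    n = len(b)
    dinv = [0.0] * n                      # inverse diagonal of A
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            if indices[k] == i:
                dinv[i] = 1.0 / data[k]
    x = [0.0] * n
    r = b[:]                              # residual (x starts at zero)
    z = [dinv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxiter):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        z = [dinv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Toy system [[4, 1], [1, 3]] x = [1, 2] in CSR form.
data, indices, indptr = [4.0, 1.0, 1.0, 3.0], [0, 1, 0, 1], [0, 2, 4]
x = jacobi_pcg(data, indices, indptr, [1.0, 2.0])
```

In the GPGPU version described above, the matrix-vector product and the vector updates inside this loop are what run on the device, which is why minimizing host-device copies matters.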

  7. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
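The "scaled root-mean-square difference metric" used to compare analytical and Monte Carlo scatter estimates can take, for example, the following form; the exact normalization used in the paper is not given here, so scaling by the mean of the reference profile is an assumption:

```python
# Hedged sketch: one plausible scaled-RMS-difference metric for comparing a
# fast analytical scatter estimate against a Monte Carlo reference.
import math

def scaled_rmsd(estimate, reference):
    """RMS of the pointwise difference, scaled by the reference mean."""
    n = len(reference)
    mse = sum((e - r) ** 2 for e, r in zip(estimate, reference)) / n
    return math.sqrt(mse) / (sum(reference) / n)

mc = [10.0, 12.0, 11.0, 9.0]    # hypothetical Monte Carlo scatter profile
an = [10.5, 11.5, 11.0, 9.5]    # hypothetical analytical scatter profile
d = scaled_rmsd(an, mc)
```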

  8. Model-Based Integrated Process Design and Controller Design of Chemical Processes

    DEFF Research Database (Denmark)

    Abd Hamid, Mohd Kamaruddin Bin

    …and verification. Using thermodynamic and process insights, a bounded search space is first identified. This feasible solution space is further reduced to satisfy the process design and controller design constraints in sub-problems 2 and 3, respectively, until in the final sub-problem all feasible candidates… may or may not be able to find the optimal solution, depending on the performance of their search algorithms and computational demand; this method, using the attainable region and driving force concepts, is simple and able to find at least near-optimal designs (if not optimal) to IPDC problems… tested using a series of case studies representing three different systems in chemical processes: a single reactor system, a single separator system and a reactor-separator-recycle system.

  9. Process simulation during the design process makes the difference : Process simulations applied to a traditional design

    NARCIS (Netherlands)

    Traversari, R.; Goedhart, R.; Schraagen, J.M.C.

    2013-01-01

    Objective: To evaluate a traditionally designed operating room using simulation of various surgical workflows. Background: A literature search showed that there is no evidence for an optimal operating room layout regarding the position and size of an ultraclean ventilation (UCV) …

  12. [Visitation policy, design and comfort in Spanish intensive care units].

    Science.gov (United States)

    Escudero, D; Martín, L; Viña, L; Quindós, B; Espina, M J; Forcelledo, L; López-Amor, L; García-Arias, B; del Busto, C; de Cima, S; Fernández-Rey, E

    2015-01-01

    To determine the design and comfort in the Intensive Care Units (ICUs), by analysing visiting hours, information, and family participation in patient care. Descriptive, multicentre study. Spanish ICUs. A questionnaire e-mailed to members of the Spanish Society of Intensive Care Medicine, Critical and Coronary Units (SEMICYUC), subscribers of the Electronic Journal Intensive Care Medicine, and disseminated through the blog Proyecto HU-CI. A total of 135 questionnaires from 131 hospitals were analysed. Visiting hours: 3.8% open 24h, 9.8% open daytime, and 67.7% have 2 visits a day. Information: given only by the doctor in 75.2% of the cases, doctor and nurse together in 4.5%, with a frequency of once a day in 79.7%. During weekends, information is given in 95.5% of the cases. Information given over the phone 74.4%. Family participation in patient care: hygiene 11%, feeding 80.5%, physiotherapy 17%. Personal objects allowed: mobile phone 41%, computer 55%, sound system 77%, and television 30%. Architecture and comfort: all individual cubicles 60.2%, natural light 54.9%, television 7.5%, ambient music 12%, clock in the cubicle 15.8%, environmental noise meter 3.8%, and a waiting room near the ICU 68.4%. Visiting policy is restrictive, with a closed ICU being the predominating culture. On average, technological communication devices are not allowed. Family participation in patient care is low. The ICU design does not guarantee privacy or provide a desirable level of comfort. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.

  13. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
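The trade-off the abstract exploits can be illustrated with a toy model; this is not the paper's CET scheme, and the uniform-slot collision model and window-doubling policy are assumptions. If n ONUs each pick one of W response slots uniformly at random, the expected number of collision-free registrations is n(1 − 1/W)^(n−1), so an OLT with an estimate of n can grow the quiet window until the predicted success ratio is acceptable:

```python
# Toy illustration (not the paper's algorithm): size the quiet window from an
# estimated number of contending ONUs.

def expected_successes(n, w):
    """Expected collision-free registrations: n ONUs, w uniform slots."""
    return n * (1 - 1 / w) ** (n - 1)

def choose_quiet_window(n_est, target_ratio=0.9, w=8, w_max=1024):
    """Smallest window (by doubling) whose predicted success ratio meets target."""
    while w < w_max and expected_successes(n_est, w) / n_est < target_ratio:
        w *= 2
    return w

w = choose_quiet_window(20)   # window large enough for ~20 contending ONUs
```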

  14. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  15. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
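The HLL Riemann solver mentioned above computes a single interface flux from left/right states, their physical fluxes, and estimates of the slowest and fastest signal speeds. A generic 1-D sketch, independent of the ported code:

```python
# Background sketch of the HLL approximate Riemann solver for a generic
# 1-D conservation law u_t + f(u)_x = 0.

def hll_flux(uL, uR, fL, fR, sL, sR):
    """HLL numerical flux from left/right states, fluxes and wave speeds."""
    if sL >= 0.0:
        return fL           # all waves move right: pure upwind (left) flux
    if sR <= 0.0:
        return fR           # all waves move left: pure upwind (right) flux
    return (sR * fL - sL * fR + sL * sR * (uR - uL)) / (sR - sL)

# Linear advection with speed a = 1 (f(u) = u): HLL reduces to upwinding.
f_adv = hll_flux(2.0, 1.0, 2.0, 1.0, 1.0, 1.0)
# Burgers-like case (f(u) = u^2/2) with waves in both directions.
f_mid = hll_flux(0.0, 1.0, 0.0, 0.5, -1.0, 1.0)
```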

  16. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations of gases. Elaborate models (such as narrow-band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases containing high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of the processor's resources available and would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it can be done with affordable additional resources compared to what is necessary to perform simulations of fluid dynamics alone. Program summary: Program title: GPU4RE. Catalogue identifier: ADZY_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 62 776. No. of bytes in distributed program, including test data, etc.: 1 513 247. Distribution format: tar.gz. Programming language: C++. Computer: x86 PC. Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C…
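To make the term concrete: a line-by-line computation sums an individual profile for every spectral line at every wavenumber, which is why it parallelizes so well. A minimal sketch with Lorentz profiles (the line data are hypothetical and this is not the GPU4RE code):

```python
# Illustrative sketch of line-by-line absorption: sum a Lorentz profile for
# each spectral line (centre nu0, strength S, half-width g) at wavenumber nu.
import math

def lorentz(nu, nu0, S, g):
    """Lorentz line profile contribution at wavenumber nu."""
    return S * g / (math.pi * ((nu - nu0) ** 2 + g ** 2))

def absorption_coefficient(nu, lines):
    """lines: list of (nu0, S, g) tuples; one profile evaluation per line."""
    return sum(lorentz(nu, nu0, S, g) for nu0, S, g in lines)

lines = [(2000.0, 1.0, 0.1), (2000.5, 0.5, 0.1)]   # hypothetical lines
k = absorption_coefficient(2000.0, lines)
```

On a GPU, each (wavenumber, line) pair is an independent evaluation, which is the data parallelism the abstract credits for its speedup.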

  17. REDUNDANT ELECTRIC MOTOR DRIVE CONTROL UNIT DESIGN USING AUTOMATA-BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Yuri Yu. Yankin

    2014-11-01

    Implementation of a redundant unit for motor drive control based on programmable logic devices is discussed. A continuous redundancy method is used. As compared to segregated standby redundancy and whole-system standby redundancy, this method preserves all unit functions in case of redundancy and allows continuous monitoring of the major and redundant elements. An example of such a unit is given. The electric motor drive control channel block diagram contains two control units, the major and the redundant, as well as four power supply units. The control units were programmed using an automata-based approach. An electric motor drive control channel model was developed; it provides complex simulation of the control state machine and the power converter. Thanks to the visibility and hierarchy of finite state machines, debugging time was shortened as compared to traditional programming. A control state machine description in a hardware description language is required for its synthesis with the FPGA vendor's design software; this description was generated automatically by the MATLAB software package. To verify the results, two prototype control units, two prototype power supply units and a device mock-up were developed and manufactured, and the units were installed in the mock-up. The prototype units were created in accordance with the requirements applied to deliverable hardware. Control channel simulation and test results in the fault-free state and during imitation of a major-element fault are presented. The automata-based approach made it possible to observe and debug control state machine transitions during simulation of the transient processes occurring at imitation of faults. The results of this work can be used in the development of fault-tolerant electric motor drive control channels.
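The automata-based style described above treats the controller as an explicit finite state machine, with a monitor that switches from the major to the redundant control unit on a detected fault while preserving state. The state and event names in this sketch are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of an automata-based drive controller with continuous
# redundancy: a fault in the major unit switches the active unit without
# restarting the control state machine.

class DriveControlFSM:
    def __init__(self):
        self.state = "IDLE"
        self.active_unit = "MAJOR"

    def step(self, event):
        if event == "FAULT_MAJOR" and self.active_unit == "MAJOR":
            self.active_unit = "REDUNDANT"   # takeover: state is preserved
            return self.state
        transitions = {
            ("IDLE", "START"): "RUN",
            ("RUN", "STOP"): "IDLE",
            ("RUN", "OVERCURRENT"): "TRIP",
            ("TRIP", "RESET"): "IDLE",
        }
        self.state = transitions.get((self.state, event), self.state)
        return self.state

fsm = DriveControlFSM()
fsm.step("START")         # controller enters RUN
fsm.step("FAULT_MAJOR")   # redundant unit takes over; RUN state preserved
```

Writing the transition table explicitly, as here, is what makes the transitions observable and debuggable during simulated faults.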

  18. Hygienic Design in the Food Processing Industry

    DEFF Research Database (Denmark)

    Hilbert, Lisbeth Rischel; Hjelm, M.

    2001-01-01

    Bacterial adhesion and biofilm formation are of major concern in the food production and processing industry. In 1998 a Danish co-operation programme under the title Centre for Hygienic Design was funded to combine the skills of universities, research institutes and industry to focus on the following… goals: • Development of materials with low bioadhesion (defined as resistance towards biofilm formation), and in this context evaluation of quantitative techniques for examination of bioadhesion • Improvement of surface material hygienic lifetime by selecting surface materials in combination… approach is to focus on surface material hygienic lifetime. Tests of this are made in an industrial test loop run by biotechnology researchers in co-operation with materials producers and a food producer to compare biofilm formation, cleanability and deterioration of different rubber and plastic materials…

  19. Optimal design issues of a gas-to-liquid process

    Energy Technology Data Exchange (ETDEWEB)

    Rafiee, Ahmad

    2012-07-01

    Interest in Fischer-Tropsch (FT) synthesis is increasing rapidly due to recent improvements of the technology, the clean-burning fuels (low sulphur, low aromatics) derived from the FT process, and the realization that the process can be used to monetize stranded natural gas resources. The economics of GTL plants depend very much on the natural gas price; there is a strong incentive to reduce the investment cost, and in addition there is a need to improve energy efficiency and carbon efficiency. A model is constructed based on the information available in the open literature and is used to simulate the GTL process with the UNISIM DESIGN process simulator. In the FT reactor with a cobalt-based catalyst, CO2 is inert and will accumulate in the system. Five placements of the CO2 removal unit in the GTL process are evaluated from an economic point of view. For each alternative, the process is optimized with respect to the steam-to-carbon ratio, purge ratio of light ends, amount of tail gas recycled to the syngas and FT units, reactor volume, and CO2 recovery. The results show that the carbon and energy efficiencies and the annual net cash flow of the process with or without a CO2 removal unit are not significantly different, and there is not much to gain by removing CO2 from the process. It is optimal to recycle about 97% of the light ends to the process (mainly to the FT unit) to obtain higher conversion of CO and H2 in the reactor. Different syngas configurations in a gas-to-liquid (GTL) plant are studied, including an auto-thermal reformer (ATR), a combined reformer, and a series arrangement of a gas heated reformer (GHR) and ATR. The Fischer-Tropsch (FT) reactor is based on a cobalt catalyst, and the degrees of freedom are the steam-to-carbon ratio, purge ratio of light ends, amount of tail gas recycled to the synthesis gas (syngas) and Fischer-Tropsch (FT) synthesis units, and reactor volume. The production rate of liquid hydrocarbons is maximized for each syngas configuration. Installing a steam…
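For reference, the two figures of merit being optimized above can be written down very simply; the numbers in this sketch are hypothetical, not taken from the thesis:

```python
# Hypothetical illustration of the two efficiencies a GTL design trades off.

def carbon_efficiency(c_in_liquid_product, c_in_ng_feed):
    """Fraction of feed carbon (e.g. kmol/h) leaving in liquid hydrocarbons."""
    return c_in_liquid_product / c_in_ng_feed

def energy_efficiency(lhv_products, lhv_feed):
    """Fraction of the feed's heating value recovered in the products."""
    return lhv_products / lhv_feed

ce = carbon_efficiency(770.0, 1000.0)   # hypothetical carbon flows
ee = energy_efficiency(585.0, 900.0)    # hypothetical heating values, MW
```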

  20. Active microchannel fluid processing unit and method of making

    Science.gov (United States)

    Bennett, Wendy D [Kennewick, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA; Roberts, Gary L [West Richland, WA; Stewart, Donald C [Richland, WA; Tonkovich, Annalee Y [Pasco, WA; Zilka, Jennifer L [Pasco, WA; Schmitt, Stephen C [Dublin, OH; Werner, Timothy M [Columbus, OH

    2001-01-01

    The present invention is an active microchannel fluid processing unit and method of making, both relying on having (a) at least one inner thin sheet; (b) at least one outer thin sheet; (c) defining at least one first sub-assembly for performing at least one first unit operation by stacking a first of the at least one inner thin sheet in alternating contact with a first of the at least one outer thin sheet into a first stack and placing an end block on the at least one inner thin sheet, the at least one first sub-assembly having at least a first inlet and a first outlet; and (d) defining at least one second sub-assembly for performing at least one second unit operation either as a second flow path within the first stack or by stacking a second of the at least one inner thin sheet in alternating contact with second of the at least one outer thin sheet as a second stack, the at least one second sub-assembly having at least a second inlet and a second outlet.

  1. Design of nanomaterial synthesis by aerosol processes.

    Science.gov (United States)

    Buesser, Beat; Pratsinis, Sotiris E

    2012-01-01

    Aerosol synthesis of materials is a vibrant field of particle technology and chemical reaction engineering. Examples include the manufacture of carbon blacks, fumed SiO2, pigmentary TiO2, ZnO vulcanizing catalysts, filamentary Ni, and optical fibers, materials that impact transportation, construction, pharmaceuticals, energy, and communications. Parallel to this, development of novel, scalable aerosol processes has enabled synthesis of new functional nanomaterials (e.g., catalysts, biomaterials, electroceramics) and devices (e.g., gas sensors). This review provides an access point for engineers to the multiscale design of aerosol reactors for the synthesis of nanomaterials using continuum, mesoscale, molecular dynamics, and quantum mechanics models spanning 10 and 15 orders of magnitude in length and time, respectively. Key design features are the rapid chemistry; the high particle concentrations but low volume fractions; the attainment of a self-preserving particle size distribution by coagulation; the ratio of the characteristic times of coagulation and sintering, which controls the extent of particle aggregation; and the narrowing of the aggregate primary particle size distribution by sintering.

  3. Optimal design of a phosgene recovery system for 3-chloro-4-methyl phenyl isocyanate process units

    Institute of Scientific and Technical Information of China (English)

    毕荣山; 李明

    2014-01-01

    The traditional method of recovering off-gas from 3-chloro-4-methyl phenyl isocyanate (CMPI) production units by condensation has the following drawbacks: it cannot guarantee complete recovery of the phosgene, and once phosgene enters the tail-gas destruction system it not only increases the cost of the product but also reduces the safety of the system and increases caustic consumption and wastewater discharge. The off-gas recovery systems studied in the literature for toluene diisocyanate units cannot be applied to atmospheric-pressure CMPI units because of the different reaction conditions. Taking an atmospheric-pressure reaction unit producing 2000 t/a of CMPI as an example, its phosgene recovery system was simulated, analysed and optimally designed: under the condition of meeting the process requirements, the optimal number of theoretical plates was determined to be 12 and the optimal feed rate of the toluene absorbent to be 780 kg/h. The phosgene recovery systems of the atmospheric-pressure and high-pressure units were then compared: the atmospheric-pressure unit requires more absorbent, but no intermediate cooler needs to be added to the column, and a theoretical explanation for this is given. Considering the features common to isocyanate production processes, the conclusions of this work can be extended to other similar isocyanate units.

  4. Laser processing with specially designed laser beam

    Science.gov (United States)

    Asratyan, A. A.; Bulychev, N. A.; Feofanov, I. N.; Kazaryan, M. A.; Krasovskii, V. I.; Lyabin, N. A.; Pogosyan, L. A.; Sachkov, V. I.; Zakharyan, R. A.

    2016-04-01

    The possibility of using laser systems to form beams with special spatial configurations has been studied. The laser systems applied had a self-conjugate cavity based on the elements of copper vapor lasers (LT-5Cu, LT-10Cu, LT-30Cu) with an average power of 5, 10, or 30 W. The active elements were pumped by current pulses of 80-100 ns duration; the duration of the laser generation pulses was up to 25 ns. The generator unit included an unstable cavity in which one reflector was a special mirror with a reflecting coating. Various original optical schemes were used to explore the spatial configurations and energy characteristics of the output laser beams in their interaction with micro- and nanoparticles fabricated from various materials. In these experiments, the beam dimensions of the obtained zones varied from 0.3 to 5 µm, which is comparable with the minimum permissible dimensions determined by the optical elements applied. This method is useful in transforming a large amount of information at the laser pulse repetition rate of 10-30 kHz. It was possible to realize high-precision micromachining and microfabrication of microscale details by direct writing, cutting and drilling (with cutting widths and through-hole diameters ranging from 3 to 100 µm) and to produce deep, intricate and narrow microscale grooves on substrate surfaces of metals and nonmetallic materials. This system is used for producing high-quality microscale details without moving the object under treatment. It can also be used for microcutting and microdrilling in a variety of metals, such as molybdenum, copper and stainless steel, with thicknesses of up to 300 µm, and in nonmetals such as silicon, sapphire and diamond with thicknesses ranging from 10 µm to 1 mm, with different thermal parameters and a specially designed laser beam.

  5. Frida integral field unit opto-mechanical design

    Science.gov (United States)

    Cuevas, Salvador; Eikenberry, Stephen S.; Bringas, Vicente; Corrales, Adi; Espejo, Carlos; Lucero, Diana; Rodriguez, Alberto; Sánchez, Beatriz; Uribe, Jorge

    2012-09-01

    FRIDA (inFRared Imager and Dissector for the Adaptive optics system of the Gran Telescopio Canarias) has been designed as a cryogenic, diffraction-limited instrument that will offer broad- and narrow-band imaging and integral field spectroscopy (IFS). Both the imaging and IFS observing modes will use the same Teledyne 2K×2K detector. The instrument will be installed at the Nasmyth B station, behind the GTC Adaptive Optics system. FRIDA will provide the IFS mode using a 30-slice integral field unit (IFU). The IFU design is based on the University of Florida FISICA, where the mirror arrays are diamond-turned on monolithic metal blocks. The FRIDA IFU consists mainly of 3 mirror blocks with 30 spherical mirrors each. It also has a Schwarzschild relay based on two off-axis spherical mirrors and an afocal system of two off-axis parabolic mirrors. Including two insertion mirrors, the IFU holds 96 metal mirrors. Each block or individual mirror is attached to its own mechanical mounting. In order to study beam interference with mechanical parts, ghosts and scattered light, an iterative opto-mechanical modeling procedure was developed. In this work this iterative modeling is described, including pictures showing actual ray tracing on the opto-mechanical components.

  6. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
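The X-engine computation itself is compact: for every antenna pair (baseline), accumulate the product of one antenna's complex samples with the conjugate of the other's. A sketch independent of the GPU implementation:

```python
# Sketch of the X-engine's core arithmetic: visibilities for every antenna
# pair, vis(i, j) = sum over time of s_i * conj(s_j).

def xengine(samples):
    """samples[i] is the list of complex voltage samples from antenna i."""
    n = len(samples)
    vis = {}
    for i in range(n):
        for j in range(i, n):        # upper triangle: baselines incl. autos
            vis[(i, j)] = sum(a * b.conjugate()
                              for a, b in zip(samples[i], samples[j]))
    return vis

v = xengine([[1 + 1j, 2 + 0j], [0 + 1j, 1 + 1j]])   # 2 antennas, 2 samples
```

The O(N²) growth of the pair loop with array size N is what makes "Large-N" correlators computationally demanding.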

  7. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    Purpose/Objective: For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method… respiration phases in a free-breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results: On an Intel Core 2 CPU at 2.4 GHz each… registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark…
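For reference, the core of the Horn and Schunck method is the iterative flow update, shown here in its standard 2-D form (the implementation above extends it to 3-D):

```latex
u^{k+1} = \bar{u}^{k} - \frac{I_x\left(I_x\bar{u}^{k} + I_y\bar{v}^{k} + I_t\right)}{\alpha^2 + I_x^2 + I_y^2},
\qquad
v^{k+1} = \bar{v}^{k} - \frac{I_y\left(I_x\bar{u}^{k} + I_y\bar{v}^{k} + I_t\right)}{\alpha^2 + I_x^2 + I_y^2}
```

where $\bar{u}$ and $\bar{v}$ are local averages of the flow components, $I_x$, $I_y$, $I_t$ are image derivatives, and $\alpha$ weights the smoothness term. Each voxel's update depends only on its neighbours' previous values, which is what makes the method parallel-friendly on a GPU.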

  8. Fast free-form deformation using graphics processing units.

    Science.gov (United States)

    Modat, Marc; Ridgway, Gerard R; Taylor, Zeike A; Lehmann, Manja; Barnes, Josephine; Hawkes, David J; Fox, Nick C; Ourselin, Sébastien

    2010-06-01

    A large number of algorithms have been developed to perform non-rigid registration, a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time-consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
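For background, free-form deformation displaces each point by a cubic-B-spline-weighted sum of the displacements of the surrounding control points; the per-point work is dominated by evaluating the basis weights, which is why the method maps well to a GPU thread per voxel. A sketch of the 1-D cubic B-spline weights:

```python
# Background sketch: cubic B-spline basis weights used in free-form
# deformation, for a point at normalized offset u in [0, 1) between
# control-point intervals.

def bspline_weights(u):
    """Weights of the 4 surrounding control points (partition of unity)."""
    return [
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ]

w = bspline_weights(0.5)   # weights always sum to 1
```

In 3-D the weight is a product of three such 1-D factors, so each voxel reads 4×4×4 control displacements.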

  9. Design of Heat Exchanger Network for VCM Distillation Unit Using Pinch Technology

    Directory of Open Access Journals (Sweden)

    VISHAL G. BOKAN

    2015-06-01

    Full Text Available In process industries, heat exchanger networks (HENs) represent an important part of the plant structure; their purpose is to maximize heat recovery and thereby lower overall plant costs. During operation of any HEN, the chief aim is the best performance of the network, which matters all the more given present fuel shortages, on which industrial utilities largely depend. Process integration is a technique for integrating heat within a loop so as to optimize a given process and minimize its heating and cooling loads. In the present study of heat integration for a VCM (vinyl chloride monomer) distillation unit, the HEN was designed using the Aspen Energy Analyzer V8.0 software, which implements a methodology for HEN synthesis based on pinch technology. Several heat integration networks were designed with different ΔTmin values and compared on total annualized cost to obtain the optimal design. The network with a ΔTmin of 9 °C is the most economical: the largest energy savings are obtained with the appropriate use of utilities (15.3764% saved for hot utilities and 47.52% for cold utilities compared with the current plant configuration), total operating cost falls by 18.333%, and the calculated payback period for the new design is 3.15 years. These savings could be realized through a plant revamp adding two heat exchangers. The improvements achieved with this technique come not from advanced unit operations but from the generation of a heat integration scheme. The pinch design method can thus deliver good designs quickly and with minimum data.
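
    The payback-period figure this record reports is the standard undiscounted ratio of revamp cost to annual utility savings. A minimal sketch follows; the capital-cost and savings figures are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper:

    ```python
    def payback_period(capital_cost, annual_savings):
        """Simple (undiscounted) payback period used in revamp economics."""
        return capital_cost / annual_savings

    # Hypothetical figures: two new heat exchangers costing 63,000
    # against 20,000/yr in utility savings.
    years = payback_period(63_000, 20_000)  # -> 3.15 years
    ```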

  10. Architectural and performance considerations for a 10⁷-instruction/sec optoelectronic central processing unit.

    Science.gov (United States)

    Arrathoon, R; Kozaitis, S

    1987-11-01

    Architectural considerations for a multiple-instruction, single-data-based optoelectronic central processing unit operating at 10⁷ instructions per second are detailed. Central to the operation of this device is a giant fiber-optic content-addressable memory in a programmable logic array configuration. The design includes four instructions and emphasizes the fan-in and fan-out capabilities of optical systems. Interconnection limitations and scaling issues are examined.

  11. On the design of chemical processes with improved controllability characteristics

    NARCIS (Netherlands)

    Meeuse, F.M.

    2003-01-01

    Traditionally, process design and control system design are carried out sequentially. The premise underlying this sequential approach is that the decisions made in the process design phase do not limit the control design. However, it is generally known that incongruent designs can occur quite

  12. On the design of chemical processes with improved controllability characteristics

    NARCIS (Netherlands)

    Meeuse, F.M.

    2003-01-01

    Traditionally, process design and control system design are carried out sequentially. The premise underlying this sequential approach is that the decisions made in the process design phase do not limit the control design. However, it is generally known that incongruent designs can occur quite easily

  13. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
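
    The all-pairs distance calculation that the article uses as its running example can be sketched in vectorized form. This is a stand-in, not the authors' CUDA kernel: numpy broadcasting plays the role of the GPU's data parallelism, and the function name is illustrative:

    ```python
    import numpy as np

    def all_pairs_distance(X):
        """Euclidean distance between every pair of rows of X (n x d).

        Broadcasting computes all n*n distances at once, mirroring the
        data-parallel structure a CUDA kernel would exploit.
        """
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, evaluated for all pairs
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding

    X = np.array([[0.0, 0.0], [3.0, 4.0]])
    D = all_pairs_distance(X)
    # D[0, 1] is the distance between (0,0) and (3,4): 5.0
    ```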

  14. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger backgrounds. As experiment complexity, accelerator backgrounds, and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time consuming pattern recognition problem, generall...

  15. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  16. Research on Key Technologies of Unit-Based CNC Machine Tool Assembly Design

    Directory of Open Access Journals (Sweden)

    Zhongqi Sheng

    2014-01-01

    Full Text Available Assembly accounts for the largest workload and the most time consumed during the product design and manufacturing process. CNC machine tools are key basic equipment in the manufacturing industry, and research on their assembly design technologies has both theoretical significance and practical value. This study established a simplified ASRG for a CNC machine tool. The connections between parts, the semantic information of transmission, and the geometric constraint information were quantified into an assembly connection strength that depicts the level of assembly difficulty. Transmissibility based on trust relationships was applied to the assembly connection strength. Assembly units were partitioned based on assembly connection strength, and interferential assembly units were identified and revised. Assembly sequence planning and optimization, both within and between assembly units, was conducted using a genetic algorithm. Taking a certain type of high-speed CNC turning center as an example, this paper explores assembly modeling, assembly unit partition, and assembly sequence planning and optimization, and realizes an optimized assembly sequence for the headstock of the CNC machine tool.

  17. Custom Unit Pump Design and Testing for the EVA PLSS

    Science.gov (United States)

    Schuller, Michael; Kurwitz, Cable; Goldman, Jeff; Morris, Kim; Trevino, Luis

    2009-01-01

    This paper describes the effort by the Texas Engineering Experiment Station (TEES) and Honeywell for NASA to design and test a pre-flight prototype pump for use in the Extra-vehicular activity (EVA) portable life support subsystem (PLSS). Major design decisions were driven by the need to reduce the pump's mass, power, and volume compared to the existing PLSS pump. In addition, the pump must accommodate a much wider range of abnormal conditions than the existing pump, including vapor/gas bubbles and increased pressure drop when employed to cool two suits simultaneously. A positive displacement, external gear type pump was selected because it offers the most compact and highest efficiency solution over the required range of flow rates and pressure drops. An additional benefit of selecting a gear pump design is that it is self-priming and capable of ingesting non-condensable gas without becoming air locked. The chosen pump design consists of a 28 V DC, brushless, sealless, permanent magnet motor driven, external gear pump that utilizes a Honeywell development that eliminates the need for magnetic coupling. Although the planned flight unit will use a sensorless motor with custom designed controller, the pre-flight prototype to be provided for this project incorporates Hall effect sensors, allowing an interface with a readily available commercial motor controller. This design approach reduced the cost of this project and gives NASA more flexibility in future PLSS laboratory testing. The pump design was based on existing Honeywell designs, but incorporated features specifically for the PLSS application, including all of the key features of the flight pump. Testing at TEES verified that the pump meets the design requirements for range of flow rates, pressure drop, power consumption, working fluid temperature, operating time, gas ingestion, and restart capability under both ambient and vacuum conditions. The pump operated between 40 and 240 lbm/hr flowrate, 35 to 100 F

  18. Universal Design: Process, Principles, and Applications

    Science.gov (United States)

    Burgstahler, Sheryl

    2009-01-01

    Designing any product or environment involves the consideration of many factors, including aesthetics, engineering options, environmental issues, safety concerns, industry standards, and cost. Typically, designers focus their attention on the average user. In contrast, universal design (UD), according to the Center for Universal Design, is…

  19. Design and intensification of industrial DADPM process

    NARCIS (Netherlands)

    Benneker, Anne Maria; van der Ham, Aloysius G.J.; de Waele, B.; de Zeeuw, A.J.; van den Berg, Henderikus

    2016-01-01

    Process intensification is an essential method for the improvement of energy and material efficiency, waste reduction and simplification of industrial processes. In this research a Process Intensification methodology developed by Lutze, Gani and Woodley at the Computer Aided Process Engineering

  20. Interoperability of Design Intent in Integrated Product and Process Design

    Institute of Scientific and Technical Information of China (English)

    M; W; Fu; W; F; Lu; A; Y; C; Nee; S; K; Ong

    2002-01-01

    Concurrent engineering is currently an overwhelming trend in product development since it takes the down-stream product development issues and product life-cycle issues into consideration in the up-front design process. At this stage, evaluation, verification and validation of design concepts and design schemes is critical to the right-to-market product development. Identification and extraction of design intent from the part digital mock-ups in different data formats and created in different CAD syste...

  1. On the design of chemical processes with improved controllability characteristics

    OpenAIRE

    Meeuse, F.M.

    2003-01-01

    Traditionally, process design and control system design are carried out sequentially. The premise underlying this sequential approach is that the decisions made in the process design phase do not limit the control design. However, it is generally known that incongruent designs can occur quite easily. In the literature two different classes of approaches are being described that consider the control performance of the design alternatives from the earliest design stages: (i) Anticipating sequen...

  2. CONSTRUCTION METHOD OF KNOWLEDGE MAP BASED ON DESIGN PROCESS

    Institute of Scientific and Technical Information of China (English)

    SU Hai; JIANG Zuhua

    2007-01-01

    Due to the increasing amount and complexity of knowledge in product design, a knowledge map based on the design process is presented as a tool for reusing the product design process and promoting the sharing of product design knowledge. The relationship between design task flow and knowledge flow is discussed; a knowledge organizing method based on design task decomposition and a visualization method to support knowledge retrieval and sharing in product design are proposed. A knowledge map system to manage the knowledge in the product design process is built with Visual C++ and SVG. Finally, a brief case study illustrates the construction and application of a knowledge map in fuel pump design.

  3. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBlas and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (General Purpose GPU) processing of array radar data is possible with relatively low-cost commercial GPUs.

  4. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1,000×1,000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
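
    The core LSA step this record accelerates, reducing a term-document matrix to k latent dimensions via truncated SVD, can be sketched in a few lines. This is a CPU sketch using numpy's SVD, not the authors' GPU code; the function name and the toy matrix are illustrative:

    ```python
    import numpy as np

    def lsa(term_doc, k):
        """Project documents into a k-dimensional latent semantic space.

        The Singular Value Decomposition is the computational bottleneck
        that the record offloads to CUBLAS-style GPU routines.
        """
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        return (np.diag(s[:k]) @ Vt[:k]).T  # documents as rows in k-space

    # Toy 3-term x 3-document matrix
    A = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0]])
    docs_k = lsa(A, 2)
    # each of the 3 documents is now represented by a 2-dimensional vector
    ```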

  5. Bandwidth Enhancement between Graphics Processing Units on the Peripheral Component Interconnect Bus

    Directory of Open Access Journals (Sweden)

    ANTON Alin

    2015-10-01

    Full Text Available General purpose computing on graphics processing units is a new trend in high performance computing. Present day applications require office and personal supercomputers which are mostly based on many core hardware accelerators communicating with the host system through the Peripheral Component Interconnect (PCI) bus. Parallel data compression is a difficult topic but compression has been used successfully to improve the communication between parallel message passing interface (MPI) processes on high performance computing clusters. In this paper we show that special purpose compression algorithms designed for scientific floating point data can be used to enhance the bandwidth between two graphics processing unit (GPU) devices on the PCI Express (PCIe) 3.0 x16 bus in a homebuilt personal supercomputer (PSC).

  6. Research on an Intelligent Decision Support System for a Conceptual Innovation Design of Pumping Units Based on TRIZ

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Attention is concentrated on how to perform innovative design during the conceptual design of pumping units, and on how to enhance design efficiency and inspire creativity. Aiming at the shortcomings of conceptual design, introducing the theory of inventive problem solving (TRIZ) into mechanical product design to produce innovative ideas, and using advanced computer-aided techniques, an intelligent decision support system (IDSS) based on TRIZ (TRIZ-IDSS) has been constructed. The construction method, system structure, concept generation, and the decision-making and evaluation of the problem solving subsystem are discussed. The innovative conceptual design of pumping units indicates that the system can help engineers open up a new space of thinking, overcome thinking inertia, and put forward innovative design concepts. The system can also offer scientific instruction for the innovative design of mechanical products.

  7. Design of the Wendelstein 7-X inertially cooled Test Divertor Unit Scraper Element

    Energy Technology Data Exchange (ETDEWEB)

    Lumsdaine, Arnold, E-mail: lumsdainea@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Boscary, Jean [Max Planck Institute for Plasma Physics, Garching (Germany); Fellinger, Joris [Max Planck Institute for Plasma Physics, Greifswald (Germany); Harris, Jeff [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hölbe, Hauke; König, Ralf [Max Planck Institute for Plasma Physics, Greifswald (Germany); Lore, Jeremy; McGinnis, Dean [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Neilson, Hutch; Titus, Peter [Princeton Plasma Physics Lab, Princeton, NJ (United States); Tretter, Jörg [Max Planck Institute for Plasma Physics, Garching (Germany)

    2015-10-15

    Highlights: • The justification for the installation of the Test Divertor Unit Scraper Element is given. • Specially designed operational scenarios for the component are presented. • Plans for the design of the component are detailed. - Abstract: The Wendelstein 7-X stellarator is scheduled to begin operation in 2015, and to achieve full power steady-state operation in 2019. Computational simulations have indicated that for certain plasma configurations in the steady-state operation, the ends of the divertor targets may receive heat fluxes beyond their qualified technological limit. To address this issue, a high heat-flux “scraper element” (HHF-SE) has been designed that can protect the sensitive divertor target region. The surface profile of the HHF-SE has been carefully designed to meet challenging engineering requirements and severe spatial limitations through an iterative process involving physics simulations, engineering analysis, and computer aided design rendering. The desire to examine how the scraper element interacts with the plasma, both in terms of how it protects the divertor, and how it affects the neutral pumping efficiency, has led to the consideration of installing an inertially cooled version during the short pulse operation phase. This Test Divertor Unit Scraper Element (TDU-SE) would replicate the surface profile of the HHF-SE. The design and instrumentation of this component must be completed carefully in order to satisfy the requirements of the machine operation, as well as to support the possible installation of the HHF-SE for steady-state operation.

  8. Design and Application of the Sintering Fume Desulfurization Process at United Special Steel

    Institute of Scientific and Technical Information of China (English)

    王贺建; 王莹

    2015-01-01

    In order to reduce SO2 emissions and meet the national emission standard for the steel industry, lime-gypsum wet desulfurization technology was applied to the 2×230 m² sintering machines at Tianjin Tiangang United Special Steel Co., Ltd. (hereafter United Special Steel). The SO2 removal rate of the sintering-machine head fume exceeds 95%, and the average SO2 emission concentration is below 30 mg/m³. With all operating performance indices in good order, energy saving and emission reduction were achieved, the national emission standard was met, and considerable economic benefits were obtained.

  9. Design of educational artifacts as support to learning process.

    Science.gov (United States)

    Resende, Adson Eduardo; Vasconcelos, Flávio Henrique

    2012-01-01

    The aim of this paper is to identify utilization schemes developed by students and teachers in their interaction with educational workstations in the electronic measurement and instrumentation laboratory at the Department of Electrical Engineering in the Federal University of Minas Gerais (UFMG), Brazil. These schemes were then used to design a new workstation. For this, it was important to bear in mind that the mentioned artifacts contain two key characteristics: (1) one from the designers themselves, resulting from their experience and their technical knowledge of what they are designing and (2) the experience from users and the means through which they take advantage of and develop these artifacts, in turn rendering them appropriate to perform the proposed task - the utilization schemes developed in the process of mediation between the user and the artifact. The satisfactory fusion of these two points makes these artifacts a functional unit - the instruments. This research aims to demonstrate that identifying the utilization schemes by taking advantage of user experience, and incorporating this within the design, facilitates its appropriation and, consequently, its efficiency as an instrument of learning.

  10. 77 FR 38857 - Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal...

    Science.gov (United States)

    2012-06-29

    ... COMMISSION Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal..., Inspection, and Testing Criteria for Air Filtration and Adsorption Units of Normal Atmosphere Cleanup Systems..., entitled, ``Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of...

  11. Computer simulation for designing waste reduction in chemical processing

    Energy Technology Data Exchange (ETDEWEB)

    Mallick, S.K. [Oak Ridge Inst. for Science and Technology, TN (United States); Cabezas, H.; Bare, J.C. [Environmental Protection Agency, Cincinnati, OH (United States)

    1996-12-31

    A new methodology has been developed for implementing waste reduction in the design of chemical processes using computer simulation. The methodology is based on a generic pollution balance around a process. For steady state conditions, the pollution balance equation is used as the basis to define a pollution index with units of pounds of pollution per pound of products. The pollution balance has been modified by weighting the mass of each pollutant by a chemical ranking of environmental impact. The chemical ranking expresses the well known fact that not all chemicals have the same environmental impact, e.g., not all chemicals are equally toxic. Adding the chemical ranking effectively converts the pollutant mass balance into a balance over environmental impact. A modified pollution index, or impact index, with units of environmental impact per mass of products is derived from the impact balance. The impact index is a measure of the environmental effects due to the waste generated by a process. It is extremely useful when comparing the effect of the pollution generated by alternative processes or process conditions in the manufacture of any given product. The following three different schemes for the chemical ranking have been considered: (i) no ranking, i.e., considering that all chemicals have the same environmental impact; (ii) a simple numerical ranking of wastes from 0 to 3 according to the authors' judgment of the impact of each chemical; and (iii) ranking wastes according to a scientifically derived combined index of human health and environmental effects. Use of the methodology is illustrated with an example of synthetic ammonia production. 3 refs., 2 figs., 1 tab.
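
    The impact index the record defines, pollutant masses weighted by an impact ranking, divided by product mass, is a one-line computation. A minimal sketch follows; the stream masses and rankings are hypothetical numbers chosen to illustrate scheme (ii), not data from the paper:

    ```python
    def impact_index(pollutant_masses, impact_rankings, product_mass):
        """Environmental impact per unit mass of product.

        Weights each pollutant mass by its impact ranking (scheme (ii)
        in the record: 0-3 by judged severity) before dividing by the
        mass of products.
        """
        weighted = sum(m * r for m, r in zip(pollutant_masses, impact_rankings))
        return weighted / product_mass

    # Hypothetical plant streams: masses in lb, rankings 0-3
    idx = impact_index([10.0, 2.0, 5.0], [1, 3, 0], 100.0)  # -> 0.16
    ```

    With no ranking (scheme (i), all weights 1), the same function reduces to the plain pollution index of pounds of pollution per pound of product.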

  12. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the author's knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. Accuracy evaluation by comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement depending on the graphics card used, the problem size, and the precision when comparing to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.

  13. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
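
    A core kernel behind both direct and iterative sparse solvers is the sparse matrix-vector product. A minimal CSR-format sketch (plain numpy, not the authors' GPU code) shows why it parallelizes so well: every row's dot product is independent, so a GPU can assign one thread per row:

    ```python
    import numpy as np

    def csr_matvec(data, indices, indptr, x):
        """Sparse matrix-vector product y = A @ x with A in CSR format.

        Each row is computed independently -- the property that lets a GPU
        run thousands of these dot products simultaneously.
        """
        y = np.zeros(len(indptr) - 1)
        for row in range(len(y)):           # on a GPU: one thread per row
            start, end = indptr[row], indptr[row + 1]
            y[row] = np.dot(data[start:end], x[indices[start:end]])
        return y

    # 2x2 matrix [[2, 0], [1, 3]] stored in CSR form
    data = np.array([2.0, 1.0, 3.0])
    indices = np.array([0, 0, 1])
    indptr = np.array([0, 1, 3])
    y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0]))  # -> [2., 4.]
    ```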

  14. Toward automating the database design process

    Energy Technology Data Exchange (ETDEWEB)

    Asprey, P.L.

    1979-04-25

    One organization's approach to designing complex, interrelated databases is described. The problems encountered and the techniques developed are discussed. A set of software tools to aid the designer and to produce an initial database design directly is presented. 5 figures.

  15. Making values explicit during the design process

    NARCIS (Netherlands)

    Steen, M.G.D.; Poel, I. van de

    2012-01-01

    When people design products and services, they often do so to help realize specific values. Design is a value-driven activity, although the values often remain implicit and unarticulated. Here we reflect on a design-driven research project in which a series of innovative telecommunication, multimedi

  16. GENETIC ALGORITHM ON GENERAL PURPOSE GRAPHICS PROCESSING UNIT: PARALLELISM REVIEW

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2013-01-01

    Full Text Available The Genetic Algorithm (GA) is an effective and robust method for solving many optimization problems. However, it may take many runs (iterations) and much time to reach an optimal solution. The execution time needed to find the optimal solution also depends upon the niching technique applied to the evolving population. This paper surveys how various authors, researchers, and scientists have implemented GAs on GPGPUs (general purpose graphics processing units), with and without parallelism. Many problems have been solved on GPGPUs using GAs. GAs are easy to parallelize because of their SIMD nature and can therefore be implemented well on GPGPUs. Thus, speedup can definitely be achieved if the bottlenecks in GAs are identified and implemented effectively on the GPGPU. The paper reviews various applications solved using GAs on GPGPUs, with future scope in the area of optimization.
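
    The SIMD nature the review points to, evaluating and mutating the whole population with the same operations, can be seen in a minimal vectorized GA on the standard OneMax toy problem (maximize the number of 1-bits). This is an illustrative sketch, not any reviewed implementation; all names and parameters are chosen for the example:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def onemax_ga(n_bits=32, pop_size=64, generations=60, p_mut=0.02):
        """Minimal generational GA maximizing the number of 1-bits.

        Fitness, selection, crossover, and mutation are all applied to the
        whole population at once -- the SIMD-style structure that maps GAs
        naturally onto GPGPU hardware.
        """
        pop = rng.integers(0, 2, (pop_size, n_bits))
        for _ in range(generations):
            fit = pop.sum(axis=1)                  # evaluate all individuals
            # tournament selection: keep the fitter of two random candidates
            a, b = (rng.integers(0, pop_size, pop_size) for _ in range(2))
            parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
            # single-point crossover between consecutive parents
            cut = rng.integers(1, n_bits, pop_size)
            mask = np.arange(n_bits) < cut[:, None]
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # bit-flip mutation over the whole population
            flips = rng.random((pop_size, n_bits)) < p_mut
            pop = np.where(flips, 1 - children, children)
        return pop.sum(axis=1).max()

    best = onemax_ga()
    # best fitness approaches n_bits (32) as the population converges
    ```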

  17. Centralization of Intensive Care Units: Process Reengineering in a Hospital

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    2010-03-01

    Centralization of intensive care units (ICUs) is a concept that has been around for several decades, and the OECD countries have led the way in adopting it in their operations. Singapore Hospital was built in 1981, before the concept of ICU centralization took off. The hospital's ICUs were never centralized and were spread across eight different blocks according to the specializations they were associated with. Recognizing the benefits of centralization, the hospital acknowledges the importance of having a centralized ICU to better handle major disasters. Using simulation models, this paper attempts to study the feasibility of centralizing the ICUs in Singapore Hospital, subject to space constraints. The results will prove helpful to those who consider reengineering the intensive care process in hospitals.
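
    The pooling benefit that motivates centralization can be sketched with a simple event-driven loss model: patients arrive at random, occupy a bed for an exponentially distributed stay, and are diverted when all beds are busy. Comparing two separate 4-bed units against one pooled 8-bed unit with the same total demand shows why a single centralized ICU diverts fewer patients. All rates here are invented for illustration, not taken from the study:

```python
import heapq, random

def icu_diversion_rate(beds, lam, mu, n_patients, seed=7):
    """M/M/c loss-system sketch of an ICU: arrivals at rate lam,
    stays drawn from Exp(mu); a patient is diverted if no bed is free."""
    rng = random.Random(seed)
    t, busy, diverted = 0.0, [], 0     # busy: min-heap of discharge times
    for _ in range(n_patients):
        t += rng.expovariate(lam)                  # next arrival
        while busy and busy[0] <= t:               # release finished beds
            heapq.heappop(busy)
        if len(busy) < beds:
            heapq.heappush(busy, t + rng.expovariate(mu))
        else:
            diverted += 1
    return diverted / n_patients

# Two 4-bed ICUs each taking half the arrivals vs one pooled 8-bed ICU
split  = icu_diversion_rate(beds=4, lam=1.5, mu=0.5, n_patients=20000)
pooled = icu_diversion_rate(beds=8, lam=3.0, mu=0.5, n_patients=20000)
print(split, pooled)   # pooled unit diverts noticeably fewer patients
```

    A real feasibility study, as in the paper, would add space constraints, specialization of beds, and empirical arrival and stay distributions on top of a skeleton like this.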

  18. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
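
    The parallel decomposition exploited in such GPU codes can be sketched with the standard checkerboard Metropolis update for the 2D Ising model: sites of one sublattice colour have all their neighbours on the other colour, so an entire colour can be updated concurrently. A serial Python mock-up of the scheme (illustrative only, not the authors' code):

```python
import math, random

def checkerboard_sweep(spins, L, beta, rng):
    """One Metropolis sweep in two half-sweeps over sublattice colours.
    Within a colour every site is independent, so on a GPU each site
    of that colour gets its own thread."""
    for colour in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != colour:
                    continue
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nb          # flip energy cost
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    spins[i][j] = -spins[i][j]

rng = random.Random(0)
L, beta = 16, 0.6                 # below T_c: ordered phase expected
spins = [[1] * L for _ in range(L)]          # ordered "cold" start
for _ in range(300):
    checkerboard_sweep(spins, L, beta, rng)
m = abs(sum(map(sum, spins))) / L ** 2
print(round(m, 2))    # magnetization stays close to 1 below T_c
```

    The GPU versions in the paper add per-thread random-number streams and shared-memory tiling, but the colour decomposition is the essential idea.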

  19. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10-fold speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative to traditional CPU clusters for macromolecular simulations, and they can also be used as a basis to develop parallel GPU programs to further spee...
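
    The per-particle update at the heart of most MD codes is the velocity Verlet step, applied to every particle independently, which is why it parallelizes naturally as one GPU thread per particle. A minimal sketch on a single harmonic degree of freedom (not the paper's code); the near-conservation of energy is the standard sanity check:

```python
def velocity_verlet(x, v, force, dt, steps, m=1.0):
    """Velocity Verlet integration: positions, then forces, then
    velocities. In an MD code this body runs once per particle per
    step, with the force loop being the expensive, parallel part."""
    f = force(x)
    traj = []
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / m) * dt * dt
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
        traj.append((x, v))
    return traj

# Harmonic oscillator with k = 1: total energy should barely drift
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -x,
                       dt=0.01, steps=10000)
energies = [0.5 * v * v + 0.5 * x * x for x, v in traj]
drift = max(energies) - min(energies)
print(drift)    # tiny compared to the energy of 0.5
```

    The macromolecular case replaces the toy force with bonded and nonbonded interactions, whose pairwise evaluation dominates the GPU work.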

  20. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)
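
    The quantity tracked in the study, the dot product of two black-hole spin unit vectors, can be sampled as below; for isotropically distributed spins the dot product is uniform on [-1, 1], which is the distribution the abstract reports persisting to shortly before merger. This is a sketch for illustration only:

```python
import math, random

def random_unit_vector(rng):
    """Uniform direction on the unit sphere: z uniform on [-1, 1],
    azimuth uniform on [0, 2*pi). A standard sampling identity."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# For isotropic spins, S1 . S2 is uniform on [-1, 1]; its mean is 0
rng = random.Random(42)
samples = [dot(random_unit_vector(rng), random_unit_vector(rng))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
print(mean)    # close to 0 for a uniform distribution on [-1, 1]
```

    Each of the 300 million inspirals in the study is an independent evolution of such initial spins, which is what makes the problem so amenable to GPU batching.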

  1. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general-purpose computations. Here we show for the first time a possible application of the GPU for environmental studies, serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and transformation of radionuclides from a single point source during an accidental release. Our results show that the parallel implementation achieves typical acceleration values on the order of 80-120 times compared to a single-threaded CPU implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...
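
    The core of a stochastic Lagrangian particle model is an independent per-particle update: advection by the mean wind plus a Gaussian turbulent kick. That independence is what yields the large GPU speedups reported. A one-dimensional sketch with invented parameters (not the model in the paper):

```python
import random

def disperse(n_particles, steps, dt, u, K, seed=3):
    """Advect each particle by the mean wind u and add a random
    displacement with diffusivity K. Particles never interact, so a
    GPU can assign one thread per particle."""
    rng = random.Random(seed)
    sigma = (2.0 * K * dt) ** 0.5        # turbulent step size
    xs = [0.0] * n_particles             # point-source release at x = 0
    for _ in range(steps):
        for i in range(n_particles):
            xs[i] += u * dt + rng.gauss(0.0, sigma)
    return xs

xs = disperse(n_particles=5000, steps=100, dt=1.0, u=2.0, K=0.5)
mean = sum(xs) / len(xs)
print(mean)    # plume centre near u * steps * dt = 200
```

    A dispersion code for radionuclides adds 3D wind fields, deposition, and decay, but keeps exactly this per-particle structure.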

  2. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous components into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  3. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires...... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well...... as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards....
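
    An iterative method of the kind discussed can be written matrix-free, so that only matrix-vector products are needed and no dense matrix is ever stored on the memory-limited GPU. Below is a conjugate gradient sketch applied to a spring-mass chain like the paper's test case, assuming unit masses and unit spring constants (an illustrative setup, not the authors' formulation):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Matrix-free conjugate gradient for A x = b with A symmetric
    positive definite. Only matvec(p) is needed, so A can stay
    implicit -- the property that suits limited GPU memory."""
    x = [0.0] * len(b)
    r = b[:]                               # residual b - A x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Chain of masses coupled by unit springs: tridiagonal [-1, 2, -1]
def spring_matvec(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0)
                      - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

b = [1.0] * 50
x = conjugate_gradient(spring_matvec, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, spring_matvec(x)))
print(residual)    # residual of A x - b is tiny after convergence
```

    The ill-conditioning issue the abstract mentions shows up as slow CG convergence in late IPM iterations, which is why preconditioning is central to the techniques reviewed.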

  4. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  5. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  6. Graphics Processing Units and High-Dimensional Optimization.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A

    2010-08-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on board.
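
    The nonnegative matrix factorization example illustrates the parameter-separated structure the authors favor: the classic Lee-Seung multiplicative rules update every entry of W and H independently from a few matrix products. A pure-Python sketch on a tiny matrix (slow and illustrative; the GPU value comes from doing these elementwise updates in parallel at scale):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, r, iters=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
    Every entry update reads only matrix products, so all entries of
    a factor can be updated in parallel."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() for _ in range(r)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(n)] for i in range(r)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(r)] for i in range(m)]
    return W, H

# Rank-2 factorization of a small nonnegative matrix
V = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 1.0, 1.0]]
W, H = nmf(V, r=2)
WH = matmul(W, H)
err = max(abs(V[i][j] - WH[i][j]) for i in range(3) for j in range(3))
print(err)    # reconstruction error becomes small
```

    The updates never subtract, so nonnegativity is preserved automatically, a design choice that keeps every per-entry update a simple multiply-divide.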

  7. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its O(n^2) complexity. As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking-based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  8. Implementing wide baseline matching algorithms on a graphics processing unit.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
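
    The difference-of-Gaussians feature extractor cited here can be sketched in one dimension: blur the signal at two nearby scales and subtract, giving a band-pass response whose extrema and zero-crossings mark blob- and edge-like features. Each output sample reads only a small neighbourhood, which is why the filter parallelizes so well on a GPU. An illustrative sketch (the 2D GPU version tiles the image across thread blocks):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolution with edge-clamped boundaries; each output sample
    is independent, so a GPU computes one sample per thread."""
    r, n = len(kernel) // 2, len(signal)
    return [sum(kernel[r + j] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def difference_of_gaussians(signal, sigma, k=1.6, radius=8):
    """Blur at scales sigma and k*sigma, then subtract (band-pass)."""
    g1 = convolve(signal, gaussian_kernel(sigma, radius))
    g2 = convolve(signal, gaussian_kernel(k * sigma, radius))
    return [a - b for a, b in zip(g1, g2)]

# A step edge yields a zero-crossing in the DoG response at the edge
signal = [0.0] * 32 + [1.0] * 32
dog = difference_of_gaussians(signal, sigma=1.0)
print(dog[31], dog[32])   # opposite signs across the edge
```

    The GPU implementation in the paper computes a whole pyramid of such responses, one scale octave at a time.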

  9. Course Unit Design Based on Systematic Working Process--Take"Coating Pencil Hardness Measurement"as an example%基于工作过程系统化的课程单元设计--以《涂膜铅笔硬度测定》为例

    Institute of Scientific and Technical Information of China (English)

    林书乐; 刘莹; 叶志钧

    2016-01-01

    Curriculum development based on systematic work processes is an important direction in higher vocational education curriculum reform. Taking "Coating Pencil Hardness Measurement" as an example, this paper applies the systematic-work-process concept to the design of a teaching unit: the teaching content is modularized, and individual and comprehensive assessments are introduced to track and analyze how well students have mastered it. Practice shows that the teaching process described here helps students better master the measurement of coating pencil hardness. This approach to unit design is expected to be applicable to classroom teaching whose main content is procedural operating skills.

  10. Creativity Processes of Students in the Design Studio

    Science.gov (United States)

    Huber, Amy Mattingly; Leigh, Katharine E.; Tremblay, Kenneth R., Jr.

    2012-01-01

    The creative process is a multifaceted and dynamic path of thinking required to execute a project in design-based disciplines. The goal of this research was to test a model outlining the creative design process by investigating student experiences in a design project assignment. The study used an exploratory design to collect data from student…

  11. Design of voice coil motor dynamic focusing unit for a laser scanner.

    Science.gov (United States)

    Lee, Moon G; Kim, Gaeun; Lee, Chan-Woo; Lee, Soo-Hun; Jeon, Yongho

    2014-04-01

    Laser scanning systems have been used for material processing tasks such as welding, cutting, marking, and drilling. However, applications have been limited by the small range of motion and slow speed of the focusing unit, which carries the focusing optics. To overcome these limitations, a dynamic focusing system with a long travel range and high speed is needed. In this study, a dynamic focusing unit for a laser scanning system with a voice coil motor (VCM) mechanism is proposed to enable fast speed and a wide focusing range. The VCM has finer precision and higher speed than conventional step motors and a longer travel range than earlier lead zirconium titanate actuators. The system has a hollow configuration to provide a laser beam path. This also makes it compact and transmission-free and gives it low inertia. The VCM's magnetics are modeled using a permeance model. Its design parameters are determined by optimization using the Broyden-Fletcher-Goldfarb-Shanno method and a sequential quadratic programming algorithm. After the VCM is designed, the dynamic focusing unit is fabricated and assembled. The permeance model is verified by a magnetic finite element method simulation tool, Maxwell 2D and 3D, and by measurement data from a gauss meter. The performance is verified experimentally. The results show a resolution of 0.2 μm and travel range of 16 mm. These are better than those of conventional focusing systems; therefore, this focusing unit can be applied to laser scanning systems for good machining capability.

  12. Operation and Design of Diabatic Distillation Processes

    DEFF Research Database (Denmark)

    Bisgaard, Thomas

    nature of the modelling framework is favourable for benchmarking distillation column configurations. To further facilitate benchmarking of distillation column configurations, a conceptual design algorithm was formulated, which systematically addresses the selection of the design variables. The conceptual...... design of the heat-integrated distillation column configurations is challenging as a result of the increased number of decision variables compared to the CDiC. Finally, the model is implemented in Matlab and a database of the considered configurations, case studies, pure component properties, and binary...

  13. Launch Vehicle Design Process: Characterization, Technical Integration, and Lessons Learned

    Science.gov (United States)

    Blair, J. C.; Ryan, R. S.; Schutzenhofer, L. A.; Humphries, W. R.

    2001-01-01

    Engineering design is a challenging activity for any product. Since launch vehicles are highly complex and interconnected and have extreme energy densities, their design represents a challenge of the highest order. The purpose of this document is to delineate and clarify the design process associated with the launch vehicle for space flight transportation. The goal is to define and characterize a baseline for the space transportation design process. This baseline can be used as a basis for improving effectiveness and efficiency of the design process. The baseline characterization is achieved via compartmentalization and technical integration of subsystems, design functions, and discipline functions. First, a global design process overview is provided in order to show responsibility, interactions, and connectivity of overall aspects of the design process. Then design essentials are delineated in order to emphasize necessary features of the design process that are sometimes overlooked. Finally the design process characterization is presented. This is accomplished by considering project technical framework, technical integration, process description (technical integration model, subsystem tree, design/discipline planes, decision gates, and tasks), and the design sequence. Also included in the document are a snapshot relating to process improvements, illustrations of the process, a survey of recommendations from experienced practitioners in aerospace, lessons learned, references, and a bibliography.

  14. Learning design and feedback processes at scale

    DEFF Research Database (Denmark)

    Ringtved, Ulla L.; Miligan, Sandra; Corrin, Linda

    2016-01-01

    design and would benefit from learning analytics support? What is the character of analytics that can be deployed to help deliver good design of online learning platforms? What are the theoretical and pedagogical bases inherent in different analytics designs? These and other questions will be examined......Design for teaching in scaled courses is shifting away from replication of the traditional on-campus or online teaching-learning relationship towards exploiting the distinctive characteristics and potentials of that environment to transform both teaching and learning. This involves consideration

  15. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00384270; The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of the High Level Trigger. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage Board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  16. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of the High Level Trigger. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage Board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  17. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). 
This paper has several goals regarding distributed processing

  18. 12 CFR 741.204 - Maximum public unit and nonmember accounts, and low-income designation.

    Science.gov (United States)

    2010-01-01

    ... low-income designation. 741.204 Section 741.204 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION... Unions § 741.204 Maximum public unit and nonmember accounts, and low-income designation. Any credit union...) Obtain a low-income designation in order to accept nonmember accounts, other than from public units...

  19. 76 FR 82323 - Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units

    Science.gov (United States)

    2011-12-30

    ... COMMISSION Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units AGENCY: Nuclear...-1274, ``Design, Inspection, and Testing Criteria for Air Filtration and Adsorption Units of....'' This guide applies to the design, inspection, and testing of air filtration and iodine adsorption...

  20. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performance, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massively parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  1. A new type of dehydration unit of natural gas and its design considerations

    Institute of Scientific and Technical Information of China (English)

    LIU Hengwei; LIU Zhongliang; ZHANG Jian; GU Keyu; YAN Tingmin

    2005-01-01

    A new type of dehydration unit for natural gas is described, and its basic structure and working principles are presented. The key factors affecting the performance and dehydration efficiency of the unit, such as nucleation rate, droplet growth rate, the strength of the swirl, and the position at which the shock wave occurs, are discussed. Accordingly, the design considerations for each component of the unit are provided. Experimental investigations of the working performance of the unit justified the design considerations.

  2. Multi-unit Integration in Microfluidic Processes: Current Status and Future Horizons

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2011-07-01

    Microfluidic processes, mainly for biological and chemical applications, have expanded rapidly in recent years. While the initial focus was on single units, principally microreactors, technological and economic considerations have caused a shift to integrated microchips in which a number of microdevices function coherently. These integrated devices have many advantages over conventional macro-scale processes. However, the small scale of operation, complexities in the underlying physics and chemistry, and differences in the time constants of the participating units, in the interactions among them, and in the outputs of interest make it difficult to design and optimize integrated microprocesses. These aspects are discussed here, current research and applications are reviewed, and possible future directions are considered.

  3. Structures and Processes in Didactic Design

    DEFF Research Database (Denmark)

    Helms, Niels Henrik; Heilesen, Simon

    2012-01-01

    This paper introduces a user-driven approach to designing new educational formats including new media for learning. Focus will be on didactic design involving the use of information technology as a means of mediating, augmenting or even fundamentally changing teaching and learning practices...

  4. REVERSING THE CO-DESIGN PROCESS

    DEFF Research Database (Denmark)

    Lundsgaard, Christina

    2011-01-01

    , but the focus is almost always on the upcoming design. Based on an experiment, this paper investigates how co-design tools can be used as a part of a post-occupancy evaluation (POE). When you do a POE, you evaluate the performance of an already completed building in relation to the daily use. Unlike...

  5. Structures and Processes in Didactic Design

    DEFF Research Database (Denmark)

    Helms, Niels Henrik; Heilesen, Simon

    2012-01-01

    This paper introduces a user-driven approach to designing new educational formats including new media for learning. Focus will be on didactic design involving the use of information technology as a means of mediating, augmenting or even fundamentally changing teaching and learning practices. The ...

  6. Information management and design & engineering processes

    NARCIS (Netherlands)

    Lutters, Diederick; ten Brinke, E.; Streppel, A.H.; Kals, H.J.J.

    2000-01-01

    In analysing design and manufacturing tasks and their mutual interactions, it appears that the underlying information of these tasks is of the utmost importance. If this information is managed in a formalized, structured way, it can serve as a basis for the control of design and manufacturing processes

  7. Computer Applications in the Design Process.

    Science.gov (United States)

    Winchip, Susan

    Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

  8. Property Modelling for Applications in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Physical-chemical properties of pure chemicals and their mixtures play an important role in the design of chemicals based products and the processes that manufacture them. Although, the use of experimental data in design and analysis of chemicals based products and their processes is desirable...... such as database, property model library, model parameter regression, and, property-model based product-process design will be presented. The database contains pure component and mixture data for a wide range of organic chemicals. The property models are based on the combined group contribution and atom...... modeling tools in design and analysis of chemical product-process design, including biochemical processes will be highlighted....

  9. A Design Process Evaluation Method for Sustainable Buildings

    Directory of Open Access Journals (Sweden)

    Christopher S. Magent

    2009-12-01

    Full Text Available This research develops a technique to model and evaluate the design process for sustainable buildings. Three case studies were conducted to validate this method. The resulting design process evaluation method for sustainable buildings (DPEMSB) may assist project teams in designing their own sustainable building design processes. This method helps to identify critical decisions in the design process, to evaluate these decisions for time and sequence, to define information required for decisions from various project stakeholders, and to identify stakeholder competencies for process implementation. Published in the Journal AEDM, Volume 5, Numbers 1-2, 2009, pp. 62-74.

  10. Designing User Interfaces for Smart-Applications for Operating Rooms and Intensive Care Units

    Science.gov (United States)

    Kindsmüller, Martin Christof; Haar, Maral; Schulz, Hannes; Herczeg, Michael

    Today’s physicians and nurses working in operating rooms and intensive care units have to deal with an ever-increasing amount of data. More and more medical devices are delivering information, which has to be perceived and interpreted in regard to patient status and the necessity to adjust therapy. The combination of high information load and insufficient usability creates a severe challenge for health personnel with respect to properly monitoring these devices, acknowledging alarms, and reacting to critical incidents in a timely manner. Smart Applications are a new kind of decision-support system that incorporates medical expertise in order to help health personnel with diagnosis and therapy. By means of a User Centered Design process of two Smart Applications (anaesthesia monitor display, diagnosis display), we illustrate which approach should be followed and which processes and methods have been successfully applied in fostering the design of usable medical devices.

  11. New Vistas in Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Zhang, Lei; Babi, Deenesh Kavi; Gani, Rafiqul

    2016-01-01

    Design of chemicals-based products is broadly classified into those that are process centered and those that are product centered. In this article, the designs of both classes of products are reviewed from a process systems point of view; developments related to the design of the chemical product......, its corresponding process, and its integration are highlighted. Although significant advances have been made in the development of systematic model-based techniques for process design (also for optimization, operation, and control), much work is needed to reach the same level for product design....... Timeline diagrams illustrating key contributions in product design, process design, and integrated product-process design are presented. The search for novel, innovative, and sustainable solutions must be matched by consideration of issues related to the multidisciplinary nature of problems, the lack...

  12. An accomplished teacher's use of scaffolding during a second-grade unit on designing games.

    Science.gov (United States)

    Chen, Weiyun; Rovegno, Inez; Cone, Stephen L; Cone, Theresa P

    2012-06-01

    The purpose of this study was to describe how an accomplished teacher taught second-grade students to design games that integrated movement and mathematics content. The participants were one physical education teacher; a classroom teacher, and an intact class of 20 second-grade students. Qualitative data were gathered through videotaping of all lessons, descriptions of 20 children's responses to all lesson segments, and interviews with all participants. In keeping with constructivist principles, the teacher used a progression of tasks and multiple instructional techniques to scaffold the design process allowing children to design games that were meaningful to them. Contrary to descriptions of scaffolding fading across a unit, in this study the scaffolding was a function of the interaction between learners' needs and task content.

  13. Thermochemical Process Development Unit: Researching Fuels from Biomass, Bioenergy Technologies (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2009-01-01

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a unique facility dedicated to researching thermochemical processes to produce fuels from biomass.

  14. Process Design of Industrial Triethylene Glycol Processes Using the Cubic-Plus-Association (CPA) Equation of State

    DEFF Research Database (Denmark)

    Arya, Alay; Maribo-Mogensen, Bjørn; Tsivintzelis, Ioannis;

    2014-01-01

    design of liquid-liquid extraction of aromatic hydrocarbons by TEG. Comparisons between simulation and experimental results are presented in order to illustrate the reliability of Thermo System while it is used in a process simulator for industrial applications. Detailed analysis on selecting TEG pure...... the CAPE-OPEN standards. We, then, simulate certain binary and multicomponent systems where experimental data are available in the literature and which are critical for process design of natural gas dehydration units by triethylene glycol (TEG). We also demonstrate the potential of CPA for the process...

  15. ELT Materials Design of a Speaking Unit based on Needs Analysis

    Institute of Scientific and Technical Information of China (English)

    LIU Ai-juan; TONG Xing-hong; DU Wen-jing

    2016-01-01

    In this article, the authors design a speaking unit based on needs analysis following Hutchinson and Waters’ (1987) model. First, the rationale in designing this unit is introduced, which involves the teaching approach adopted and relevant theories in organizing the materials. Then, the teaching plan of this speaking unit is provided and some activities are designed to create an authentic and optimal situation for students to practice their speaking skill.

  16. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    National Research Council Canada - National Science Library

    Sungki Kim; Wonil Ko; Sungsig Bang

    2015-01-01

    ...) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method...

  17. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n^3) or better with system size n, which may be compared with the O(n^5) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
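
    The core idea above, replacing a poorly scalable deterministic sum with a statistically sampled high-dimensional integral, can be illustrated with a generic Monte Carlo quadrature sketch (a toy integrand, not the MC-MP2 integrand):

```python
import math
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube [0, 1]^dim, with the standard error of the estimate."""
    rng = random.Random(seed)
    total = 0.0
    total_sq = 0.0
    for _ in range(n_samples):
        fx = f([rng.random() for _ in range(dim)])
        total += fx
        total_sq += fx * fx
    mean = total / n_samples
    # Standard error ~ sigma / sqrt(N): halving the uncertainty costs
    # four times the samples, so raw sampling throughput matters.
    var = max(total_sq / n_samples - mean * mean, 0.0)
    return mean, math.sqrt(var / n_samples)

# Toy integrand: the integral of sum(x) over [0, 1]^6 is exactly 3.
estimate, err = mc_integrate(lambda x: sum(x), dim=6, n_samples=20000)
```

    Because the samples are independent, the integration parallelizes almost perfectly, which is why the near-independent integrations on many GPUs described in the abstract scale so well.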

  18. Uniting Gradual and Abrupt set Processes in Resistive Switching Oxides

    Science.gov (United States)

    Fleck, Karsten; La Torre, Camilla; Aslam, Nabeel; Hoffmann-Eifert, Susanne; Böttger, Ulrich; Menzel, Stephan

    2016-12-01

    Identifying limiting factors is crucial for a better understanding of the dynamics of the resistive switching phenomenon in transition-metal oxides. This improved understanding is important for the design of fast-switching, energy-efficient, and long-term stable redox-based resistive random-access memory devices. Therefore, this work presents a detailed study of the set kinetics of valence change resistive switches on a time scale from 10 ns to 10^4 s, taking Pt/SrTiO3/TiN nanocrossbars as a model material. The analysis of the transient currents reveals that the switching process can be subdivided into a linear-degradation process that is followed by a thermal runaway. The comparison with a dynamical electrothermal model of the memory cell allows the deduction of the physical origin of the degradation. The origin is an electric-field-induced increase of the oxygen-vacancy concentration near the Schottky barrier of the Pt/SrTiO3 interface that is accompanied by a steadily rising local temperature due to Joule heating. The positive feedback of the temperature increase on the oxygen-vacancy mobility, and thereby on the conductivity of the filament, leads to a self-acceleration of the set process.

  19. Process integration: Cooling water systems design

    CSIR Research Space (South Africa)

    Gololo, KV

    2010-10-01

    Full Text Available This paper presents a technique for grassroot design of cooling water system for wastewater minimization which incorporates the performances of the cooling towers involved. The study focuses mainly on cooling systems consisting of multiple cooling...

  20. Guidelines for engineering design for process safety

    National Research Council Canada - National Science Library

    2012-01-01

    "This updated version of one of the most popular and widely used CCPS books provides plant design engineers, facility operators, and safety professionals with key information on selected topics of interest...

  1. Cooling water systems design using process integration

    CSIR Research Space (South Africa)

    Gololo, KV

    2010-09-01

    Full Text Available Cooling water systems are generally designed with a set of heat exchangers arranged in parallel. This arrangement results in higher cooling water flowrate and low cooling water return temperature thus reducing cooling tower efficiency. Previous...

  2. Ergonomic implementation and work station design for quilt manufacturing unit

    Directory of Open Access Journals (Sweden)

    Deepa Vinay

    2012-01-01

    Full Text Available Background: Awkward, extreme and repetitive postures have been associated with work-related musculoskeletal disorders and injury to the lower back of workers engaged in a quilt manufacturing unit. Quilts are made manually, with hand stitching and embroidery done in a squatting posture on the floor; mending, stain removal, washing and packaging are other associated tasks performed at a wooden table. The work demands a sustained squatting posture, which leads to various injuries to the low back and calf muscles. Materials and Methods: The present study was undertaken in the Tarai agroclimatic zone of Udham Singh Nagar District of Uttarakhand State with the objective of studying the physical and physiological parameters, as well as the workstation layout, of the respondents engaged in a quilt manufacturing unit. A total of 30 subjects were selected to study the drudgery involved in the quilt-making enterprise and to provide technology options to reduce drudgery as well as musculoskeletal disorders, thus enhancing productivity and comfort. Results: Findings of the investigation show that the majority of workers (93.33 per cent) were female and very few (6.66 per cent) were male, with a mean age of 24.53±6.43 years. The body mass index and aerobic capacity (l/min) values were 21.40±4.13 and 26.02±6.44, respectively. Forty per cent of the respondents had a physical fitness index of high average, whereas 33.33 per cent had low average physical fitness. All the assessed quilt-making activities comprised a number of steps executed at two types of work station, i.e. a squatting posture on the floor and a standing posture at a wooden table. A comparative study of physiological parameters was also made between the existing conditions and improved conditions introduced through a low-height chair and a wooden spreader to hold the load of quilt

  3. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
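
    The folding step described above (hashing feature IDs into a short fixed-width vector with a modulo operation) and the similarity comparison it accelerates can be sketched as follows; the folding scheme and Tanimoto similarity here are generic illustrations, not the paper's exact fingerprints:

```python
def fold_fingerprint(feature_ids, width=64):
    """Fold sparse structural-feature IDs into a fixed-width bit vector
    via a modulo hash (the lossy folding that lossless fingerprints avoid)."""
    bits = [0] * width
    for fid in feature_ids:
        bits[fid % width] = 1
    return bits

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

# 67 % 64 == 3, so feature 67 collides with feature 3 after folding:
# two molecules with different feature sets become indistinguishable.
fp1 = fold_fingerprint({3, 67, 1030})
fp2 = fold_fingerprint({3, 1030})
sim = tanimoto(fp1, fp2)
```

    The collision in the example is exactly the information loss that motivates lossless (compressed, unfolded) fingerprints, at the price of the decompression cost the GPU absorbs.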

  4. Massively Parallel Latent Semantic Analyses Using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds of the CPU version did not vary appreciably when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
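
    LSA's dimensionality reduction rests on a truncated SVD of the term-document matrix. A minimal, dependency-free illustration is power iteration recovering just the leading singular triplet (a stand-in for one step of truncated SVD, not the blocked GPU algorithm described above):

```python
import math

def rank1_approx(matrix, iters=200):
    """Leading singular triplet of a small term-document matrix via power
    iteration on alternating matrix-vector products."""
    rows, cols = len(matrix), len(matrix[0])
    v = [1.0] * cols
    for _ in range(iters):
        # u = A v, then v = A^T u, renormalized each sweep
        u = [sum(matrix[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        v = [sum(matrix[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    u = [sum(matrix[i][j] * v[j] for j in range(cols)) for i in range(rows)]
    sigma = math.sqrt(sum(x * x for x in u))
    u = [x / sigma for x in u]
    return u, sigma, v  # A is approximated by sigma * u * v^T

# Tiny "term-document" matrix: two documents sharing one dominant topic.
A = [[2.0, 2.0],
     [1.0, 1.0],
     [0.0, 0.1]]
u, sigma, v = rank1_approx(A)
```

    The inner loops are dense matrix-vector products, which is precisely why BLAS-style GPU kernels dominate the cost profile of LSA.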

  5. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL. This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this represents a threefold reduction in the computing time of the UnifiedMetrics.
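
    The Pearson correlation coefficient used above to measure how well two lexicons agree can be computed directly from its definition. The scores below are hypothetical, chosen only to show two differently scaled lexicons that rank words the same way:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two score series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical polarity scores for the same five words in two lexicons
# that use different scales ([-1, 1] vs [0, 10]) but agree on ranking.
lex_a = [-0.9, -0.2, 0.0, 0.4, 0.8]
lex_b = [0.5, 4.0, 5.0, 7.0, 9.0]
r = pearson(lex_a, lex_b)
```

    Because Pearson correlation is invariant to the unit of measurement (any positive linear rescaling), it is a natural choice for aligning lexicons annotated on different scales.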

  6. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
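
    Of the optimizers mentioned, the naive nested-grid approach is the simplest to sketch: evaluate the objective on a coarse grid, then repeatedly zoom in around the best point. A one-dimensional toy version (the actual code fits multi-parameter kinematic models, and this sketch is not the GBKFIT implementation):

```python
def nested_grid_minimize(f, lo, hi, levels=4, points=11):
    """Brute-force minimization on successively refined grids: evaluate a
    coarse grid, then zoom in one grid step around the best point."""
    best_x = (lo + hi) / 2.0
    for _ in range(levels):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best_x = min(grid, key=f)          # exhaustive scan of this level
        lo, hi = best_x - step, best_x + step
    return best_x

# Toy 1-D objective with its minimum at x = 2.5.
x_min = nested_grid_minimize(lambda x: (x - 2.5) ** 2, 0.0, 10.0)
```

    Every grid point is evaluated independently, which is what makes this otherwise inefficient strategy attractive on a GPU.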

  7. Graphics processing unit-accelerated quantitative trait Loci detection.

    Science.gov (United States)

    Chapuis, Guillaume; Filangi, Olivier; Elsen, Jean-Michel; Lavenier, Dominique; Le Roy, Pascale

    2013-09-01

    Mapping quantitative trait loci (QTL) using genetic marker information is a time-consuming analysis that has interested the mapping community in recent decades. The increasing amount of genetic marker data allows one to consider ever more precise QTL analyses while increasing the demand for computation. Part of the difficulty of detecting QTLs resides in finding appropriate critical values or threshold values, above which a QTL effect is considered significant. Different approaches exist to determine these thresholds, using either empirical methods or algebraic approximations. In this article, we present a new implementation of existing software, QTLMap, which takes advantage of the data parallel nature of the problem by offsetting heavy computations to a graphics processing unit (GPU). Developments on the GPU were implemented using Cuda technology. This new implementation performs up to 75 times faster than the previous multicore implementation, while maintaining the same results and level of precision (Double Precision) and computing both QTL values and thresholds. This speedup allows one to perform more complex analyses, such as linkage disequilibrium linkage analyses (LDLA) and multiQTL analyses, in a reasonable time frame.
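
    The threshold problem described above is commonly attacked empirically with permutation testing: shuffle the phenotypes to destroy any genotype-phenotype association, record the best statistic over all markers, and use a high quantile of that null distribution as the significance threshold. A minimal sketch with a toy statistic (mean difference between genotype groups, not QTLMap's actual likelihood statistic):

```python
import random

def best_stat(phens, genotypes):
    """Largest |mean difference| between genotype groups over all markers
    (a toy association statistic, standing in for a likelihood-ratio test)."""
    best = 0.0
    for marker in genotypes:  # one 0/1 genotype list per marker
        g1 = [p for p, g in zip(phens, marker) if g == 1]
        g0 = [p for p, g in zip(phens, marker) if g == 0]
        if g1 and g0:
            best = max(best, abs(sum(g1) / len(g1) - sum(g0) / len(g0)))
    return best

def permutation_threshold(phenotypes, genotypes, n_perm=200, alpha=0.05, seed=1):
    """Empirical genome-wide threshold: permute phenotypes to break any true
    association and take the (1 - alpha) quantile of the best statistic."""
    rng = random.Random(seed)
    null = []
    for _ in range(n_perm):
        shuffled = phenotypes[:]
        rng.shuffle(shuffled)
        null.append(best_stat(shuffled, genotypes))
    null.sort()
    return null[int((1 - alpha) * n_perm) - 1]

genotypes = [[0, 0, 0, 0, 1, 1, 1, 1],   # marker with a real effect
             [0, 1, 0, 1, 0, 1, 0, 1]]   # unlinked marker
phenotypes = [1.0, 1.1, 0.9, 1.0, 2.0, 2.1, 1.9, 2.0]
observed = best_stat(phenotypes, genotypes)
threshold = permutation_threshold(phenotypes, genotypes)
```

    Each permutation is an independent re-analysis of the whole marker set, which is exactly the data-parallel workload the abstract offloads to the GPU.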

  8. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  9. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10^6 cells with lattice sizes of up to 256^3 on a single graphics card. Benchmarks show that our implementation runs ˜80× faster than serial implementations, and ˜5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
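
    The "fine level checkerboards" mentioned above rest on a simple observation: lattice sites of one checkerboard color share no edges, so all of them can be updated concurrently without conflicting neighbor accesses. A minimal 2D sketch of the partition (the GPU code described operates on 3D lattices, with lock mechanisms layered on top):

```python
def checkerboard_sublattices(width, height):
    """Split a 2D lattice into two 'colors'. Sites of one color are never
    edge-adjacent, so each color can be updated fully in parallel; a sweep
    alternates between the two colors."""
    black = [(x, y) for y in range(height) for x in range(width) if (x + y) % 2 == 0]
    white = [(x, y) for y in range(height) for x in range(width) if (x + y) % 2 == 1]
    return black, white

black, white = checkerboard_sublattices(4, 4)
```

    Alternating the two colors preserves the detailed update semantics of the serial algorithm while exposing half the lattice as independent work at each step.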

  10. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  11. Efficient graphics processing unit-based voxel carving for surveillance

    Science.gov (United States)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.
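
    The "incrementally calculated standard deviation" mentioned above can be realized with a one-pass update such as Welford's algorithm, which avoids storing all previously projected color samples; this is a generic sketch, not the paper's exact GPU kernel:

```python
import math

class RunningStd:
    """One-pass (Welford) mean/standard deviation: each new sample updates
    the statistics in O(1) without storing earlier samples."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # sum of squared deviations from the running mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    def std(self):
        # Population standard deviation of the samples seen so far.
        return math.sqrt(self._m2 / self.n) if self.n else 0.0

rs = RunningStd()
for sample in [10.0, 12.0, 14.0]:  # e.g. color values projected onto one voxel
    rs.push(sample)
```

    A low running deviation across all pixels a voxel projects to indicates color consistency, so the voxel survives the carving step.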

  12. Learning Objects: A User-Centered Design Process

    Science.gov (United States)

    Branon, Rovy F., III

    2011-01-01

    Design research systematically creates or improves processes, products, and programs through an iterative progression connecting practice and theory (Reinking, 2008; van den Akker, 2006). Developing new instructional systems design (ISD) processes through design research is necessary when new technologies emerge that challenge existing practices…

  13. Knowledge and Processes in Design. DPS Final Report.

    Science.gov (United States)

    Pirolli, Peter

    Four papers from a project concerning information-processing characterizations of the knowledge and processes involved in design are presented. The project collected and analyzed verbal protocols from instructional designers, architects, and mechanical engineers. A framework was developed for characterizing the problem spaces of design that…

  14. VCM Process Design: An ABET 2000 Fully Compliant Project

    Science.gov (United States)

    Benyahia, Farid

    2005-01-01

    A long experience in undergraduate vinyl chloride monomer (VCM) process design projects is shared in this paper. The VCM process design is shown to be fully compliant with ABET 2000 criteria by virtue of its abundance in chemical engineering principles, integration of interpersonal and interdisciplinary skills in design, safety, economics, and…

  15. Multi-Mission System Architecture Platform: Design and Verification of the Remote Engineering Unit

    Science.gov (United States)

    Sartori, John

    2005-01-01

    The Multi-Mission System Architecture Platform (MSAP) represents an effort to bolster efficiency in the spacecraft design process. By incorporating essential spacecraft functionality into a modular, expandable system, the MSAP provides a foundation on which future spacecraft missions can be developed. Once completed, the MSAP will provide support for missions with varying objectives, while maintaining a level of standardization that will minimize redesign of general system components. One subsystem of the MSAP, the Remote Engineering Unit (REU), functions by gathering engineering telemetry from strategic points on the spacecraft and providing these measurements to the spacecraft's Command and Data Handling (C&DH) subsystem. Before the MSAP Project reaches completion, all hardware, including the REU, must be verified. However, the speed and complexity of the REU circuitry rules out the possibility of physical prototyping. Instead, the MSAP hardware is designed and verified using the Verilog Hardware Description Language (HDL). An increasingly popular means of digital design, HDL programming provides a level of abstraction, which allows the designer to focus on functionality while logic synthesis tools take care of gate-level design and optimization. As verification of the REU proceeds, errors are quickly remedied, preventing costly changes during hardware validation. After undergoing the careful, iterative processes of verification and validation, the REU and MSAP will prove their readiness for use in a multitude of spacecraft missions.

  16. A Design Method for Impact-Loaded Slender Armour Units

    DEFF Research Database (Denmark)

    Burcharth, Hans F.

    It is well known that the bigger a structural member like a beam, the relatively weaker it is. In the end it cannot even support its own weight. The same problem holds for slender armour units such as Dolosse....

  17. Achieving More Sustainable Designs through a Process Synthesis-Intensification Framework

    DEFF Research Database (Denmark)

    Babi, Deenesh Kavi; Woodley, John; Gani, Rafiqul

    2014-01-01

    More sustainable process designs refer to design alternatives that correspond to lower values of a set of targeted performance criteria. In this paper, a multi-level framework for process synthesis-intensification that leads to more sustainable process designs is presented. At the highest level...... of aggregation, process flowsheets are synthesized in terms of a sequence of unit operations that correspond to acceptable values for a set of targeted performance criteria. This defines the upper bound of the performance criteria and the design is called the base-case design. At the next lower level, tasks...... representing unit operations are identified and analysed in terms of means-ends to find more flowsheet alternatives that improve the base-case design and correspond to lower values of the set of targeted performance criteria. At the lowest level, phenomena employed to perform the specific tasks are identified...

  18. United States Department of Energy Integrated Manufacturing & Processing Predoctoral Fellowships. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Petrochenkov, M.

    2003-03-31

    The objective of the program was threefold: to create a pool of PhDs trained in the integrated approach to manufacturing and processing, to promote academic interest in the field, and to attract talented professionals to this challenging area of engineering. It was anticipated that the program would result in the creation of new manufacturing methods that would contribute to improved energy efficiency, to better utilization of scarce resources, and to less degradation of the environment. Emphasis in the competition was on integrated systems of manufacturing and the integration of product design with manufacturing processes. Research addressed such related areas as aspects of unit operations, tooling and equipment, intelligent sensors, and manufacturing systems as they related to product design.

  19. Process Simulation for the Design and Scale Up of Heterogeneous Catalytic Process: Kinetic Modelling Issues

    Directory of Open Access Journals (Sweden)

    Antonio Tripodi

    2017-05-01

    Full Text Available Process simulation represents an important tool for plant design and optimization, applied either to well-established or to newly developed processes. Suitable thermodynamic packages should be selected in order to properly describe the behavior of reactors and unit operations and to precisely define phase equilibria. Moreover, a detailed and representative kinetic scheme should be available to correctly predict the dependence of the process on its main variables. This review points out some models and methods for kinetic analysis specifically applied to the simulation of catalytic processes, as a basis for process design and optimization. Attention is also paid to microkinetic modelling and to methods based on first principles, to elucidate mechanisms and independently calculate thermodynamic and kinetic parameters. Different case studies support the discussion. At first, we have selected two basic examples from industrial chemistry practice, namely ammonia and methanol synthesis, which may be described through a relatively simple reaction pathway and the available kinetic scheme. Then, a more complex reaction network is discussed in depth to define the conversion of bioethanol into syngas/hydrogen or into building blocks, such as ethylene. In this case, lumped kinetic schemes completely fail to describe the process behavior. Thus, in this case, more detailed (e.g., microkinetic) schemes should be available to implement into the simulator. However, the correct definition of all the kinetic data when complex microkinetic mechanisms are used often leads to unreliable, highly correlated parameters. In such cases, greater effort to independently estimate some relevant kinetic/thermodynamic data through Density Functional Theory (DFT/ab initio methods may be helpful to improve process description.
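As a minimal illustration of the role a kinetic scheme plays in process simulation (the reaction, rate constants, and residence time below are hypothetical, not taken from the review), a lumped first-order reversible scheme can be integrated over an idealized isothermal reactor:

```python
# Hypothetical first-order reversible reaction A <-> B in an isothermal reactor.
# Rate constants k_f, k_r, feed concentration, and residence time are illustrative.

def simulate_reactor(k_f=2.0, k_r=0.5, c_a0=1.0, tau=3.0, n_steps=10000):
    """Integrate dcA/dt = -k_f*cA + k_r*cB over residence time tau (explicit Euler)."""
    dt = tau / n_steps
    c_a, c_b = c_a0, 0.0
    for _ in range(n_steps):
        r = k_f * c_a - k_r * c_b    # net forward rate
        c_a -= r * dt
        c_b += r * dt
    return c_a, c_b

c_a, c_b = simulate_reactor()
conversion = 1.0 - c_a
# Equilibrium conversion for this scheme is k_f / (k_f + k_r) = 0.8,
# so the computed conversion should approach, but not exceed, 0.8.
```

Even this toy case shows why the kinetic scheme matters: the predicted conversion is entirely determined by the rate parameters fed to the simulator.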

  20. Optimality criteria design and stress constraint processing

    Science.gov (United States)

    Levy, R.

    1982-01-01

    Methods for pre-screening stress constraints into either primary or side-constraint categories are reviewed; a projection method, which is developed from prior cycle stress resultant history, is introduced as an additional screening parameter. Stress resultant projections are also employed to modify the traditional stress-ratio, side-constraint boundary. A special application of structural modification reanalysis is applied to the critical stress constraints to provide feasible designs that are preferable to those obtained by conventional scaling. Sample problem executions show relatively short run times and fewer design cycle iterations to achieve low structural weights; those attained are comparable to the minimum values developed elsewhere.
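The traditional stress-ratio resizing rule referred to above can be sketched in a few lines; the member areas, stresses, and allowable stress are hypothetical values for illustration, not data from the paper:

```python
# Classic fully-stressed-design (stress-ratio) update: scale each member's
# cross-sectional area by the ratio of its computed stress to the allowable
# stress, so overstressed members grow and understressed members shrink.
# All numbers below are illustrative.

def stress_ratio_update(areas, stresses, sigma_allow):
    """One resizing cycle: A_new = A_old * sigma / sigma_allow."""
    return [a * s / sigma_allow for a, s in zip(areas, stresses)]

areas = [2.0, 1.0, 0.5]            # member cross-sections
stresses = [100.0, 250.0, 400.0]   # computed member stresses
new_areas = stress_ratio_update(areas, stresses, sigma_allow=200.0)
```

In practice the member stresses must be recomputed after each cycle, which is where the reanalysis and constraint-screening methods of the abstract come in.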

  1. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  3. Technological design as an evolutionary process

    NARCIS (Netherlands)

    Brey, Philip; Kroes, Peter; Vermaas, Pieter E.; Light, Andrew; Moore, Steven A.

    2008-01-01

    The evolution of technical artifacts is often seen as radically different from the evolution of biological species. Technical artifacts are normally understood to result from the purposive intelligence of designers whereas biological species and organisms are held to have resulted from evolution by

  5. A design process for creative technology

    NARCIS (Netherlands)

    Mader, Angelika; Eggink, Wouter

    2014-01-01

    Creative Technology is a new bachelor programme at the University of Twente. Goal of Creative Technology is to design products and applications that improve the quality of daily life in its manifold aspects, building on Information and Communication Technology (ICT). The application domains range fr

  6. Interpretive Research Design. Concepts and Processes

    NARCIS (Netherlands)

    Schwartz-Shea, P.; Yanow, D.

    2012-01-01

    Research design is fundamental to all scientific endeavors, at all levels and in all institutional settings. In many social science disciplines, however, scholars working in an interpretive-qualitative tradition get little guidance on this aspect of research from the positivist-centered training the

  7. Biochemical Engineering. Part II: Process Design

    Science.gov (United States)

    Atkinson, B.

    1972-01-01

    Describes types of industrial techniques involving biochemical products, specifying the advantages and disadvantages of batch and continuous processes, and contrasting biochemical and chemical engineering. See SE 506 318 for Part I. (AL)

  8. Integrating conceptualizations of experience into the interaction design process

    DEFF Research Database (Denmark)

    Dalsgaard, Peter

    2010-01-01

    From a design perspective, the increasing awareness of experiential aspects of interactive systems prompts the question of how conceptualizations of experience can inform and potentially be integrated into the interaction design process. This paper presents one approach to integrating theoretical perspectives on experience in design by formulating conceptual constructs that can guide design decisions.

  9. Preconceptual design of a salt splitting process using ceramic membranes

    Energy Technology Data Exchange (ETDEWEB)

    Kurath, D.E.; Brooks, K.P.; Hollenberg, G.W.; Clemmer, R. [Pacific Northwest National Lab., Richland, WA (United States); Balagopal, S.; Landro, T.; Sutija, D.P. [Ceramatec, Inc., Salt Lake City, UT (United States)

    1997-01-01

    Inorganic ceramic membranes for salt splitting of radioactively contaminated sodium salt solutions are being developed for treating U.S. Department of Energy tank wastes. The process consists of electrochemical separation of sodium ions from the salt solution using sodium (Na) Super Ion Conductor (NaSICON) membranes. The primary NaSICON compositions being investigated are based on rare-earth ions (RE-NaSICON). Potential applications include: caustic recycling for sludge leaching, regenerating ion exchange resins, inhibiting corrosion in carbon-steel tanks, or retrieving tank wastes; reducing the volume of low-level wastes to be disposed of; adjusting pH and reducing competing cations to enhance cesium ion exchange processes; reducing sodium in high-level-waste sludges; and removing sodium from acidic wastes to facilitate calcining. These applications encompass wastes stored at the Hanford, Savannah River, and Idaho National Engineering Laboratory sites. The overall project objective is to supply a salt splitting process unit that impacts the waste treatment and disposal flowsheets and meets user requirements. The potential flowsheet impacts include improving the efficiency of the waste pretreatment processes, reducing volume, and increasing the quality of the final waste disposal forms. Meeting user requirements implies developing the technology to the point where it is available as standard equipment with predictable and reliable performance. This report presents two preconceptual designs for a full-scale salt splitting process based on the RE-NaSICON membranes to distinguish critical items for testing and to provide a vision that site users can evaluate.

  10. Designing an educative curriculum unit for teaching molecular geometry in high school chemistry

    Science.gov (United States)

    Makarious, Nader N.

    Chemistry is a highly abstract discipline that is taught and learned with the aid of various models. Among the most challenging, yet fundamental, topics in general chemistry at the high school level is molecular geometry. This study focused on developing exemplary educative curriculum materials pertaining to the topic of molecular geometry. The methodology used in this study consisted of several steps. First, a diverse set of models were analyzed to determine to what extent each model serves its purpose in teaching molecular geometry. Second, a number of high school teachers and college chemistry professors were asked to share their experiences on using models in teaching molecular geometry through an online questionnaire. Third, findings from the comparative analysis of models, teachers’ experiences, the literature review on models and students’ misconceptions, and the curriculum expectations of the Next Generation Science Standards, with their emphasis on three-dimensional learning and nature of science (NOS), contributed to the development of the molecular geometry unit. Fourth, the developed unit was reviewed by fellow teachers and doctoral-level science education experts and was revised to further improve its coherence and clarity in support of teaching and learning of the molecular geometry concepts. The produced educative curriculum materials focus on the scientific practice of developing and using models as promoted in the Next Generation Science Standards (NGSS) while also addressing nature of science (NOS) goals. The educative features of the newly developed unit support teachers’ pedagogical knowledge (PK) and pedagogical content knowledge (PCK). The unit includes an overview, teacher’s guide, and eight detailed lesson plans with inquiry-oriented modeling activities replete with models and suggestions for teachers, as well as formative and summative assessment tasks. The unit design process serves as a model for redesigning other instructional units in

  11. Integrating Thermal Tools Into the Mechanical Design Process

    Science.gov (United States)

    Tsuyuki, Glenn T.; Siebes, Georg; Novak, Keith S.; Kinsella, Gary M.

    1999-01-01

    The intent of mechanical design is to deliver a hardware product that meets or exceeds customer expectations, while reducing cycle time and cost. To this end, an integrated mechanical design process enables the idea of parallel development (concurrent engineering). This represents a shift from the traditional mechanical design process. With such a concurrent process, there are significant issues that have to be identified and addressed before re-engineering the mechanical design process to facilitate concurrent engineering. These issues also assist in the integration and re-engineering of the thermal design sub-process since it resides within the entire mechanical design process. With these issues in mind, a thermal design sub-process can be re-defined in a manner that has a higher probability of acceptance, thus enabling an integrated mechanical design process. However, the actual implementation is not always problem-free. Experience in applying the thermal design sub-process to actual situations provides the evidence for improvement, but more importantly, for judging the viability and feasibility of the sub-process.

  12. The Process of Soviet Weapons Design

    Science.gov (United States)

    1978-03-01

    system on the BMP from an early 1940s German design. But the validity and usefulness of a theory, especially one that makes predictions about the future...when the 1940 publication of a highly significant Soviet discovery of spontaneous fission resulted in a complete lack of an American response, the...taken from I. N. Golovin , I. V. Khurchatov, Atomizdat, Moscow, 1973, and from Herbert York, The Advisors. Oppenheimer, Teller, and the Superbomb, W. H

  13. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
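The O(n²) pairwise interaction that limits the CPU version can be seen in a simplified sequential sketch of one flocking update; the similarity values, attraction rate, and threshold below are invented for illustration, and this is not the CUDA code from the paper:

```python
# Simplified document-flocking step: each "bird" (document) drifts toward
# the centroid of sufficiently similar neighbours. Every document examines
# every other document, hence the O(n^2) cost the GPU version parallelizes.

def flocking_step(positions, similarity, attract=0.1, threshold=0.5):
    new_positions = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        count = 0
        for j, (ox, oy) in enumerate(positions):
            if i != j and similarity[i][j] > threshold:
                dx += ox - x
                dy += oy - y
                count += 1
        if count:
            x += attract * dx / count
            y += attract * dy / count
        new_positions.append((x, y))
    return new_positions

# Two similar documents drift together; a dissimilar one stays put.
pos = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
sim = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.1], [0.1, 0.1, 1.0]]
pos = flocking_step(pos, sim)
```

On a GPU each document's inner loop runs in its own thread, which is what makes the per-step cost tractable for thousands of documents.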

  14. Information processing theory in the early design stages

    DEFF Research Database (Denmark)

    Cash, Philip; Kreye, Melanie

    2014-01-01

    Developing appropriate theory is one of the main challenges facing engineering design (Cross, 2007). Theory helps both to explain design activity and to support greater research impact in the domain. It is useful for gaining a more comprehensive understanding of design activity and developing suggestions for improvements and support. One theory that may be particularly applicable to the early design stages is Information Processing Theory (IPT), as it is linked to the design process with regard to the key concepts considered. IPT states that designers search for information if they perceive uncertainty with regard to the knowledge necessary to solve a design challenge. They then process this information and compare whether the new knowledge they have gained covers the previous knowledge gap. Once gained, the new knowledge is shared within the design team to reduce ambiguity with regard to its meaning and to build a shared understanding, reducing perceived uncertainty. Thus, we propose that Information Processing Theory is suitable to describe designer activity in the early design stages.

  16. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results be accurate and the simulation models fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was discussed in previous work. Regarding computation speed, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem, and computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software 78, 1-15.

  17. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, such as full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under a General Public License (GNU GPL). This implementation is based on a second-order centred difference scheme to approximate time derivatives, and on staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI); it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size...
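The second-order centred-difference time stepping mentioned in the abstract can be illustrated on the much simpler 1D acoustic wave equation; this is a generic sketch under simplified assumptions (fixed boundaries, illustrative grid and wave speed), not the viscoelastic code of Bohlen (2002):

```python
# 1D wave equation u_tt = c^2 * u_xx with second-order centred differences
# in both time and space (leapfrog), fixed ends held at zero.

def wave_step(u_prev, u_curr, c, dx, dt):
    """One leapfrog time step; returns u at t + dt."""
    r2 = (c * dt / dx) ** 2          # squared Courant number (stable if <= 1)
    u_next = u_curr[:]               # copy; boundary points stay fixed at 0
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next

n = 101
u0 = [0.0] * n
u0[n // 2] = 1.0                     # initial displacement pulse
u1 = u0[:]                           # zero initial velocity
for _ in range(50):
    u0, u1 = u1, wave_step(u0, u1, c=1.0, dx=1.0, dt=0.5)
```

In a GPU port, each interior grid point's update is independent within a time step, which is exactly the structure OpenCL or CUDA kernels exploit.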

  18. Method for innovative synthesis-design of chemical process flowsheets

    OpenAIRE

    Kumar Tula, Anjan; Gani, Rafiqul

    2015-01-01

    Chemical process synthesis-design involve the identification of the processing route to reach a desired product from a specified set of raw materials, design of the operations involved in the processing route, the calculations of utility requirements, the calculations of waste and emission to the surrounding and many more. Different methods (knowledge-based [1], mathematical programming [2], hybrid, etc.) have been proposed and are also currently employed to solve these synthesis-design probl...

  19. Adding Users to the Website Design Process

    Science.gov (United States)

    Tomeo, Megan L.

    2012-01-01

    Alden Library began redesigning its website over a year ago. Throughout the redesign process the students, faculty, and staff that make up the user base were added to the conversation by utilizing several usability test methods. This article focuses on the usability testing conducted at Alden Library and delves into future usability testing, which…

  20. Design of reciprocal unit based on the Newton-Raphson approximation

    DEFF Research Database (Denmark)

    Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael

    A design of a reciprocal unit based on Newton-Raphson approximation is described and implemented. We present two different designs for single precision, one of which is extremely fast, with the trade-off of an increase in area. The solution behind the fast design is that the design is fully...
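The Newton-Raphson reciprocal iteration underlying such a unit is compact enough to sketch in software; the fixed seed below is a simplifying assumption (hardware designs typically seed from a small lookup table), and this is not the implementation from the report:

```python
# Newton-Raphson iteration for 1/a: x_{n+1} = x_n * (2 - a * x_n).
# Each iteration roughly doubles the number of correct bits, which is why
# a reciprocal unit needs only a few iterations after a coarse initial guess.

def reciprocal(a, iterations=5):
    """Approximate 1/a for a in [1, 2), as for a normalized mantissa."""
    x = 1.0 / 1.5                    # crude fixed seed; converges for a in [1, 2)
    for _ in range(iterations):
        x = x * (2.0 - a * x)
    return x

approx = reciprocal(1.25)            # should agree closely with 1/1.25 = 0.8
```

The iteration uses only multiplications and a subtraction, which maps directly onto multiplier hardware; the area/speed trade-off mentioned in the abstract comes from how much of this datapath is unrolled or replicated.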

  1. Design and test of a flywheel energy storage unit for spacecraft application

    Science.gov (United States)

    Cormack, A., III; Notti, J. E., Jr.; Ruiz, M. L.

    1975-01-01

    This paper summarizes the design and test of a development flywheel energy storage device intended for spacecraft application. The flywheel unit is the prototype for the rotating assembly portion of an Integrated Power and Attitude Control System (IPACS). The paper includes a general description of the flywheel unit; specific design characteristics for the rotor and bearings, motor-generators, and electronics; an efficiency analysis; and test results for a research unit.

  2. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph-generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general-purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final-state jets we can evaluate on a GPU is limited to 4 for pure gluon processes (gg → 4g), or 5 for processes with one or more quark lines, such as qq̄ → 5g and qq → qq + 3g. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the gg → 4g processes, for which the GPU gain over the CPU is about 20.

  3. Hidden realities inside PBL design processes

    DEFF Research Database (Denmark)

    Pihl, Ole Verner

    2015-01-01

    for the education as being intuition, reflection, artistic progression and critical interpretation (Kiib 2004). “As the reflection and critical interpretation are well integrated within the education, mostly as parts of the exam evaluation, it seems like the artistic progression and intuition are somewhat drowning... are passing from a complex world into one based on super complexity? Could Gaston Bachelard (1958), who writes in his book The Poetics of Space "that poets and artists are born phenomenologists," help architecture and design students in their journey to find their own professional expression? This paper...

  4. SOLVING GLOBAL PROBLEMS USING COLLABORATIVE DESIGN PROCESSES

    DEFF Research Database (Denmark)

    Lenau, Torben Anker; Mejborn, Christina Okai

    2011-01-01

    new solutions that would help solve the global problem of sanitation. Lack of sanitation is a problem for 42% of the world’s population, but it is also a taboo topic that only very few people will engage in. In the one-day workshop, participants from very different areas came together and brought forward proposed solutions for how to design, brand and make business models for how to solve aspects of the sanitation problem. The workshop showed that it was possible to work freely with such a taboo topic and that in particular the use of visualisation tools, i.e. drawing posters and building simple...

  5. Computational and Pharmacological Target of Neurovascular Unit for Drug Design and Delivery

    Directory of Open Access Journals (Sweden)

    Md. Mirazul Islam

    2015-01-01

    Full Text Available The blood-brain barrier (BBB) is a dynamic and highly selective permeable interface between the central nervous system (CNS) and the periphery that regulates brain homeostasis. Increasing evidence of neurological disorders and the restricted drug delivery process in the brain make the BBB a special target for further study. At present, the neurovascular unit (NVU) is of great interest to pharmaceutical companies for CNS drug design and delivery approaches. Recent advances in pharmacology and computational biology make it convenient to develop drugs within limited time and at affordable cost. In this review, we briefly introduce the current understanding of the NVU, including its molecular and cellular composition, physiology, and regulatory function. We also discuss recent technology and the interaction of pharmacogenomics and bioinformatics for drug design, and steps towards personalized medicine. Additionally, we develop a gene network to understand NVU-associated transporter protein interactions, which might be effective for understanding the aetiology of neurological disorders and for the development and delivery of new target-based protective therapies.

  6. Major design issues of molten carbonate fuel cell power generation unit

    Energy Technology Data Exchange (ETDEWEB)

    Chen, T.P.

    1996-04-01

    In addition to the stack, a fuel cell power generation unit requires fuel desulfurization and reforming, fuel and oxidant preheating, process heat removal, waste heat recovery, steam generation, oxidant supply, power conditioning, water supply and treatment, purge gas supply, instrument air supply, and system control. These support facilities add considerable cost and system complexity. Bechtel, as a system integrator of M-C Power's molten carbonate fuel cell development team, has spent substantial effort to simplify and minimize these supporting facilities to meet cost and reliability goals for commercialization. Similar to other fuel cells, the MCFC faces the design challenge of complying with codes and standards and achieving high efficiency and part-load performance, while minimizing utility requirements, weight, plot area, and cost. However, the MCFC has several unique design issues due to its high operating temperature, its use of molten electrolyte, and the requirement of CO2 recycle.

  8. History of forest survey sampling designs in the United States

    Science.gov (United States)

    W. E. Frayer; George M. Furnival

    2000-01-01

    Extensive forest inventories of forested lands in the United States were begun in the early part of the 20th century, but widespread, frequent use was not common until after WWII. Throughout the development of inventory techniques and their application to assess the status of the nation's forests, most of the work has been done by the USDA Forest Service through...

  9. The Lean Design of Manufacturing Process

    Directory of Open Access Journals (Sweden)

    Dana Strachotová

    2008-12-01

    Full Text Available This article addresses the use of the Six Sigma methodology: a breakthrough strategy to significantly improve customer satisfaction and shareholder value by reducing variability in every aspect of a business. It enhances the ability to deliver customer satisfaction and cost-improvement results faster, within months from the start, and sustains the rate of improvement ongoing. One of the most powerful ways to improve business performance is combining business process management (BPM) strategies with Six Sigma strategies. BPM strategies emphasize process improvements and automation to drive performance, while Six Sigma uses statistical analysis to drive quality improvements. The two strategies are not mutually exclusive, however, and some savvy companies have discovered that combining BPM and Six Sigma can create dramatic results. Six Sigma methodology teaches and deploys hard skills and business practices emphasizing...

  10. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
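The force evaluation being accelerated here is the classic direct-summation O(N²) pairwise loop; below is a minimal pure-Python sketch (with a conventional softening parameter `eps`, not taken from the paper) of the computation that the CUDA kernel parallelizes:

```python
# Direct O(N^2) gravitational force evaluation -- the kernel that
# GPU N-body codes parallelize. Pure-Python reference sketch; the
# softening term eps**2 avoids the singularity at zero separation.
def accelerations(pos, mass, G=1.0, eps=1e-3):
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc
```

On a GPU, each body i is typically assigned to one thread and the inner j loop is tiled through shared memory; the version above is only a readable reference for the arithmetic being parallelized.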

  12. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    Science.gov (United States)

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22x speedup depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  13. Detailed design and first tests of the application software for the instrument control unit of Euclid-NISP

    Science.gov (United States)

    Ligori, S.; Corcione, L.; Capobianco, V.; Bonino, D.; Sirri, G.; Fornari, F.; Giacomini, F.; Patrizii, L.; Valenziano, L.; Travaglini, R.; Colodro, C.; Bortoletto, F.; Bonoli, C.; Chiarusi, T.; Margiotta, A.; Mauri, N.; Pasqualini, L.; Spurio, M.; Tenti, M.; Dal Corso, F.; Dusini, S.; Laudisio, F.; Sirignano, C.; Stanco, L.; Ventura, S.; Auricchio, N.; Balestra, A.; Franceschi, E.; Morgante, G.; Trifoglio, M.; Medinaceli, E.; Guizzo, G. P.; Debei, S.; Stephen, J. B.

    2016-07-01

    In this paper we describe the detailed design of the application software (ASW) of the instrument control unit (ICU) of NISP, the Near-Infrared Spectro-Photometer of the Euclid mission. This software is based on a real-time operating system (RTEMS) and will interface with all the subunits of NISP, as well as the command and data management unit (CDMU) of the spacecraft for telecommand and housekeeping management. We briefly review the main requirements driving the design and the architecture of the software that is approaching the Critical Design Review level. The interaction with the data processing unit (DPU), which is the intelligent subunit controlling the detector system, is described in detail, as well as the concept for the implementation of the failure detection, isolation and recovery (FDIR) algorithms. The first version of the software is under development on a Breadboard model produced by AIRBUS/CRISA. We describe the results of the tests and the main performances and budgets.

  14. A systems-based approach for integrated design of materials, products and design process chains

    Science.gov (United States)

    Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh

    2007-12-01

    The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by

  15. Advanced Development Waste Processing Unit for Combat Vehicles. Phase 2

    Science.gov (United States)

    1987-12-29

    designed. The Waste Disposal Port (WDP) was designed so as to permit waste to be placed into the WPU directly from the Pacto toilet currently in the CCPV...designed to fit within the confines of the waste storage compartment under the Pacto toilet in the CCPV. A WDP was incorporated in the top of the WPU

  16. Bioreactor and process design for biohydrogen production.

    Science.gov (United States)

    Show, Kuan-Yeow; Lee, Duu-Jong; Chang, Jo-Shu

    2011-09-01

    Biohydrogen is regarded as an attractive future clean energy carrier due to its high energy content and environmentally friendly conversion. As a renewable biofuel, it has the potential to replace current hydrogen production, which relies heavily on fossil fuels. While biohydrogen production is still in the early stage of development, a variety of laboratory- and pilot-scale systems with promising potential have been developed. This work presents a review of advances in bioreactor and bioprocess design for biohydrogen production. The state of the art of biohydrogen production is discussed, with emphasis on production pathways, factors affecting biohydrogen production, and bioreactor configuration and operation. Challenges and prospects of biohydrogen production are also outlined.

  17. Simulation-enhanced lean design process

    Directory of Open Access Journals (Sweden)

    Jon H. Marvel

    2009-07-01

    Full Text Available A traditional lean transformation process does not validate the future state before implementation, relying instead on a series of iterations to modify the system until performance is satisfactory. An enhanced lean process that includes future state validation before implementation is presented. Simulation modeling and experimentation is proposed as the primary validation tool. Simulation modeling and experimentation extends value stream mapping to include time, the behavior of individual entities, structural variability, random variability, and component interaction effects. Experiments can be conducted to analyze the model and draw conclusions about whether the lean transformation effectively addresses the current state gap. Industrial applications of the enhanced lean process show its effectiveness.
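A minimal sketch of the kind of simulation model proposed here, assuming a single workstation with exponentially distributed arrivals and service times (the function name and parameters are illustrative, not from the article):

```python
import random

def simulate_station(n_jobs, mean_interarrival, mean_service, seed=0):
    """Single-server FIFO queue: returns the average job flow time.
    Captures the random variability and queueing interactions that a
    static value stream map cannot represent."""
    rng = random.Random(seed)
    t_arrive = 0.0
    server_free = 0.0
    flow_times = []
    for _ in range(n_jobs):
        t_arrive += rng.expovariate(1.0 / mean_interarrival)
        start = max(t_arrive, server_free)          # wait if server busy
        server_free = start + rng.expovariate(1.0 / mean_service)
        flow_times.append(server_free - t_arrive)   # queue wait + service
    return sum(flow_times) / n_jobs
```

Running such a model at the current-state and proposed future-state parameter settings, and comparing average flow times, is the validation step that value stream mapping alone does not provide.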

  18. 4D Design and Simulation Technologies and Process Design Patterns to Support Lean Construction Methods

    Institute of Scientific and Technical Information of China (English)

    Manfred Breit; Manfred Vogel; Fritz Häubi; Fabian Märki; Micheal Raps

    2008-01-01

    The objective of this ongoing joint research program is to determine how 3D/4D modeling, simulation and visualization of Products (buildings), Organizations and Processes (POP) can support lean construction. Initial findings suggest that Process Design Patterns may have the potential to intuitively support ICT-based lean construction. We initiated a "Process Archeology" in order to reveal the requirements for tools that can support the planning, simulation and control of lean construction methods. First findings show that existing tools provide only limited support, and therefore we started to develop new methodologies and technologies to overcome these shortcomings. Through the introduction of Process Design Patterns, we intend to establish process thinking in the interdisciplinary POP design. Optimized construction processes may be synthesized with semi-automatic methods by applying Process Design Patterns to building structures. By providing process templates that integrate problem solutions and expert knowledge, Process Design Patterns may have the potential to ensure high-quality process models.

  19. Systematic Integrated Process Design and Control of Binary Element Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2016-01-01

    In this work, integrated process design and control of reactive distillation processes is considered through a computer-aided framework. First, a set of simple design methods for reactive distillation columns that are similar in concept to non-reactive distillation design methods are extended. ... It is shown that the same design-control principles that apply to a non-reacting binary system of compounds are also valid for a reactive binary system of elements for distillation columns. Application of this framework shows that designing the reactive distillation process at the maximum driving force ... results in a feasible and reliable design of the process as well as the controller structure. ...

  20. Design of voice coil motor dynamic focusing unit for a laser scanner

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Moon G.; Kim, Gaeun; Lee, Chan-Woo; Lee, Soo-Hun; Jeon, Yongho, E-mail: princaps@ajou.ac.kr [Department of Mechanical Engineering, Ajou University, San 5, Woncheon-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-749 (Korea, Republic of)

    2014-04-15

    Laser scanning systems have been used for material processing tasks such as welding, cutting, marking, and drilling. However, applications have been limited by the small range of motion and slow speed of the focusing unit, which carries the focusing optics. To overcome these limitations, a dynamic focusing system with a long travel range and high speed is needed. In this study, a dynamic focusing unit for a laser scanning system with a voice coil motor (VCM) mechanism is proposed to enable fast speed and a wide focusing range. The VCM has finer precision and higher speed than conventional step motors and a longer travel range than earlier lead zirconium titanate actuators. The system has a hollow configuration to provide a laser beam path. This also makes it compact and transmission-free and gives it low inertia. The VCM's magnetics are modeled using a permeance model. Its design parameters are determined by optimization using the Broyden–Fletcher–Goldfarb–Shanno method and a sequential quadratic programming algorithm. After the VCM is designed, the dynamic focusing unit is fabricated and assembled. The permeance model is verified by a magnetic finite element method simulation tool, Maxwell 2D and 3D, and by measurement data from a gauss meter. The performance is verified experimentally. The results show a resolution of 0.2 μm and travel range of 16 mm. These are better than those of conventional focusing systems; therefore, this focusing unit can be applied to laser scanning systems for good machining capability.
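The paper's parameter optimization uses BFGS and SQP on a permeance model; as a simplified stand-in, here is a numerical-gradient descent on a hypothetical two-parameter quadratic cost (the objective, its coefficients, and the nominal design point are all invented for illustration and are not the paper's model):

```python
def num_grad(f, x, h=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def descend(f, x0, lr=0.1, steps=500):
    """Plain gradient descent; a stand-in for BFGS/SQP in real design work."""
    x = list(x0)
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, num_grad(f, x))]
    return x

def objective(p):
    # Hypothetical cost around a nominal VCM design point
    # (coil radius in mm, number of turns) -- illustrative only.
    radius, turns = p
    return (radius - 4.0) ** 2 + 0.5 * (turns - 120.0) ** 2
```

Quasi-Newton methods like BFGS build a curvature estimate from successive gradients and converge in far fewer iterations; the sketch above only shows the basic gradient-driven parameter search that such optimizers refine.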

  1. Using real time process measurements to reduce catheter related bloodstream infections in the intensive care unit

    Science.gov (United States)

    Wall, R; Ely, E; Elasy, T; Dittus, R; Foss, J; Wilkerson, K; Speroff, T

    2005-01-01

    

Problem: Measuring a process of care in real time is essential for continuous quality improvement (CQI). Our inability to measure the process of central venous catheter (CVC) care in real time prevented CQI efforts aimed at reducing catheter related bloodstream infections (CR-BSIs) from these devices. Design: A system was developed for measuring the process of CVC care in real time. We used these new process measurements to continuously monitor the system, guide CQI activities, and deliver performance feedback to providers. Setting: Adult medical intensive care unit (MICU). Key measures for improvement: Measured process of CVC care in real time; CR-BSI rate and time between CR-BSI events; and performance feedback to staff. Strategies for change: An interdisciplinary team developed a standardized, user friendly nursing checklist for CVC insertion. Infection control practitioners scanned the completed checklists into a computerized database, thereby generating real time measurements for the process of CVC insertion. Armed with these new process measurements, the team optimized the impact of a multifaceted intervention aimed at reducing CR-BSIs. Effects of change: The new checklist immediately provided real time measurements for the process of CVC insertion. These process measures allowed the team to directly monitor adherence to evidence-based guidelines. Through continuous process measurement, the team successfully overcame barriers to change, reduced the CR-BSI rate, and improved patient safety. Two years after the introduction of the checklist the CR-BSI rate remained at a historic low. Lessons learnt: Measuring the process of CVC care in real time is feasible in the ICU. When trying to improve care, real time process measurements are an excellent tool for overcoming barriers to change and enhancing the sustainability of efforts. To continually improve patient safety, healthcare organizations should continually measure their key clinical processes in real
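The arithmetic behind such real-time process measures is straightforward; a small sketch, with hypothetical checklist item names, of computing an adherence rate from scanned checklists and the time between infection events:

```python
def adherence_rate(checklists, items):
    """Fraction of checklist items marked compliant across all insertions.
    Each checklist is a dict mapping item name -> bool."""
    done = sum(1 for c in checklists for it in items if c.get(it))
    return done / (len(checklists) * len(items))

def days_between_events(event_days):
    """Intervals between successive infection events, given event times
    as days since the start of monitoring."""
    return [b - a for a, b in zip(event_days, event_days[1:])]
```

Time between events is a useful companion to the raw infection rate for rare events: a lengthening interval signals improvement well before a rate computed over a fixed window does.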

  2. Design of a process template for amine synthesis

    DEFF Research Database (Denmark)

    Singh, Ravendra; Godfrey, Andy; Gregertsen, Björn;

    A conceptual nitro reduction process template has been designed that should be generic, such that it can handle a series of substrates with similar molecular functionality. The reduction process is based on a continuous plug-flow slurry reactor. The process template aims at speeding up the process...

  3. Optimum Structural Design of a Chain-Driving Pumping Unit

    Institute of Scientific and Technical Information of China (English)

    Zhang Ailin; Liu Yang

    1994-01-01

    Operating Principle and Essential Parameters: In this paper the pumping unit of type QLCJ14-6 is studied. Through the belt drive, the motor drives the driving sprocket, whose rotation rate has been reduced by the reduction gearbox. The locus chain moves between the driving sprocket and the upper sprocket, which are vertically set. There is a special chain element in the locus chain, which drives the reciprocating holster through the main shaft linchpin and slide block. The reciprocating holster can only move up and down when the locus chain moves in a circle. In this way the up and down stroke of the sucker rod and the machine is realized. The lower end of the reciprocating holster is connected with the equilibrium system to balance the structure. The balancing cylinder is replaced by the balancing block to simplify the structure.

  4. Sociotechnical design processes and working environment: The case of a continuous process wok

    DEFF Research Database (Denmark)

    Broberg, Ole

    2000-01-01

    A five-year design process of a continuous process wok has been studied with the aim of elucidating the conditions for integrating working environment aspects. The design process is seen as a network building activity and as a social shaping process of the artefact. A working environment log...

  7. Experiences with self designed pyrolyses unit by utilization of various type of fuels

    Energy Technology Data Exchange (ETDEWEB)

    Juchelkova, Dagmar; Roubicek, Vaclav; Mikulova, Zuzana [VSB - Technische Univ. Ostrava (Czech Republic). Energieinst.; Smelik, Roman; Balco, Mario [Arrowline, a.s. (Czech Republic)

    2008-07-01

    According to the situation in the Czech Republic, where only 3 municipal waste combustion units and about 50 small industrial waste incineration units exist, it seems necessary to design an alternative to the combustion process. Pyrolysis is an established process that can potentially be used to convert polymer-based materials of different types, since a high yield in the separation is not necessary. Pyrolysis is thermal degradation (without oxygen) that produces a char, oil and gas, all of which have potential as useful end products. The Czech ministry of the environment seems to show greater acceptance of material recycling than of combustion processes, and the concept of pyrolysis is today understood as material recycling. Our work concerns the selected materials. Nowadays, polymer-based materials provide a fundamental contribution to all main daily activities (agriculture, the automobile industry, packaging and so on). Due to their excellent properties they are now irreplaceable and absolutely necessary for people's lives, and their production and use are increasing sharply. On the other hand, they are not quickly decomposed, and the disposal of used plastics has become a serious problem. (orig.)

  8. Intermediary object for participative design processes based on the ergonomic work analysis

    DEFF Research Database (Denmark)

    Souza da Conceição, Carolina; Duarte, F.; Broberg, Ole

    2012-01-01

    The objective of this paper is to present and discuss the use of an intermediary object, built from the ergonomic work analysis, in a participative design process. The object was a zoning pattern, developed as a visual representation ‘mapping’ of the interrelations among the functional units of t...

  9. Representing the Learning Design of Units of Learning

    National Research Council Canada - National Science Library

    Rob Koper; Bill Olivier

    2004-01-01

      In order to capture current educational practices in eLearning courses, more advanced 'learning design' capabilities are needed than are provided by the open eLearning specifications hitherto available...

  10. METHODS FOR MEASURING CUSTOMER SATISFACTION IN THE DESIGN PROCESS

    Directory of Open Access Journals (Sweden)

    Mirko Đapic

    2007-09-01

    Full Text Available In the design process, the designer makes decisions under conditions of uncertainty, contradiction and ignorance. Are these decisions correct, and to what extent? How much do they influence customer satisfaction? These are only some of the questions designers face all the time, and these dilemmas appear most in the early phases of the design process. The explicit objective of this paper is to improve the design decision-making process such that, at the moment a designer makes a decision, he or she can see the effect of that decision on customer satisfaction. To solve the above problems, the paper describes a method that enables integrated use of the axiomatic approach to design and the Taguchi method of robust design. This approach implies modelling the development process as an evidence-reasoning network based on uncertain evidence described via the belief functions of Dempster-Shafer theory. The paper starts with the basic concepts of belief functions and valuation-based systems, or evidential reasoning systems, for representation and reasoning under uncertainty. After that we introduce a coefficient of relative decrease of uncertainty in the design process and a new graphical representation of the system architecture: the evidence network. In the end we present a method for measuring customer satisfaction in the design process under uncertainty conditions using evidence networks.
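The Dempster-Shafer machinery the paper builds on can be illustrated by Dempster's rule of combination for two mass functions over a frame of discernment (a generic textbook sketch, not the paper's evidence network):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination. m1 and m2 map focal elements
    (frozensets of hypotheses) to mass; mass assigned to conflicting
    pairs (empty intersection) is discarded and the rest renormalized."""
    raw = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict  # remaining (non-conflicting) mass
    return {s: v / k for s, v in raw.items()}
```

The renormalization by 1 - k, where k is the conflicting mass, behaves poorly under high conflict, which is one reason evidence-network formulations track where each mass assignment originates.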

  11. New Vistas in Chemical Product and Process Design.

    Science.gov (United States)

    Zhang, Lei; Babi, Deenesh K; Gani, Rafiqul

    2016-06-07

    Design of chemicals-based products is broadly classified into those that are process centered and those that are product centered. In this article, the designs of both classes of products are reviewed from a process systems point of view; developments related to the design of the chemical product, its corresponding process, and its integration are highlighted. Although significant advances have been made in the development of systematic model-based techniques for process design (also for optimization, operation, and control), much work is needed to reach the same level for product design. Timeline diagrams illustrating key contributions in product design, process design, and integrated product-process design are presented. The search for novel, innovative, and sustainable solutions must be matched by consideration of issues related to the multidisciplinary nature of problems, the lack of data needed for model development, solution strategies that incorporate multiscale options, and reliability versus predictive power. The need for an integrated model-experiment-based design approach is discussed together with benefits of employing a systematic computer-aided framework with built-in design templates.

  12. Biomechanical microsystems design, processing and applications

    CERN Document Server

    Ostasevicius, Vytautas; Palevicius, Arvydas; Gaidys, Rimvydas; Jurenas, Vytautas

    2017-01-01

    This book presents the most important aspects of analysis of dynamical processes taking place on the human body surface. It provides an overview of the major devices that act as a prevention measure to boost a person‘s motivation for physical activity. A short overview of the most popular MEMS sensors for biomedical applications is given. The development and validation of a multi-level computational model that combines mathematical models of an accelerometer and reduced human body surface tissue is presented. Subsequently, results of finite element analysis are used together with experimental data to evaluate rheological properties of not only human skin but skeletal joints as well. Methodology of development of MOEMS displacement-pressure sensor and adaptation for real-time biological information monitoring, namely “ex vivo” and “in vitro” blood pulse type analysis, is described. Fundamental and conciliatory investigations, achieved knowledge and scientific experience about biologically adaptive mu...

  13. DESIGNS FOR MIXTURE AND PROCESS VARIABLES APPLIED IN TABLET FORMULATIONS

    NARCIS (Netherlands)

    DUINEVELD, CAA; SMILDE, AK; DOORNBOS, DA

    1993-01-01

    Although there are several methods for the construction of a design for process variables and mixture variables, there are not very many methods which are suitable to combine mixture and process variables in one design. Some of the methods which are feasible will be shown. These methods will be
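One straightforward construction for combining the two variable types, sketched here for illustration (it is not one of the specific methods compared in the paper), is to cross a simplex-lattice mixture design with a two-level full factorial in the process variables:

```python
from itertools import product

def simplex_lattice(q, m):
    """{q, m} simplex-lattice: all q-component mixtures whose
    proportions are multiples of 1/m and sum to 1."""
    pts = []
    def rec(prefix, remaining):
        if len(prefix) == q - 1:
            pts.append(prefix + [remaining])
            return
        for k in range(remaining + 1):
            rec(prefix + [k], remaining - k)
    rec([], m)
    return [[k / m for k in p] for p in pts]

def crossed_design(q, m, n_process):
    """Cross each mixture point with a 2-level (-1/+1) full factorial
    in n_process process variables."""
    factorial = list(product([-1, 1], repeat=n_process))
    return [(mix, proc) for mix in simplex_lattice(q, m) for proc in factorial]
```

Crossing the designs multiplies the run counts, which is exactly why the reduced combined designs discussed in such papers matter: a {3, 2} lattice crossed with two process factors already needs 6 x 4 = 24 tablet formulations.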

  15. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. Data comprised internal documentation of the case company, literature, and semi-structured, themed interviews with process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  16. Solid propellant processing factor in rocket motor design

    Science.gov (United States)

    1971-01-01

    The ways in which propellant processing is affected by choices made in designing rocket engines are described. Tradeoff studies, design proof or scaleup studies, and special design features required to obtain high product quality and optimum processing costs are presented. Processing is considered to include the operational steps involved with the lining and preparation of the motor case for the grain; the procurement of propellant raw materials; and propellant mixing, casting or extrusion, curing, machining, and finishing. The design criteria, recommended practices, and propellant formulations are included.

  17. Programming-Free Form Conversion, Design, and Processing

    OpenAIRE

    Fan, Ting-Jun; Machlin, Rona S.; Wang, Christopher P.; Chang, Ifay F.

    1990-01-01

    In this paper, we present the requirements and design considerations for programming-free form conversion, design, and processing. A set of object-oriented software tools are also presented to help users convert a paper form into an electronic form, design an electronic form, and fill in an electronic form directly on screen.

  18. The Use of Computer Graphics in the Design Process.

    Science.gov (United States)

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  19. Making design representations as catalysts for reflective making in a collaborative design research process.

    Directory of Open Access Journals (Sweden)

    Jessica Schoffelen

    2013-12-01

    Full Text Available The role of making may seem self-evident in a design context. However, in developing an educational design research course at the [institute name], we experienced that when design and research are intertwined, students tend to lose their focus on making. Therefore, this paper reflects on a research trajectory that explores how to support students in intertwining making and reflecting throughout the design research process. During this trajectory, we redeveloped design research methods making use of design representations – representations of design, i.e. field studies, insights, experiments, prototypes, and so on – as a means to connect making and reflecting throughout the design process. Design representations have informing and inspiring qualities and are made by designers to open up their design process and to enable communication, collaboration and reflection with others throughout the making process. We will argue that combining design representations with structuring rules of play in a design research method, and using them throughout the whole design process, can improve collaborative reflection-in-action (Schön, 1983), or reflection-in-making, since it allows students to work in a more iterative manner. We describe how we, in eight case studies, recreated and evaluated a design research method making use of design representations and structuring rules of play.

  20. Integrating chemical engineering fundamentals in the capstone process design project

    DEFF Research Database (Denmark)

    von Solms, Nicolas; Woodley, John; Johnsson, Jan Erik

    2010-01-01

    All B.Eng. courses offered at the Technical University of Denmark (DTU) must now follow CDIO standards. The final “capstone” course in the B.Eng. education is Process Design, which for many years has been typical of chemical engineering curricula worldwide. The course at DTU typically has about 30...... of the CDIO standards – especially standard 3 – Integrated Curriculum - means that the course projects must draw on competences provided in other subjects which the students are taking in parallel with Process Design – specifically Process Control and Reaction Engineering. In each semester of the B.......Eng. education, one course is designated the “project” course, which should draw on material learned in parallel courses. In the 6th semester, Process Design is the project course. Process Control and Reaction Engineering are then incorporated into the final plant design project. Specifically, almost all...

  1. Innovation Design of Persimmon Processing Equipment Driven by Future Scenarios

    Science.gov (United States)

    Duan, Xiao-fei; Su, Xiu-juan; Guan, Lei; Zhang, Wei-she

    2017-07-01

This article discusses methods of innovation driven by future-scenario design, to help designers work more effectively on persimmon processing machinery. By analyzing the traditional persimmon processing process, conceiving future persimmon processing scenarios, and applying UXD and a morphological matrix, comprehensive function schemes are obtained. The schemes that best match the future scenarios are then selected, as illustrated by the schematic design of the rotary-light dried-persimmon processing machinery. The results show that it is feasible and effective to conduct scenario design research, construct reasonable future scenarios, and combine them with function analysis methods for product innovation and development.

  2. XML-based product information processing method for product design

    Science.gov (United States)

    Zhang, Zhen Yu

    2012-01-01

Design of modern mechatronic products is knowledge-intensive engineering centered on information processing; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of mechatronic product design knowledge and the features of information management, a unified XML-based product information processing method is proposed. The information processing model of product design covers functional knowledge, structural knowledge and their relationships, and XML-based representations are proposed for product function elements, product structure elements, and the mapping between function and structure. The information processing of a parallel friction roller is given as an example, demonstrating that this method is helpful for knowledge-based design systems and product innovation.
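
A unified XML representation of function elements, structure elements and their mappings could look like the following sketch; the tag names and schema here are hypothetical illustrations, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal schema: function elements, structure elements,
# and mappings between them (tag and attribute names are illustrative).
doc = ET.fromstring("""
<product name="friction_roller">
  <functions><function id="f1">transmit torque</function></functions>
  <structures><structure id="s1">roller shaft</structure></structures>
  <mappings><map function="f1" structure="s1"/></mappings>
</product>
""")

func = {f.get("id"): f.text for f in doc.iter("function")}
struct = {s.get("id"): s.text for s in doc.iter("structure")}
links = [(m.get("function"), m.get("structure")) for m in doc.iter("map")]
for f_id, s_id in links:
    print(f"{func[f_id]} -> {struct[s_id]}")
```

The point of such a model is that function-structure mappings become machine-readable, so a knowledge-based design system can query them directly.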

  3. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes, was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in kinetics of different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
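
The reported half-lives translate directly into first-order rate constants via k = ln(2) / t1/2; a small sketch, using only the half-lives quoted in this abstract, estimates the fraction of each parent compound remaining after a chosen residence time:

```python
import math

# First-order decay: C(t) = C0 * exp(-k * t), with k = ln(2) / t_half.
# Half-lives (hours) are those quoted in the abstract.
half_lives_h = {
    "abacavir (photo)": 1.6,
    "zidovudine (photo)": 7.6,
    "emtricitabine (photo)": 25.0,
    "acyclovir (bio)": 74.0,
    "lamivudine (bio)": 120.0,
}

def fraction_remaining(t_half_h, t_h):
    """Fraction of the parent compound left after t_h hours."""
    k = math.log(2) / t_half_h
    return math.exp(-k * t_h)

for name, t_half in sorted(half_lives_h.items(), key=lambda kv: kv[1]):
    print(f"{name}: {fraction_remaining(t_half, 48.0):.3f} remaining after 48 h")
```

The 48 h residence time is an arbitrary illustration, not a value from the study.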

  4. Random Designs for Estimating Integrals of Stochastic Processes

    OpenAIRE

    Schoenfelder, Carol; Cambanis, Stamatis

    1982-01-01

    The integral of a second-order stochastic process $Z$ over a $d$-dimensional domain is estimated by a weighted linear combination of observations of $Z$ in a random design. The design sample points are possibly dependent random variables and are independent of the process $Z$, which may be nonstationary. Necessary and sufficient conditions are obtained for the mean squared error of a random design estimator to converge to zero as the sample size increases towards infinity. Simple random, stra...
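
In its simplest form, the estimator described here reduces to an equally weighted average of observations at i.i.d. uniform design points; a minimal sketch (with a deterministic integrand standing in for a sample path of the process, and the domain taken as [0, 1]):

```python
import random

def estimate_integral(z, n, seed=0):
    """Simple random design: equally weighted average of n observations
    of z at i.i.d. uniform points on [0, 1]."""
    rng = random.Random(seed)
    return sum(z(rng.random()) for _ in range(n)) / n

# Deterministic integrand standing in for one sample path of the process Z.
est = estimate_integral(lambda x: x * x, 100_000)
print(est)  # should be close to the true value 1/3
```

The mean squared error of such an estimator shrinks as n grows, which is the convergence behaviour the paper characterizes.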

  5. A methodology for integrating sustainability considerations into process design

    OpenAIRE

    Azapagic, A.; Millington, A.; Collett, A

    2006-01-01

    Designing more sustainable processes is one of the key challenges for sustainable development of the chemical industry. This is by no means a trivial task as it requires translating the theoretical principles of sustainable development into design practice. At present, there is no general methodology to guide sustainable process design and almost no practical experience. In an attempt to contribute to this emerging area, this paper proposes a new methodology for integrating sustainability con...

  6. The design process seen through the eyes of a type designer

    DEFF Research Database (Denmark)

    Beier, Sofie

    2015-01-01

    To understand how the design process works, the paper takes the outset in the work of one of the first innovating type designers: the English printer and typefounder John Baskerville (1706-1775). By comparing his way of working with a model for a contemporary design process, the paper reflects up...

  7. Implications of Building Information Modeling on Interior Design Education: The Impact on Teaching Design Processes

    Directory of Open Access Journals (Sweden)

    Amy Roehl, MFA

    2013-06-01

Full Text Available Currently, major shifts are occurring in design processes, affecting business practices for industries involved with designing and delivering the built environment. These changing conditions are a direct result of industry adoption of a relatively new technology called BIM, or Building Information Modeling. This review of literature examines the implications of these changing processes for interior design education.

  9. Robust design of binary countercurrent adsorption separation processes

    Energy Technology Data Exchange (ETDEWEB)

    Storti, G. (Univ. degli Studi di Padova (Italy)); Mazzotti, M.; Morbidelli, M.; Carra, S. (Piazza Leonardo da Vinci, Milano (Italy))

    1993-03-01

The separation of a binary mixture, using a third component having intermediate adsorptivity as desorbent, in a four-section countercurrent adsorption separation unit is considered. A procedure for the optimal and robust design of the unit is developed within the framework of Equilibrium Theory, using a model where the adsorption equilibria are described through the constant-selectivity stoichiometric model, while mass-transfer resistances and axial mixing are neglected. By requiring that the unit achieve complete separation, it is possible to identify a set of implicit constraints on the operating parameters, that is, the flow rate ratios in the four sections of the unit. From these constraints, explicit bounds on the operating parameters are obtained, thus yielding a region in the operating parameter space which can be drawn a priori in terms of the adsorption equilibrium constants and the feed composition. This result provides a very convenient tool for determining both optimal and robust operating conditions. The latter issue is addressed by first analyzing the various possible sources of disturbances, as well as their effect on the separation performance. Next, the criteria for the robust design of the unit are discussed. Finally, these theoretical findings are compared with a set of experimental results obtained in a six-port simulated moving bed adsorption separation unit operated in the vapor phase.
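
For the simpler linear-isotherm limit, the explicit bounds on the four flow-rate ratios take the well-known "triangle theory" form; the sketch below checks a candidate operating point against such bounds. Note this is only an illustrative analogue: the paper itself uses a constant-selectivity stoichiometric model, and the Henry constants and operating point here are hypothetical.

```python
def complete_separation(m, H_A, H_B):
    """Equilibrium Theory ('triangle theory') bounds on the four flow-rate
    ratios m1..m4 for complete separation, in the linear-isotherm limit.
    A is the more-retained species: H_A > H_B."""
    m1, m2, m3, m4 = m
    return m1 > H_A and H_B < m2 <= m3 < H_A and m4 < H_B

# Hypothetical Henry constants and operating point, purely for illustration.
print(complete_separation([2.5, 1.2, 1.8, 0.8], H_A=2.0, H_B=1.0))  # True
```

Because the bounds depend only on equilibrium constants and feed composition, the admissible region can be drawn before any detailed simulation, which is what makes it useful for robust design.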

  10. Dynamics and design of a power unit with a hydraulic piston actuator

    Science.gov (United States)

    Misyurin, S. Yu.; Kreinin, G. V.

    2016-07-01

    The problem of the preselection of parameters of a power unit of a mechatronic complex on the basis of the condition for providing a required control energy has been discussed. The design of the unit is based on analysis of its dynamics under the effect of a special-type test conditional control signal. The specific features of the approach used are a reasonably simplified normalized dynamic model of the unit and the formation of basic similarity criteria. Methods of designing a power unit with a hydraulic piston actuator that operates in point-to-point and oscillatory modes have been considered.

  11. Defining process design space for monoclonal antibody cell culture.

    Science.gov (United States)

    Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A

    2010-08-15

    The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
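
A central composite design of the kind used in this characterization can be generated mechanically from factorial, axial and centre points; a minimal sketch in coded units (the factor count and centre-point replicates are illustrative, not those of the study):

```python
from itertools import product

def central_composite(k, alpha=1.0, n_center=3):
    """Face-centred CCD in coded units: 2^k factorial points,
    2k axial (star) points at +/-alpha, plus centre replicates."""
    factorial = list(product([-1.0, 1.0], repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    center = [tuple([0.0] * k)] * n_center
    return factorial + axial + center

design = central_composite(2)
print(len(design))  # 4 factorial + 4 axial + 3 centre = 11 runs
```

The axial points are what let a CCD fit the quadratic terms needed to locate an edge of failure, which a two-level screening design alone cannot resolve.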

  12. Unit Operation Experiment Linking Classroom with Industrial Processing

    Science.gov (United States)

    Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon

    2013-01-01

    An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…

  13. Effect of energetic dissipation processes on the friction unit tribological

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

Full Text Available This article presents the influence of temperature on the rheological and friction coefficients of cast-iron friction-unit elements. It was found that the surface layer formed at the friction temperature has good wear resistance, resulting from structural hardening of the surface layer and its capacity for stress relaxation.

  14. Sociotechnical design processes and working environment: The case of a continuous process wok

    DEFF Research Database (Denmark)

    Broberg, Ole

    2000-01-01

    A five-year design process of a continuous process wok has been studied with the aim of elucidating the conditions for integrating working environment aspects. The design process is seen as a network building activity and as a social shaping process of the artefact. A working environment log...... is suggested as a tool designers can use to integrate considerations of future operators' working environment....

  15. Process-based design of dynamical biological systems

    Science.gov (United States)

    Tanevski, Jovan; Todorovski, Ljupčo; Džeroski, Sašo

    2016-09-01

The computational design of dynamical systems is an important emerging task in synthetic biology. Given desired properties of the behaviour of a dynamical system, the task of design is to build an in-silico model of a system whose simulated behaviour meets these properties. We introduce a new, process-based, design methodology for addressing this task. The new methodology combines a flexible process-based formalism for specifying the space of candidate designs with multi-objective optimization approaches for selecting the most appropriate among these candidates. We demonstrate that the methodology is general enough to both formulate and solve tasks of designing deterministic and stochastic systems, successfully reproducing plausible designs reported in previous studies and proposing new designs that meet the design criteria, but have not been previously considered.

  16. Sustainable Process Design of Biofuels: Bioethanol Production from Cassava rhizome

    DEFF Research Database (Denmark)

    Mangnimit, S.; Malakul, P.; Gani, Rafiqul

    2013-01-01

This study is focused on the sustainable process design of bioethanol production from cassava rhizome. The study includes: process simulation, sustainability analysis, economic evaluation and life cycle assessment (LCA). A steady-state process simulation is performed to generate a base case design...... of the bioethanol conversion process using cassava rhizome as a feedstock. The sustainability analysis is performed to analyze the relevant indicators in sustainability metrics and to define design/retrofit targets for process improvements. Economic analysis is performed to evaluate the profitability of the process........ Also, simultaneously with the sustainability analysis, the life cycle impact on the environment associated with bioethanol production is assessed. Finally, candidate alternative designs are generated and compared with the base case design in terms of LCA, economics, waste, energy usage and environmental impact...

  17. Method for innovative synthesis-design of chemical process flowsheets

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan; Gani, Rafiqul

of chemical processes, where chemical process flowsheets could be synthesized in the same way as atoms or groups of atoms are synthesized to form molecules in computer aided molecular design (CAMD) techniques [4]. That is, from a library of building blocks (functional process-groups) and a set of rules to join......, the implementation of the computer-aided process-group based flowsheet synthesis-design framework is presented together with an extended library of flowsheet property models to predict the environmental impact, safety factors, product recovery and purity, which are employed to screen the generated alternatives. Also...... flowsheet (the well-known Hydrodealkylation of toluene process) and another for a biochemical process flowsheet (production of ethanol from lignocellulose). In both cases, not only are the reported designs found and matched, but new innovative designs are also found, which is possible because...

  18. Design of Piston Air Compressor Unit Control System based Converter

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

Based on the running characteristics and high energy consumption of air compressors in coal mines, an air-pressure PID closed-loop control system has been designed in this paper. The system is composed of a PLC, a frequency converter, sensors, etc., and adopts the converter triple-evaporator control method, which makes the air supply "need-based". The designed system has been applied in multiple coal mines, and the results show that its energy savings are remarkable and that it has wide application potential.
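
A discrete PID pressure loop of the kind described can be sketched in a few lines; the plant model, setpoint and gains below are purely illustrative stand-ins, not the coal-mine system:

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    state is (integral, previous_error); returns (output, new_state)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order "receiver pressure" plant driven toward a 0.7 MPa setpoint.
setpoint, pressure, state = 0.7, 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(setpoint - pressure, state, kp=2.0, ki=1.0, kd=0.05, dt=0.1)
    pressure += 0.1 * (u - pressure)  # crude stand-in for tank dynamics
print(round(pressure, 2))
```

In the real system the controller output would drive the converter's motor speed rather than a toy plant; the integral term is what removes the steady-state pressure error.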

  19. Research and development of process innovation design oriented web-based process case base system

    Directory of Open Access Journals (Sweden)

    Guo Xin

    2015-01-01

Full Text Available Process innovation is very significant for an enterprise to lower cost, improve product quality and win competitive advantage. In order to inspire designers to achieve innovative designs, this paper proposes a Web-based process case base system model for process innovation design. Specifically, it constructs the system mainline through the realization of techniques and application flows, determines the system architecture by combining a process case base with cognition methods, and on this basis establishes links among principles, innovation approaches and process cases. The process case prototype system is built under the browser/server model, and five kinds of search modes are integrated: processing methods, processing focus, design depth, innovation approaches and a user-defined mode. The paper demonstrates the back-end realization and management methods of the case base, showcases the system interface, and demonstrates its effectiveness in process design with actual cases.

  20. Perspectives on the design of safer nanomaterials and manufacturing processes

    Energy Technology Data Exchange (ETDEWEB)

    Geraci, Charles [National Institute for Occupational Safety and Health (United States); Heidel, Donna [Bureau Veritas North America, Inc. (United States); Sayes, Christie [Baylor University (United States); Hodson, Laura, E-mail: lhodson@cdc.gov; Schulte, Paul; Eastlake, Adrienne [National Institute for Occupational Safety and Health (United States); Brenner, Sara [Colleges of Nanoscale Science and Engineering at State University of New York Polytechnic Institute, (SUNY Poly) (United States)

    2015-09-15

    A concerted effort is being made to insert Prevention through Design principles into discussions of sustainability, occupational safety and health, and green chemistry related to nanotechnology. Prevention through Design is a set of principles, which includes solutions to design out potential hazards in nanomanufacturing including the design of nanomaterials, and strategies to eliminate exposures and minimize risks that may be related to the manufacturing processes and equipment at various stages of the lifecycle of an engineered nanomaterial.

  1. Perspectives on the design of safer nanomaterials and manufacturing processes.

    Science.gov (United States)

    Geraci, Charles; Heidel, Donna; Sayes, Christie; Hodson, Laura; Schulte, Paul; Eastlake, Adrienne; Brenner, Sara

    2015-09-01

    A concerted effort is being made to insert Prevention through Design principles into discussions of sustainability, occupational safety and health, and green chemistry related to nanotechnology. Prevention through Design is a set of principles that includes solutions to design out potential hazards in nanomanufacturing including the design of nanomaterials, and strategies to eliminate exposures and minimize risks that may be related to the manufacturing processes and equipment at various stages of the lifecycle of an engineered nanomaterial.

  2. Model Driven Manufacturing Process Design and Managing Quality

    OpenAIRE

    Lundgren, Magnus; Hedlind, Mikael; Kjellberg, Torsten

    2016-01-01

Besides decisions in design, decisions made in process planning determine the conditions for manufacturing the right quality. Hence, systematic process planning is a key enabler for robust product realization from design through manufacturing. Current work methods for process planning and quality assurance lack efficient system integration. As a consequence, companies spend an unnecessary amount of non-value-adding time on managing quality. This paper presents a novel model-based approach to integrat...

  3. Process Model Construction and Optimization Using Statistical Experimental Design,

    Science.gov (United States)

    1988-04-01

Memo No. 88-442, March 1988. Process Model Construction and Optimization Using Statistical Experimental Design, by Emmanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  4. The uses and users of design process models

    OpenAIRE

    2016-01-01

The use of design process models is of great importance for developing better products; indeed, it is one of the factors that may differentiate the best companies from the rest. However, their adoption in companies is declining. Usefulness and usability issues may be responsible for process models not meeting the needs of their users. The goal of this research is to provide a deeper understanding of the needs of users of design process models. Three main perspectives are provided: (1) why organizati...

  5. Design and Fabrication of an Elastomeric Unit for Soft Modular Robots in Minimally Invasive Surgery.

    Science.gov (United States)

    De Falco, Iris; Gerboni, Giada; Cianchetti, Matteo; Menciassi, Arianna

    2015-11-14

In recent years, soft robotics technologies have aroused increasing interest in the medical field due to their intrinsically safe interaction in unstructured environments. At the same time, new procedures and techniques have been developed to reduce the invasiveness of surgical operations. Minimally Invasive Surgery (MIS) has been successfully employed for abdominal interventions; however, standard MIS procedures are mainly based on rigid or semi-rigid tools that limit the dexterity of the clinician. This paper presents a soft, highly dexterous manipulator for MIS. The manipulator was inspired by the biological capabilities of the octopus arm and is designed with a modular approach. Each module presents the same functional characteristics, thus achieving high dexterity and versatility when more modules are integrated. The paper details the design, the fabrication process and the materials necessary for the development of a single unit, which is fabricated by casting silicone inside specific molds. The result is an elastomeric cylinder including three flexible pneumatic actuators that enable elongation and omnidirectional bending of the unit. An external braided sheath improves the motion of the module. In the center of each module, a granular-jamming-based mechanism varies the stiffness of the structure during tasks. Tests demonstrate that the module is able to bend up to 120° and to elongate up to 66% of its initial length. The module generates a maximum force of 47 N, and its stiffness can increase by up to 36%.

  6. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current units' state is imperfectly monitored. The definition of the hazard rate process existing under perfect monitoring is extended to the realistic case where the units' failure times are not always detected (non-detection events). The observed hazard rate process so defined gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotony property of the process. This problem is discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  7. Framework for Managing the Very Large Scale Integration Design Process

    Directory of Open Access Journals (Sweden)

    Sabah Al-Fedaghi

    2012-01-01

Full Text Available Problem statement: The VLSI design cycle is described in terms of successive stages and substages; it starts with system specification and ends with packaging. At the next descriptive level, currently known methodologies (e.g., flowchart-based, object-oriented) lack a global conceptual representation suitable for managing the VLSI design process. Technical details are intermixed with tool-dependent and implementation issues such as control flow and data structure. It is important to fill the gap between these two levels of description because VLSI chip manufacturing is a complex management project, and providing a conceptually detailed depiction of the design process would assist in managing operations on the great number of generated artifacts. Approach: This study introduces a conceptual framework representing flows and transformations of various descriptions (e.g., circuits, technical sketches) to be used as a tracking apparatus for directing traffic during the VLSI design process. The proposed methodology views a description as an integral element of a process, called a flow system, constructed from six generic operations and designed to handle descriptions. It draws maps of flows of representations (called flowthings) that run through the design flow. These flowthings are created, transformed (processed), transferred, released and received by various functions along the design flow at different levels (a hierarchy). The resultant conceptual framework can be used to support designers with computer-aided tools to organize and manage chains of tasks. Results: The proposed model for managing the VLSI design process is characterized by being conceptual (containing no technical or implementation details) and can be uniformly applied at different levels of design and to various kinds of artifacts. The methodology is applied to describe the VLSI physical design stage, which includes partitioning, floorplanning and placement, routing, compaction and extraction.

  8. Systematic Integrated Process Design and Control of Binary Element Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2016-01-01

    In this work, integrated process design and control of reactive distillation processes is considered through a computer-aided framework. First, a set of simple design methods for reactive distillation column that are similar in concept to non-reactive distillation design methods are extended to d...

  9. Cognitive Design for Learning: Cognition and Emotion in the Design Process

    Science.gov (United States)

    Hasebrook, Joachim

    2016-01-01

We are so used to accepting new technologies as the driver of change and innovation in human-computer interfaces (HCI). In our research we focus on the development of innovations as a design process--or design, for short. We also refer to the entire process of creating innovations and putting them to use as "cognitive processes"--or…

  10. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    Science.gov (United States)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
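
The per-particle velocity and position updates that the paper maps onto GPU threads can be illustrated with a minimal sequential PSO; all parameter values below are conventional defaults, not those used in the paper:

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO; each particle's velocity/position update
    is independent of the others, which is what GPU-PSO parallelizes."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration weights
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

best, best_f = pso(lambda p: sum(c * c for c in p), dim=4)
print(best_f)  # near 0 for the sphere benchmark
```

On a GPU, the inner per-particle loop becomes one CUDA thread per particle (or per dimension), with only the global-best update requiring synchronization.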

  11. A Robust Process Analytical Technology (PAT) System Design for Crystallization Processes

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist

    2013-01-01

    for generation of the supersaturation setpoint for a supersaturation controller, a tool for design of a process monitoring and control system (also called Process Analytical Technology (PAT) system) as well as a tool for performing uncertainty and sensitivity analysis of the PAT system design. The uncertainty......A generic computer-aided framework for systematic design of a process monitoring and control system for crystallization processes has been developed to study various aspects of crystallization operations. The design framework contains a generic multidimensional modelling framework, a tool...... crystallization process to achieve the target crystal size distribution (CSD) in the presence of parametric uncertainties....

  12. An Efficient Micro Control Unit with a Reconfigurable Filter Design for Wireless Body Sensor Networks (WBSNs)

    Directory of Open Access Journals (Sweden)

    Chiung-An Chen

    2012-11-01

    Full Text Available In this paper, a low-cost, low-power and high-performance micro control unit (MCU) core is proposed for wireless body sensor networks (WBSNs). It consists of an asynchronous interface, a register bank, a reconfigurable filter, a slope-feature forecast, a lossless data encoder, an error correction coding (ECC) encoder, a UART interface, a power management (PWM) unit, and a multi-sensor controller. To improve system performance and expansion abilities, the asynchronous interface is added to handle signal exchanges between different clock domains. To eliminate the noise in various bio-signals, the reconfigurable filter provides average, binomial and sharpen filter functions. The slope-feature forecast and the lossless data encoder are proposed to reduce the volume of biomedical signal data for transmission. Furthermore, the ECC encoder is added to improve the reliability of the wireless transmission, and the UART interface makes the proposed design compatible with wireless devices. For long-term healthcare monitoring applications, a power management technique is developed to reduce the power consumption of the WBSN system. In addition, the proposed design can operate with four different bio-sensors simultaneously. The design was successfully tested on an FPGA verification board. The VLSI architecture of this work contains 7.67 K gate counts and consumes 5.8 mW at a 100 MHz processing rate in a TSMC 0.18 μm CMOS process, or 1.9 mW at 133 MHz in a 0.13 μm CMOS process. Compared with previous techniques, this design achieves higher performance, more functions, more flexibility and higher compatibility than other micro controller designs.
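    The three filter modes named in the abstract (average, binomial, sharpen) can be illustrated as selectable 3-tap convolution kernels. This is a hypothetical software sketch; the kernel coefficients and function names are illustrative assumptions, not the paper's hardware design, where reconfiguration would amount to swapping coefficient registers.

```python
# Illustrative 3-tap kernels; coefficients are not taken from the paper.
KERNELS = {
    "average":  [1 / 3, 1 / 3, 1 / 3],  # moving-average smoothing
    "binomial": [0.25, 0.5, 0.25],      # binomial (Gaussian-like) smoothing
    "sharpen":  [-0.5, 2.0, -0.5],      # mild high-boost sharpening
}

def reconfigurable_filter(signal, mode):
    """Apply the selected 3-tap kernel to a 1-D signal; edges are
    zero-padded so the output has the same length as the input."""
    k = KERNELS[mode]
    padded = [0.0] + list(signal) + [0.0]
    return [sum(k[j] * padded[i + j] for j in range(3))
            for i in range(len(signal))]
```

    For example, the binomial mode spreads an isolated sample into its neighbours: `reconfigurable_filter([0, 3, 0], "binomial")` yields `[0.75, 1.5, 0.75]`.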

  13. What are the Characteristics of Engineering Design Processes?

    DEFF Research Database (Denmark)

    Maier, Anja; Störrle, Harald

    2011-01-01

    This paper studies the characteristic properties of Engineering Design (ED) processes from a process modelling perspective. In a first step, we extracted nine characteristics of engineering design processes from the literature, and in a second step we validated the findings using results from our survey among academic and industrial ED process modelling experts. In a third step, we added a further nine characteristics from personal experiences in the Language Engineering domain to capture the pragmatic perspective. We arrive at a comprehensive set of 18 characteristics grouped into 6 challenges for process modelling in the engineering design domain. The challenges process modellers need to address when using and developing process modelling approaches and tools are: Development, Collaboration, Products & Services, Formality, Pragmatics, and Flexibility. We then compare the importance of the elicited …

  14. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2005-01-01

    In a group contribution method for pure component property prediction, a molecule is described as a set of groups linked together to form a molecular structure. In the same way, for flowsheet "property" prediction, a flowsheet can be described as a set of process-groups linked together to represent the flowsheet structure. Just as a functional group is a collection of atoms, a process-group is a collection of operations forming a "unit" operation or a set of "unit" operations; the links between the process-groups are the streams, similar to the bonds that attach atoms/groups. Each process-group provides a contribution to the "property" of the flowsheet, which can be performance in terms of energy consumption, thereby allowing a flowsheet "property" to be calculated once it is described by the groups. Another feature of this approach is that the process-group attachments automatically provide …
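    The analogy can be sketched directly: just as a molecular property is estimated by summing group contributions, a flowsheet "property" is the sum of the contributions of its process-groups. The group names and contribution values below are made-up placeholders, not data from the paper.

```python
# Hypothetical per-group contributions to a flowsheet "property"
# (e.g. a relative energy-consumption index); values are illustrative only.
ENERGY_CONTRIBUTION = {
    "reactor": 1.8,
    "distillation-column": 3.5,
    "flash-separator": 0.9,
    "heat-exchanger": -0.6,  # negative: heat-integration credit
}

def flowsheet_property(process_groups):
    """Flowsheet 'property' = sum of the contributions of its
    process-groups, in direct analogy with group-contribution
    property prediction for molecules."""
    return sum(ENERGY_CONTRIBUTION[g] for g in process_groups)
```

    A flowsheet built from a reactor, a distillation column and a heat exchanger would then score 1.8 + 3.5 - 0.6 = 4.7 on this illustrative index.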

  15. A Design Approach for Collaboration Processes: A Multi-Method Design Science Study in Collaboration Engineering

    NARCIS (Netherlands)

    Kolfschoten, G.L.; De Vreede, G.J.

    2009-01-01

    Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities …

  16. Design of experiments in Biomedical Signal Processing Course.

    Science.gov (United States)

    Li, Ling; Li, Bin

    2008-01-01

    Biomedical Signal Processing is one of the most important subjects in Biomedical Engineering. Its contents include the theory of digital signal processing, knowledge of different biomedical signals, physiology, and computer programming skills. Based on our past five years of teaching experience, we found that designing experiments to follow each algorithm was very important for helping students master the signal processing algorithms. In this paper we present the ideas and aims behind the design of these experiments. The results showed that our methods facilitated the study of abstract signal processing algorithms and made biomedical signals easier to understand.

  17. Designing a process for executing projects under an international agreement

    Science.gov (United States)

    Mohan, S. N.

    2003-01-01

    Projects executed under an international agreement require special arrangements in order to operate within confines of regulations issued by the State Department and the Commerce Department. In order to communicate enterprise-level guidance and procedural information uniformly to projects based on interpretations that carry the weight of institutional authority, a process was developed. This paper provides a script for designing processes in general, using this particular process for context. While the context is incidental, the method described is applicable to any process in general. The paper will expound on novel features utilized for dissemination of the procedural details over the Internet following such process design.

  18. The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality

    Science.gov (United States)

    2007-06-01

    Original title on 712 A/B: The United States Military Entrance Processing Command (USMEPCOM) uses Six Sigma process to develop and improve data quality. Briefing outline: USMEPCOM Overview/History; Purpose; Define: What is Important.

  19. Designing a Process for Tracking Business Model Change

    DEFF Research Database (Denmark)

    Groskovs, Sergejs

    The paper has adopted a design science research approach to design and verify with key stakeholders a fundamental management process of revising KPIs (key performance indicators), including those indicators that are related to business model change. The paper proposes a general guide for such processes … that may alter the business model of the firm. The decision-making process about which metrics to track affects what management’s attention is focused on during the year. The rather streamlined process outlined here is capable of facilitating swift responses to environmental changes in local markets … innovation, performance management and business model change that informed the design throughout the project.

  20. Risk Management and Loss Optimization at Design Process of Products

    Directory of Open Access Journals (Sweden)

    Katalin Németh-Erdődi

    2008-06-01

    Full Text Available We’d like to introduce a flexible system of design process elements to support the formation and tool selection of an efficient, "lean" product design process. To do this we identify numerical risk factors and introduce a calculating method for optimisation, taking into consideration: the effect of design steps on usage characteristics; the time needed by the design elements and the resultant losses; and the effect of design on the success of the implementation process. A generic model was developed for harmonising and sequencing market and technical activities, with a built-in acceptance phase. The steps of the model can be selected flexibly depending on design goals. The model regards the concurrent character of market, technical and organising activities, the critical speed of information flow between them, and the control, decision and confirmation points.
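    One way to read the optimisation idea above is as a weighted combination of the three listed considerations into a single loss figure per candidate design step. The weights, factor names and candidate data below are purely illustrative assumptions, not the authors' method.

```python
def design_step_loss(effect_on_usage, time_loss, implementation_risk,
                     weights=(0.5, 0.3, 0.2)):
    """Combine the three considerations into one loss value (lower is
    better). The weights are hypothetical and would be tuned per project."""
    w_u, w_t, w_i = weights
    return w_u * effect_on_usage + w_t * time_loss + w_i * implementation_risk

def pick_design_step(candidates):
    """Select the candidate design step with the smallest combined loss.
    Each candidate is a dict with a 'name' and a 'factors' triple."""
    return min(candidates, key=lambda c: design_step_loss(*c["factors"]))
```

    With candidate A scoring (0.2, 0.5, 0.1) and candidate B scoring (0.1, 0.9, 0.4), A wins with a combined loss of 0.27 versus 0.40.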

  1. Integrated Design Process in Problem-Based Learning

    DEFF Research Database (Denmark)

    Knudstrup, Mary-Ann

    2004-01-01

    This article reports and reflects on the learning achievements and the educational experiences in connection with the first years of the curriculum in Architecture at Aalborg University's Civil Engineer Education in Architecture & Design. In the article I will focus on the learning activity and the method that are developed during the semester when working with an Integrated Design Process combining architecture, design, functional aspects, energy consumption, indoor environment, technology, and construction. I will emphasize the importance of working with different tools in the design process, e.g. the computer as a tool for designing and optimising the building. I will also consider the dilemma of the Integrated Design Process in Problem Based Learning that emerges when the number of courses in the learning model, as is often the case, clashes with the demand for time and scope for reflection which …

  2. Analysis of the implementation of ergonomic design at the new units of an oil refinery.

    Science.gov (United States)

    Passero, Carolina Reich Marcon; Ogasawara, Erika Lye; Baú, Lucy Mara Silva; Buso, Sandro Artur; Bianchi, Marcos Cesar

    2012-01-01

    Ergonomic design is the adaptation of working conditions to human limitations and skills in the physical design phase of a new installation, a new working system, or new products or tools. Based on this concept, the purpose of this work was to analyze the implementation of ergonomic design at the new industrial units of an oil refinery, using the method of Ergonomic Workplace Assessment. This study was conducted by a multidisciplinary team composed of operation, maintenance and industrial safety technicians, ergonomists, designers and engineers. The analysis involved 6 production units, 1 industrial wastewater treatment unit, and 3 utilities units, all in the design detailing phase, for which 455 ergonomic requirements were identified. An analysis and characterization of the requirements identified for 5 of the production units, involving a total of 246 items, indicated that 62% were related to difficult access and blockage operations, while 15% were related to difficulties in the circulation of employees inside the units. Based on these data, it was found that the ergonomic requirements identified in the design detailing phase of an industrial unit involve physical ergonomics, and that it is very difficult to identify requirements related to organizational or cognitive ergonomics.

  3. Value-driven coordination process design using physical delivery models

    NARCIS (Netherlands)

    Wieringa, R.J.; Pijpers, V.; Bodenstaff, L.; Gordijn, J.

    2008-01-01

    Current e-business technology enables the execution of increasingly complex coordination processes that link IT services of different companies. Successful design of cross-organizational coordination processes requires the mutual alignment of the coordination process with a commercial business case.

  4. Design of Test Parts to Characterize Micro Additive Manufacturing Processes

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Mischkot, Michael

    2015-01-01

    The minimum feature size and obtainable tolerances of additive manufacturing processes are linked to the smallest volumetric elements (voxels) that can be created. This work presents the iterative design of a test part to investigate the resolution of AM processes with voxel sizes at the micro scale … manufacturing processes.

  5. Design & Implementation of Company Database for MME Subcontracting Unit

    CERN Document Server

    Horvath, Benedek

    2016-01-01

    The purpose of this document is to introduce the software stack designed and implemented by me, during my student project. The report includes both the project description, the requirements set against the solution, the already existing alternatives for solving the problem, and the final solution that has been implemented. Reading this document you may have a better understanding of what I was working on for eleven weeks in the summer of 2016.

  6. Electric Traction Machine Design for an E-RWD Unit

    OpenAIRE

    Marquez, Francisco

    2014-01-01

    Since the first generation of the Toyota Prius was introduced in December 1997, the number of Hybrid Electric Vehicles (HEVs) and pure Electric Vehicles (EVs) available in the market has increased substantially. The growing competition puts high demands on the electric system as well as the rest of the vehicle. As a consequence, substantial design effort is devoted to optimization both at system and component level, with respect to different parameters such as fuel efficiency, power density …

  7. Molten salt coal gasification process development unit. Phase 1. Volume 1. PDU operations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Kohl, A.L.

    1980-05-01

    This report summarizes the results of a test program conducted on the Molten Salt Coal Gasification Process, which included the design, construction, and operation of a Process Development Unit. In this process, coal is gasified by contacting it with air in a turbulent pool of molten sodium carbonate. Sulfur and ash are retained in the melt, and a small stream is continuously removed from the gasifier for regeneration of sodium carbonate, removal of sulfur, and disposal of the ash. The process can handle a wide variety of feed materials, including highly caking coals, and produces a gas relatively free from tars and other impurities. The gasification step is carried out at approximately 1800°F. The PDU was designed to process 1 ton per hour of coal at pressures up to 20 atm. It is a completely integrated facility including systems for feeding solids to the gasifier, regenerating sodium carbonate for reuse, and removing sulfur and ash in forms suitable for disposal. Five extended test runs were made. The observed product gas composition was quite close to that predicted on the basis of earlier small-scale tests and thermodynamic considerations. All plant systems were operated in an integrated manner during one of the runs. The principal problem encountered during the five test runs was maintaining a continuous flow of melt from the gasifier to the quench tank. Test data and discussions regarding plant equipment and process performance are presented. The program also included a commercial plant study which showed the process to be attractive for use in a combined-cycle, electric power plant. The report is presented in two volumes, Volume 1, PDU Operations, and Volume 2, Commercial Plant Study.

  8. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

    The design, development and reliability of a chemical product, and the process to manufacture it, need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial and error based experiments … The idea of a model-based framework is that in the design, development and/or manufacturing of a chemical product-process, the knowledge of the applied phenomena together with the product-process design details can be provided with diverse degrees of abstraction and detail. This would allow the experimental resources … However, are the needed models for such a framework available? Are modelling tools that can help to develop the needed models available? Can such a model-based framework provide the needed model-based work-flows matching the requirements of specific chemical product-process design problems? What types of models …

  9. Reported Design Processes for Accessibility in Rail Transport

    DEFF Research Database (Denmark)

    Herriott, Richard; Cook, Sharon

    2014-01-01

    Accessibility is a fundamental requirement in public transport (PT), yet there exists little research on design for accessibility or inclusive design (ID) in this area. This paper sets out to discover what methods are used in the rail sector to achieve accessibility goals and to examine how far these methods deviate from user-centred and ID norms. Semi-structured interviews were conducted with nine rolling stock producers, operators and design consultancies. The purpose was to determine whether ID design methods are used explicitly and the extent to which the processes used conformed to ID (if at all). The research found that the role of users in the design process of manufacturers was limited and that compliance with industry standards was the dominant means of achieving accessibility goals. Design consultancies were willing to apply more user-centred design if the client requested it. Where operators were …

  10. A computer-aided approach for achieving sustainable process design by process intensification

    DEFF Research Database (Denmark)

    Anantasarn, Nateetorn; Suriyapraphadilok, Uthaiporn; Babi, Deenesh Kavi

    2017-01-01

    Process intensification can be applied to achieve sustainable process design. In this paper, a systematic, 3-stage synthesis-intensification framework is applied to achieve more sustainable design. In stage 1, the synthesis stage, an objective function and design constraints are defined and a base case is synthesized. In stage 2, the design and analysis stage, the base case is analyzed using economic and environmental analyses to identify process hot-spots that are translated into design targets. In stage 3, the innovation design stage, phenomena-based process intensification is performed to generate flowsheet alternatives that satisfy the design targets, thereby minimizing and/or eliminating the process hot-spots. The application of the framework is highlighted through the production of para-xylene via toluene methylation, where more sustainable flowsheet alternatives that consist of hybrid …

  11. Design, construction, operation and evaluation of a prototype culm combustion boiler/heater unit. Final design of prototype unit

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    A final design of a prototype anthracite culm combustion boiler has been accomplished under Phase I of DOE Contract ET-78-C-01-3269. The prototype boiler has been designed to generate 20,000 pounds per hour of 150 psig saturated steam using low Btu (4000 Btu per pound) anthracite culm as a fuel. This boiler will be located at the industrial park of the Shamokin Area Industrial Corporation (SAIC). This program is directed at demonstrating the commercial viability of anthracite culm fueled FBC steam generation systems.

  12. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    Science.gov (United States)

    Liang, Yicheng; Peng, Hao

    2015-02-07

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
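    The notion of virtual DOI layers can be illustrated with a small sketch: each layer is weighted by the probability that an annihilation photon interacts within it, and a multi-ray trace averages the detection response over those layers. The attenuation coefficient, crystal length and function names below are illustrative assumptions, not the paper's parameters.

```python
import math

def layer_weights(n_layers, crystal_len=20.0, mu=0.087):
    """Normalized probability that a 511 keV photon interacts in each
    virtual DOI layer of a scintillator crystal, assuming exponential
    attenuation (mu in 1/mm, length in mm; values are illustrative)."""
    dz = crystal_len / n_layers
    w = [math.exp(-mu * i * dz) - math.exp(-mu * (i + 1) * dz)
         for i in range(n_layers)]
    s = sum(w)
    return [wi / s for wi in w]

def mean_interaction_depth(n_layers, crystal_len=20.0, mu=0.087):
    """Expected interaction depth (mm), approximated by weighting each
    layer's midpoint by its interaction probability, as a multi-ray
    average over virtual layers would."""
    dz = crystal_len / n_layers
    w = layer_weights(n_layers, crystal_len, mu)
    return sum(wi * (i + 0.5) * dz for i, wi in enumerate(w))
```

    Because attenuation front-loads the interaction probability, the expected depth falls well short of the crystal midpoint, which is why ignoring DOI degrades resolution for oblique lines-of-response.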

  13. Hybrid design tools for conceptual design and design engineering processes: bridging the design gap: towards an intuitive design tool

    NARCIS (Netherlands)

    Wendrich, Robert Eric

    2016-01-01

    Hybrid Design Tools; Representation; Computational Synthesis. Non-linear, non-explicit, non-standard thinking and ambiguity in design tools have a great impact on the enhancement of creativity during ideation and conceptualization. Tacit-tangible representation based on a mere idiosyncratic and individual …

  14. A Database Design for a Unit Status Reporting System.

    Science.gov (United States)

    1987-03-01

    Approved for public release; distribution unlimited. References cited include: Page-Jones, Meilir. The Practical Guide to Structured Systems Design. New York: Yourdon Press, 1980; Sprague, Ralph H. Jr., and Carlson, Eric D. Building Effective …

  15. New Design of Blade Untwisting Device of Cyclone Unit

    Directory of Open Access Journals (Sweden)

    D. I. Misiulia

    2010-01-01

    Full Text Available The paper presents a new design of a blade untwisting device in which the blades are the main element. The blade profile corresponds to a circular arc. The inlet angle of the blades is determined by the stream aerodynamics in the exhaust pipe, and the exit angle is determined by rectilinear gas motion. Optimum geometrical parameters of the untwisting device have been determined, and its application makes it possible to reduce the pressure drop in ЦН-15 cyclones by 28–30 %, while a screw-blade untwisting device recovers only 19–20 % of the energy.

  16. Developing advanced units of learning using IMS Learning Design level B

    NARCIS (Netherlands)

    Koper, Rob; Burgos, Daniel

    2005-01-01

    Please cite the original publication: Koper, R., Burgos, D. (2005). Developing advanced units of learning using IMS Learning Design level B. International Journal on Advanced Technology for Learning, 2 (4), 252-259.

  17. Caravaggio: A Design for an Interdisciplinary Content-Based EAP/ESP Unit.

    Science.gov (United States)

    Kirschner, Michal; Wexler, Carol

    2002-01-01

    Presents a detailed design for a content-based unit, the focus of which is the film "Caravaggio." The unit also includes readings in art history and film and is part of a specialized English for academic purposes/English for special purposes reading comprehension course for first-year students majoring in art history and in a…

  18. Fuel ethanol production: process design trends and integration opportunities.

    Science.gov (United States)

    Cardona, Carlos A; Sánchez, Oscar J

    2007-09-01

    Current fuel ethanol research and development deals with process engineering trends for improving biotechnological production of ethanol. In this work, the key role that process design plays during the development of cost-effective technologies is recognized through the analysis of major trends in process synthesis, modeling, simulation and optimization related to ethanol production. Main directions in techno-economical evaluation of fuel ethanol processes are described as well as some prospecting configurations. The most promising alternatives for compensating ethanol production costs by the generation of valuable co-products are analyzed. Opportunities for integration of fuel ethanol production processes and their implications are underlined. Main ways of process intensification through reaction-reaction, reaction-separation and separation-separation processes are analyzed in the case of bioethanol production. Some examples of energy integration during ethanol production are also highlighted. Finally, some concluding considerations on current and future research tendencies in fuel ethanol production regarding process design and integration are presented.

  19. Integrating rock mechanics issues with repository design through design process principles and methodology

    Energy Technology Data Exchange (ETDEWEB)

    Bieniawski, Z.T. [Pennsylvania State Univ., University Park, PA (United States)

    1996-04-01

    A good designer needs not only knowledge for designing (technical know-how that is used to generate alternative design solutions) but also knowledge about designing (appropriate principles and a systematic methodology to follow). Concepts such as "design for manufacture" or "concurrent engineering" are widely used in industry. In the field of rock engineering, only limited attention has been paid to the design process, because the design of structures in rock masses presents unique challenges as a result of the uncertainties inherent in characterizing geologic media. However, a stage has now been reached where we are able to sufficiently characterize rock masses for engineering purposes and identify the rock mechanics issues involved, but we still lack engineering design principles and a methodology to maximize design performance. This paper discusses the principles and methodology of the engineering design process directed at integrating site characterization activities with the design, construction and performance of an underground repository. Using the latest information from the Yucca Mountain Project on geology, rock mechanics and starter tunnel design, the current lack of integration is pointed out, and it is shown how rock mechanics issues can be effectively interwoven with repository design through a systematic design process methodology leading to improved repository performance. In essence, the design process is seen as the use of design principles within an integrating design methodology, leading to innovative problem solving. In particular, a new concept of "Design for Constructibility and Performance" is introduced. This is discussed with respect to ten rock mechanics issues identified for repository design and performance.

  20. Modelling the role of the design context in the design process: a domain-independent approach

    OpenAIRE

    Reymen, Isabelle; Kroes, P.; Basten, T; Durling, D.; Shackleton, J

    2002-01-01

    Domain-independent models of the design process are an important means of facilitating interdisciplinary communication and supporting multidisciplinary design. Many so-called domain-independent models are, however, not really domain independent. We state that, to be domain independent, models must abstract from domain-specific aspects, be based on the study of several design disciplines, and be useful for many design disciplines and for multidisciplinary design teams. This paper describes …