WorldWideScience

Sample records for high-performance work systems

  1. High Performance Work Systems

    NARCIS (Netherlands)

    J.P.P.E.F. Boselie (Paul); A. van der Wiele (Ton)

    2002-01-01

Research, summarized and classified in the work of Delery and Doty (1996), Guest (1997), Paauwe and Richardson (1997) and Boselie et al. (2001), suggests significant impact of Human Resources Management (HRM) on the competitive advantage of organizations. The mainstream research on this…

  2. High Performance Work Systems for Online Education

    Science.gov (United States)

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  3. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    Science.gov (United States)

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  4. THE RELATION OF HIGH-PERFORMANCE WORK SYSTEMS WITH EMPLOYEE INVOLVEMENT

    Directory of Open Access Journals (Sweden)

    Bilal AFSAR

    2010-01-01

The basic aim of high performance work systems is to enable employees to exercise decision making, leading to flexibility, innovation, improvement and skill sharing. By facilitating the development of high performance work systems, we help organizations make continuous improvement a way of life. The notion of a high-performance work system (HPWS) constitutes a claim that there exists a system of work practices for core workers in an organisation that leads in some way to superior performance. This article discusses the relation that HPWS has with the improvement of firms' performance and high involvement of the employees.

  5. High-Performance Work Systems and School Effectiveness: The Case of Malaysian Secondary Schools

    Science.gov (United States)

    Maroufkhani, Parisa; Nourani, Mohammad; Bin Boerhannoeddin, Ali

    2015-01-01

This study focuses on the impact of high-performance work systems on the outcomes of organizational effectiveness with the mediating roles of job satisfaction and organizational commitment. In light of the importance of human resource activities in achieving organizational effectiveness, we argue that employees' higher decision-making capabilities…

  6. Unlocking the Black Box: Exploring the Link between High-Performance Work Systems and Performance

    Science.gov (United States)

    Messersmith, Jake G.; Patel, Pankaj C.; Lepak, David P.

    2011-01-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level…

  7. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    Science.gov (United States)

    Muduli, Ashutosh

    2015-01-01

Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) climate in mediating the relationship between HPWS and organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  8. An Empirical Examination of the Mechanisms Mediating between High-Performance Work Systems and the Performance of Japanese Organizations

    Science.gov (United States)

    Takeuchi, Riki; Lepak, David P.; Wang, Heli; Takeuchi, Kazuo

    2007-01-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human…

  9. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

Vigil, M.B. [comp.]

    1995-03-01

This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  10. Engaging Employees: The Importance of High-Performance Work Systems for Patient Safety.

    Science.gov (United States)

    Etchegaray, Jason M; Thomas, Eric J

    2015-12-01

To develop and test survey items that measure high-performance work systems (HPWSs), report psychometric characteristics of the survey, and examine associations between HPWSs and teamwork culture, safety culture, and overall patient safety grade. We reviewed literature to determine dimensions of HPWSs and then asked executives to tell us which dimensions they viewed as most important for safety and quality. We then created a HPWSs survey to measure the most important HPWSs dimensions. We administered an anonymous, electronic survey to employees with direct patient care working at a large hospital system in the Southern United States and looked for linkages between HPWSs, culture, and outcomes. Similarities existed between the HPWS practices viewed as most important by previous researchers and health-care executives. The HPWSs survey was found to be reliable, distinct from safety culture and teamwork culture based on a confirmatory factor analysis, and the strongest predictor of the extent to which employees felt comfortable speaking up about patient safety problems as well as patient safety grade. We used information from a literature review and executive input to create a reliable and valid HPWSs survey. Future research needs to examine whether HPWSs are associated with additional safety and quality outcomes.
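
The reliability claim in the abstract above rests on standard psychometrics. As a rough illustration of the kind of internal-consistency check involved, the sketch below computes Cronbach's alpha for simulated survey items loading on a single factor; the data, item count, and sample size are invented for illustration and are not the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 300 respondents answering 6 items that all load on one latent factor
rng = np.random.default_rng(1)
n, k = 300, 6
latent = rng.normal(size=n)                      # the single underlying construct
items = latent[:, None] + rng.normal(scale=0.7, size=(n, k))

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # well above the conventional 0.7 threshold here
```

A real survey validation would pair a statistic like this with the confirmatory factor analysis the abstract mentions, to check both reliability and distinctness from neighboring constructs.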

  11. Achieving organisational competence for clinical leadership: the role of high performance work systems.

    Science.gov (United States)

    Leggat, Sandra G; Balding, Cathy

    2013-01-01

While there has been substantial discussion about the potential for clinical leadership in improving quality and safety in healthcare, there has been little robust study. The purpose of this paper is to present the results of a qualitative study with clinicians and clinician managers to gather opinions on the appropriate content of an educational initiative being planned to improve clinical leadership in quality and safety among medical, nursing and allied health professionals working in primary, community and secondary care. In total, 28 clinicians and clinician managers throughout the state of Victoria, Australia, participated in focus groups to provide advice on the development of a clinical leadership program in quality and safety. An inductive, thematic analysis was completed to enable the themes to emerge from the data. Overwhelmingly, the participants conceptualised clinical leadership in relation to organisational factors. Only four individual factors, comprising emotional intelligence, resilience, self-awareness and understanding of other clinical disciplines, were identified as being important for clinical leaders. Conversely, seven organisational factors, comprising role clarity and accountability, security and sustainability for clinical leaders, selective recruitment into clinical leadership positions, teamwork and decentralised decision making, training, information sharing, and transformational leadership, were seen as essential, but the participants indicated they were rarely addressed. The human resource management literature includes these seven components, along with contingent reward, reduced status distinctions and measurement of management practices, as the essential organisational underpinnings of high performance work systems. The results of this study propose that clinical leadership is an organisational property, suggesting that capability frameworks and educational programs for clinical leadership need a broader organisation focus. The paper…

  12. Knowledge Work Supervision: Transforming School Systems into High Performing Learning Organizations.

    Science.gov (United States)

    Duffy, Francis M.

    1997-01-01

    This article describes a new supervision model conceived to help a school system redesign its anatomy (structures), physiology (flow of information and webs of relationships), and psychology (beliefs and values). The new paradigm (Knowledge Work Supervision) was constructed by reviewing the practices of several interrelated areas: sociotechnical…

  13. High-performance work systems in health care, part 3: the role of the business case.

    Science.gov (United States)

    Song, Paula H; Robbins, Julie; Garman, Andrew N; McAlearney, Ann Scheck

    2012-01-01

Growing evidence suggests the systematic use of high-performance work practices (HPWPs), or evidence-based management practices, holds promise to improve organizational performance, including improved quality and efficiency, in health care organizations. However, little is understood about the investment required for HPWP implementation, nor the business case for HPWP investment. The aim of this study is to enhance our understanding of organizations' perspectives on the business case for HPWP investment, including reasons for and approaches to evaluating that investment. We used a multicase study approach to explore the business case for HPWPs in U.S. health care organizations. We conducted semistructured interviews with 67 key informants across five sites. All interviews were recorded, transcribed, and subjected to qualitative analysis using both deductive and inductive methods. The organizations in our study did not appear to have explicit financial return expectations for investments in HPWPs. Instead, the HPWP investment was viewed as an important factor contributing to successful execution of the organization's strategic priorities and a means for competitive differentiation in the market. Informants' characterizations of the HPWP investment did not involve financial terms; rather, descriptions of these investments as redeployment of existing resources or a shift of managerial time redirected attention from cost considerations. Evaluation efforts were rare, with organizations using broad organizational metrics to justify HPWP investment or avoiding formal evaluation altogether. Our findings are consistent with prior studies that have found that health care organizations have not systematically evaluated the financial outcomes of their quality-related initiatives or tend to forgo formal business case analysis for investments they may perceive as "inevitable." In the absence of a clearly described association between HPWPs and outcomes or some other external…

  14. Production Ergonomics: Identifying and managing risk in the design of high performance work systems

    OpenAIRE

    Neumann, Patrick

    2004-01-01

Poor ergonomics in production systems can compromise performance and cause musculoskeletal disorders (MSDs), which pose a huge cost to society, companies, and afflicted individuals. This thesis presents a research trajectory through the problem space by: 1) Identifying and quantifying workplace risk factors for MSDs, 2) Identifying how these risks may relate to production strategies, and 3) Developing an approach to integrating ergonomics into a company's regular development work. A v...

  15. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  16. How high-performance work systems drive health care value: an examination of leading process improvement strategies.

    Science.gov (United States)

    Robbins, Julie; Garman, Andrew N; Song, Paula H; McAlearney, Ann Scheck

    2012-01-01

    As hospitals focus on increasing health care value, process improvement strategies have proliferated, seemingly faster than the evidence base supporting them. Yet, most process improvement strategies are associated with work practices for which solid evidence does exist. Evaluating improvement strategies in the context of evidence-based work practices can provide guidance about which strategies would work best for a given health care organization. We combined a literature review with analysis of key informant interview data collected from 5 case studies of high-performance work practices (HPWPs) in health care organizations. We explored the link between an evidence-based framework for HPWP use and 3 process improvement strategies: Hardwiring Excellence, Lean/Six Sigma, and Baldrige. We found that each of these process improvement strategies has not only strengths but also important gaps with respect to incorporating HPWPs involving engaging staff, aligning leaders, acquiring and developing talent, and empowering the front line. Given differences among these strategies, our analyses suggest that some may work better than others for individual health care organizations, depending on the organizations' current management systems. In practice, most organizations implementing improvement strategies would benefit from including evidence-based HPWPs to maximize the potential for process improvement strategies to increase value in health care.

  17. Do perceived high performance work systems influence the relationship between emotional labour, burnout and intention to leave? A study of Australian nurses.

    Science.gov (United States)

    Bartram, Timothy; Casimir, Gian; Djurkovic, Nick; Leggat, Sandra G; Stanton, Pauline

    2012-07-01

The purpose of this article was to explore the relationships between perceived high performance work systems, emotional labour, burnout and intention to leave among nurses in Australia. Previous studies show that emotional labour and burnout are associated with an increase in nurses' intention to leave. There is evidence that high performance work systems are associated with a decrease in turnover. There are no previous studies that examine the relationship between high performance work systems and emotional labour. A cross-sectional, correlational survey. The study was conducted in Australia in 2008 with 183 nurses. Three hypotheses were tested with validated measures of emotional labour, burnout, intention to leave, and perceived high performance work systems. Principal component analysis was used to examine the structure of the measures. The mediation hypothesis was tested using Baron and Kenny's procedure and the moderation hypothesis was tested using hierarchical regression and the product term. Emotional labour is positively associated with both burnout and intention to leave. Burnout mediates the relationship between emotional labour and intention to leave. Perceived high performance work systems negatively moderate the relationship between emotional labour and burnout. Perceived high performance work systems not only reduce the strength of the negative effect of emotional labour on burnout but also have a unique negative effect on intention to leave. Ensuring effective human resource management practice through the implementation of high performance work systems may reduce the burnout associated with emotional labour. This may assist healthcare organizations to reduce nurse turnover.
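
The mediation and moderation tests named in the abstract above (Baron and Kenny's procedure; hierarchical regression with a product term) follow a standard recipe. The sketch below runs those steps with plain least squares on simulated data; the effect sizes, noise levels, and variable values are invented for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 183  # matches the study's sample size; the data themselves are simulated

def ols_beta(X, y):
    """OLS coefficients, with an intercept column prepended to X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Simulated constructs (illustrative only): burnout partially mediates
# the effect of emotional labour on intention to leave
emotional_labour = rng.normal(size=n)
burnout = 0.6 * emotional_labour + rng.normal(scale=0.8, size=n)
intention_to_leave = (0.5 * burnout + 0.2 * emotional_labour
                      + rng.normal(scale=0.8, size=n))

# Baron & Kenny's mediation steps:
c = ols_beta(emotional_labour, intention_to_leave)[1]  # 1) total effect of X on Y
a = ols_beta(emotional_labour, burnout)[1]             # 2) X predicts the mediator M
b, c_prime = ols_beta(
    np.column_stack([burnout, emotional_labour]), intention_to_leave
)[1:3]                                                 # 3) M predicts Y controlling for X
print(f"total effect c = {c:.2f}, direct effect c' = {c_prime:.2f}")  # c' < c: mediation

# Moderation: hierarchical regression adds a product term
hpws = rng.normal(size=n)
interaction = ols_beta(
    np.column_stack([emotional_labour, hpws, emotional_labour * hpws]), burnout
)[3]
# Near zero here, since no interaction was built into the simulated data
print(f"interaction coefficient = {interaction:.2f}")
```

In the actual study each step would of course be accompanied by validated multi-item scales and significance tests; this only shows the mechanical structure of the two procedures.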

  18. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from …, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices on innovation and performance…

  19. High-performance work systems in health care management, part 2: qualitative evidence from five case studies.

    Science.gov (United States)

    McAlearney, Ann Scheck; Garman, Andrew N; Song, Paula H; McHugh, Megan; Robbins, Julie; Harrison, Michael I

    2011-01-01

A capable workforce is central to the delivery of high-quality care. Research from other industries suggests that the methodical use of evidence-based management practices (also known as high-performance work practices [HPWPs]), such as systematic personnel selection and incentive compensation, serves to attract and retain well-qualified health care staff and that HPWPs may represent an important and underutilized strategy for improving quality of care and patient safety. The aims of this study were to improve our understanding about the use of HPWPs in health care organizations and to learn about their contribution to quality of care and patient safety improvements. Guided by a model of HPWPs developed through an extensive literature review and synthesis, we conducted a series of interviews with key informants from five U.S. health care organizations that had been identified based on their exemplary use of HPWPs. We sought to explore the applicability of our model and learn whether and how HPWPs were related to quality and safety. All interviews were recorded, transcribed, and subjected to qualitative analysis. In each of the five organizations, we found emphasis on all four HPWP subsystems in our conceptual model: engagement, staff acquisition/development, frontline empowerment, and leadership alignment/development. Although some HPWPs were common, there were also practices that were distinctive to a single organization. Our informants reported links between HPWPs and employee outcomes (e.g., turnover and higher satisfaction/engagement) and indicated that HPWPs made important contributions to system- and organization-level outcomes (e.g., improved recruitment, improved ability to address safety concerns, and lower turnover). These case studies suggest that the systematic use of HPWPs may improve performance in health care organizations and provide examples of how HPWPs can impact quality and safety in health care. Further research is needed to specify…

  20. Performance, Performance System, and High Performance System

    Science.gov (United States)

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  1. High-performance work systems in health care management, part 1: development of an evidence-informed model.

    Science.gov (United States)

    Garman, Andrew N; McAlearney, Ann Scheck; Harrison, Michael I; Song, Paula H; McHugh, Megan

    2011-01-01

Although management practices are recognized as important factors in improving health care quality and efficiency, most research thus far has focused on individual practices, ignoring or underspecifying the contexts within which these practices are operating. Research from other industries, which has increasingly focused on systems rather than individual practices, has yielded results that may benefit health services management. Our goal was to develop a conceptual model, on the basis of prior research from health care as well as other industries, that could be used to inform important contextual considerations within health care. Using theoretical frameworks from A. Donabedian (1966), P. M. Wright, T. M. Gardner, and L. M. Moynihan (2003), and B. Schneider, D. B. Smith, and H. W. Goldstein (2000) and review methods adapted from R. Pawson (2006b), we reviewed relevant research from peer-reviewed and other industry-relevant sources to inform our model. The model we developed was then reviewed with a panel of practitioners, including experts in quality and human resource management, to assess the applicability of the model to health care settings. The resulting conceptual model identified four practice bundles, comprising 14 management practices, as well as nine factors influencing adoption and perceived sustainability of these practices. The mechanisms by which these practices influence care outcomes are illustrated using the example of hospital-acquired infections. In addition, limitations of the current evidence base are discussed, and an agenda for future research in health care settings is outlined. Results may help practitioners better conceptualize management practices as part of a broader system of work practices. This may, in turn, help practitioners to prioritize management improvement efforts more systematically.

  2. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  3. High Performance Work Practices In Indian Organizations- An Understanding

    OpenAIRE

    Awasthi, Shakti

    2013-01-01

In today's globally competitive era, every business aims to optimize its processes. High performance work practices are one such practice that can lead to the optimal utilization of human resources. In the present article I have tried to bring to light different aspects of high performing work practices in the organizational setup, whose implementation can make a difference in the organization. The high performance work practices not only can bring the change in human resource...

  4. Toward High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design in industrial refrigeration systems.

  5. Towards High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design in industrial refrigeration systems.

  6. Toward a high-performance management system in health care, part 4: Using high-performance work practices to prevent central line-associated blood stream infections-a comparative case study.

    Science.gov (United States)

    McAlearney, Ann Scheck; Hefner, Jennifer; Robbins, Julie; Garman, Andrew N

    2016-01-01

Central line-associated bloodstream infections (CLABSIs) are among the most harmful health care-associated infections and a major patient safety concern. Nationally, CLABSI rates have been reduced through the implementation of evidence-based interventions; thus far, however, hospitals still differ substantially in their success implementing these practices. Prior research on high-performance work practices (HPWPs) suggests that these practices may explain some of the differences health systems experience in the success of their quality improvement efforts; however, these relationships have not yet been systematically investigated. In this study, we sought to explore the potential role HPWPs may play in explaining differences in the success of CLABSI reduction efforts involving otherwise similar organizations and approaches. To form our sample, we identified eight hospitals participating in the federally funded "On the CUSP: Stop BSI" initiative. This sample included four hospital "pairs" matched on organizational characteristics (e.g., state, size, teaching status) but having reported contrasting CLABSI reduction outcomes. We collected data through site visits as well as 194 key informant interviews, which were framed using an evidence-informed model of health care HPWPs. We found evidence that, at higher performing sites, HPWPs facilitated the adoption and consistent application of practices known to prevent CLABSIs; these HPWPs were virtually absent at lower performing sites. We present examples of management practices and illustrative quotes categorized into four HPWP subsystems: (a) staff engagement, (b) staff acquisition/development, (c) frontline empowerment, and (d) leadership alignment/development. We present the HPWP model as an organizing framework that can be applied to facilitate quality and patient safety efforts in health care. Managers and senior leaders can use these four HPWP subsystems to select, prioritize, and communicate about management…

  7. Myth Busting: Do High-Performance Students Prefer Working Alone?

    Science.gov (United States)

    Walker, Cheryl L.; Shore, Bruce M.

    2015-01-01

    There has been a longstanding assumption that gifted, high-ability, or high-performing students prefer working alone; however, this may not be true in every case. The current study expanded on this assumption to reveal more nuanced learning preferences of these students. Sixty-nine high-performing and community-school students in Grades 5 and 6…

  8. Developing collective customer knowledge and service climate: The interaction between service-oriented high-performance work systems and service leadership.

    Science.gov (United States)

    Jiang, Kaifeng; Chuang, Chih-Hsun; Chiao, Yu-Ching

    2015-07-01

This study theorized and examined the influence of the interaction between Service-Oriented high-performance work systems (HPWSs) and service leadership on collective customer knowledge and service climate. Using a sample of 569 employees and 142 managers in footwear retail stores, we found that Service-Oriented HPWSs and service leadership reduced the influences of one another on collective customer knowledge and service climate: the positive influence of service leadership on collective customer knowledge and service climate was stronger when Service-Oriented HPWSs were lower than when they were higher, and the positive influence of Service-Oriented HPWSs on collective customer knowledge and service climate was stronger when service leadership was lower than when it was higher. We further proposed and found that collective customer knowledge and service climate were positively related to objective financial outcomes through service performance. Implications for the literature and managerial practices are discussed.

  9. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general, and vector-based systems and heterogeneous architectures in particular. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  10. Performance tuning for high performance computing systems

    OpenAIRE

    Pahuja, Himanshu

    2017-01-01

A distributed system is composed of loosely coupled software components integrated with underlying hardware resources that can be distributed over the standard internet framework. High Performance Computing used to involve supercomputers that could churn through massively complex computational tasks, but it is now evolving across distributed systems, thereby gaining the ability to utilize geographically distributed computing resources. We...

  11. Hybrid ventilation systems and high performance buildings

    Energy Technology Data Exchange (ETDEWEB)

    Utzinger, D.M. [Wisconsin Univ., Milwaukee, WI (United States). School of Architecture and Urban Planning

    2009-07-01

This paper described hybrid ventilation design strategies and their impact on 3 high performance buildings located in southern Wisconsin. The hybrid ventilation systems combined occupant-controlled natural ventilation with mechanical ventilation systems. Natural ventilation was shown to provide adequate ventilation when appropriately designed. Proper control integration of natural ventilation into hybrid systems was shown to reduce energy consumption in high performance buildings. This paper also described the lessons learned from the 3 buildings. The author served as energy consultant on all three projects and had the responsibility of designing and integrating the natural ventilation systems into the HVAC control strategy. A post occupancy evaluation of building energy performance has provided learning material for architecture students. The 3 buildings included the Schlitz Audubon Nature Center completed in 2003; the Urban Ecology Center completed in 2004; and the Aldo Leopold Legacy Center completed in 2007. This paper included the size, measured energy utilization intensity and percentage of energy supplied by renewable solar power and bio-fuels on site for each building. 6 refs., 2 tabs., 6 figs.

  12. A meta-analysis of country differences in the high-performance work system-business performance relationship: the roles of national culture and managerial discretion.

    Science.gov (United States)

    Rabl, Tanja; Jayasinghe, Mevan; Gerhart, Barry; Kühlmann, Torsten M

    2014-11-01

    Our article develops a conceptual framework based primarily on national culture perspectives but also incorporating the role of managerial discretion (cultural tightness-looseness, institutional flexibility), which is aimed at achieving a better understanding of how the effectiveness of high-performance work systems (HPWSs) may vary across countries. Based on a meta-analysis of 156 HPWS-business performance effect sizes from 35,767 firms and establishments in 29 countries, we found that the mean HPWS-business performance effect size was positive overall (corrected r = .28) and positive in each country, regardless of its national culture or degree of institutional flexibility. In the case of national culture, the HPWS-business performance relationship was, on average, actually more strongly positive in countries where the degree of a priori hypothesized consistency or fit between an HPWS and national culture (according to national culture perspectives) was lower, except in the case of tight national cultures, where greater a priori fit of an HPWS with national culture was associated with a more positive HPWS-business performance effect size. However, in loose cultures (and in cultures that were neither tight nor loose), less a priori hypothesized consistency between an HPWS and national culture was associated with higher HPWS effectiveness. As such, our findings suggest the importance of not only national culture but also managerial discretion in understanding the HPWS-business performance relationship. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
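The corrected mean effect size reported above (r = .28 across 156 effect sizes) comes from standard meta-analytic pooling, in which each study's correlation is weighted by its sample size. A minimal sketch of that weighting, using illustrative numbers rather than the study's actual data:

```python
def weighted_mean_r(effect_sizes, sample_sizes):
    """Sample-size-weighted mean correlation, the core of a
    bare-bones meta-analytic average of effect sizes."""
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(effect_sizes, sample_sizes)) / total_n

# Hypothetical per-country effect sizes and firm counts (not the paper's data)
rs = [0.25, 0.30, 0.35]
ns = [1000, 2000, 1000]
mean_r = weighted_mean_r(rs, ns)  # (0.25*1000 + 0.30*2000 + 0.35*1000) / 4000 = 0.30
```

Larger samples pull the pooled estimate toward their effect size, which is why firm counts matter as much as the raw correlations.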

  13. High-Performance Energy Applications and Systems

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  14. Understanding the Work and Learning of High Performance Coaches

    Science.gov (United States)

    Rynne, Steven B.; Mallett, Cliff J.

    2012-01-01

    Background: The development of high performance sports coaches has been proposed as a major imperative in the professionalization of sports coaching. Accordingly, an increasing body of research is beginning to address the question of how coaches learn. While this is important work, an understanding of how coaches learn must be underpinned by an…

  15. High Performance Commercial Fenestration Framing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the thermal performance of aluminum framing systems, in turn improving the energy performance of commercial fenestration systems, reducing the energy consumption of commercial buildings, and achieving zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  16. High-performance commercial building systems

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  17. Women's participation in high performance work practices: a comparative analysis of Portugal and Spain

    OpenAIRE

    Ferreira, Pedro; Porto, Nelida; Portela, Marta

    2010-01-01

    High-performance work systems (HPWS) can be seen as a set of new forms of work organization combined with flexible human resources (HR) practices that enhance organizational performance through employee involvement and empowerment. Although in the past two decades much research has been conducted on the effects that high-performance work practices can have on organizations, there is still much to know about the ideal conditions for the adoption of such practices. According to s...

  18. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  19. Decal electronics for printed high performance cmos electronic systems

    KAUST Repository

    Hussain, Muhammad Mustafa

    2017-11-23

    High performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky making them unusable for flexible electronic applications. While there exist bulk material reduction methods to flex them, such thinned CMOS electronics are fragile and vulnerable to handling for high throughput manufacturing. Here, we show a fusion of a CMOS technology compatible fabrication process for flexible CMOS electronics, with inkjet and conductive cellulose based interconnects, followed by additive manufacturing (i.e. 3D printing based packaging) and finally roll-to-roll printing of packaged decal electronics (thin film transistors based circuit components and sensors) focusing on printed high performance flexible electronic systems. This work provides the most pragmatic route for packaged flexible electronic systems for wide ranging applications.

  20. Reliable High Performance Processing System (RHPPS) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's exploration, science, and space operations systems are critically dependent on the hardware technologies used in their implementation. Specifically, the...

  1. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
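At the heart of any system dynamics simulation like the framework described above is numerical integration of stock-and-flow equations. A minimal single-stock sketch (illustrative only, not NREL's actual framework):

```python
def simulate_stock(initial, inflow, outflow_rate, dt, steps):
    """Euler integration of one stock with a constant inflow and an
    outflow proportional to the stock level:
        d(stock)/dt = inflow - outflow_rate * stock
    Returns the stock trajectory, including the initial value."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += dt * (inflow - outflow_rate * stock)
        history.append(stock)
    return history

# The stock approaches its equilibrium, inflow / outflow_rate = 100
trajectory = simulate_stock(initial=0.0, inflow=10.0, outflow_rate=0.1,
                            dt=1.0, steps=100)
```

Parallel SD simulation, as in the framework, typically runs many such integrations (e.g., parameter sweeps) concurrently, which is an embarrassingly parallel workload.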

  2. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  3. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    We present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The design uses the flexibility of Field Programmable Gate Arrays (FPGAs) and the powerful Associative Memory Chip (ASIC) to achieve real-time performance. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain.

  4. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturised version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory (AM) chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering...

  5. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    Science.gov (United States)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used in: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.
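Extracting I/O characteristics of the kind described above starts with measuring basic throughput numbers that a runtime library can use to guide scheduling. A simplified sequential-write benchmark, purely illustrative and unrelated to the Fortran D implementation:

```python
import os
import tempfile
import time

def measure_write_throughput(size_mb=16, block_kb=256):
    """Measure sequential write throughput in MB/s by writing fixed-size
    blocks to a temporary file and forcing the data to disk."""
    block = b"\0" * (block_kb * 1024)
    nblocks = size_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(nblocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # include the cost of reaching stable storage
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.remove(path)
```

Sweeping `block_kb` over a range of sizes reveals the block size at which throughput saturates, one of the parameters such a library would want to know per platform.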

  6. A Case of Innovative Integration of High-Performance Work Teams.

    Science.gov (United States)

    Thompson, Faye; Baughan, Donna; Motwani, Jaideep

    1998-01-01

    A case study of a Fortune 500 company was used to develop an integrated model of high-performance work organizations. Components are systems thinking, team interaction, team principles, and results. The model requires an ongoing training plan, change agents or champions, and recognition of teams' productive potential and fragile nature. (SK)

  7. High performance embedded system for real-time pattern matching

    Energy Technology Data Exchange (ETDEWEB)

    Sotiropoulou, C.-L., E-mail: c.sotiropoulou@cern.ch [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Luciano, P. [University of Cassino and Southern Lazio, Gaetano di Biasio 43, Cassino 03043 (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Gkaitatzis, S. [Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Citraro, S. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Giannetti, P. [INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dell' Orso, M. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy)

    2017-02-11

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.
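An associative memory chip like the one described compares an input against all stored reference patterns in parallel. Functionally (though nowhere near in performance) it behaves like this dictionary-based software sketch, which is an illustration rather than the AM chip's actual logic:

```python
def build_pattern_bank(patterns):
    """Store reference patterns (tuples of coarse feature values) for
    constant-time exact lookup, emulating in software the parallel
    compare of an associative memory."""
    return set(map(tuple, patterns))

def match_hits(bank, candidates):
    """Return the candidate patterns that match a stored reference pattern."""
    return [c for c in candidates if tuple(c) in bank]

# Hypothetical 3-element patterns selected by a training step
bank = build_pattern_bank([(1, 4, 2), (3, 3, 3), (0, 1, 2)])
hits = match_hits(bank, [(1, 4, 2), (9, 9, 9), (0, 1, 2)])  # -> [(1, 4, 2), (0, 1, 2)]
```

The data-reduction effect comes from discarding non-matching candidates early, so downstream processing (e.g., the FPGA post-processing mentioned above) only sees patterns of interest.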

  8. High performance thermal insulation systems - HLWD; Hochleistungs-Waermedaemmung HLWD

    Energy Technology Data Exchange (ETDEWEB)

    Eicher, H.; Erb, M. [Eicher und Pauli AG, Liestal (Switzerland); Binz, A.; Moosmann, A. [Fachhochschule beider Basel, Institut fuer Energie, Muttenz (Switzerland)

    2000-12-15

    This final report for the Swiss Federal Office of Energy (SFOE) by the research program concerning the efficient use of energy in buildings takes a look at high-performance thermal insulation systems (HLWD). Work done on three applications - internal insulation used in the refurbishment of buildings, insulation of hot-water storage tanks and outside doors - is reported on. Economic feasibility is discussed and a number of demonstration projects are reported on. Apart from the above mentioned, the insulation of a terrace, the insulation of a roller-blind housing and the insulation of a deep-freeze cubicle are reviewed. The construction of vacuum insulation panels (VIP) and their manufacture are looked at. Economic aspects are looked at and the use of VIP in practice is discussed.

  9. Coal-fired high performance power generating system. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design, and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV) compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative type gas turbine, efficiencies of over 49% could be realized in advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.
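The efficiency gain quoted above (47.4% HHV for the HIPPS versus roughly 35% for conventional coal plants) translates directly into fuel savings. A back-of-the-envelope check, using a hypothetical 250 MW electrical output:

```python
def fuel_input_mw(electric_output_mw, efficiency):
    """Thermal (fuel) input required for a given electrical output."""
    return electric_output_mw / efficiency

conventional = fuel_input_mw(250, 0.35)      # ~714 MW thermal
hipps = fuel_input_mw(250, 0.474)            # ~527 MW thermal
savings_fraction = 1 - hipps / conventional  # ~0.26, i.e. about 26% less coal
```

Because emissions scale roughly with fuel burned, the same arithmetic is why the report treats high efficiency as the key to minimizing coal's environmental impact.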

  10. High performance embedded system for real-time pattern matching

    Science.gov (United States)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device.

  11. Addressing Microaggressions to Facilitate High-Performing Work Climates

    Science.gov (United States)

    Brown Clarke, J.

    2016-12-01

    Microaggressions can be described as verbal, behavioral, or environmental insults, whether intentional or unintentional, that communicate hostile, derogatory, or negative messages toward individuals based on one's race, ethnicity, gender, sexuality, intersectionality, thisABILITIES, language, socioeconomic and/or citizenship status. This interactive workshop will engage participants in examining and identifying microaggressions, then working collaboratively to develop strategies and tools to confront and remove them from the environment. At the end of this session, participants will be more aware of their own personal biases and stereotypes and the influence they can have on the organizational climate: learn how to detect microaggressions; learn how to react to microaggressions; and learn how to sustain a microaggression-free environment.

  12. Coal-fired high performance power generating system

    Energy Technology Data Exchange (ETDEWEB)

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: > 47% thermal efficiency; NOx, SOx, and particulates < 25% of NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high nitrogen coals a rapid mixing, rich-lean, deep staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  13. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  14. Do "High-Performance" Work Practices Improve Establishment-Level Outcomes?

    Science.gov (United States)

    Cappelli, Peter; Neumark, David

    2001-01-01

    Links between organizational performance and high-performance work practices were studied using data from the National Employment Survey, measures of work practices comparable across organizations, and a longitudinal design incorporating data predating use of high-performance practices. Practices raise employee compensation without necessarily…

  15. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval.
    The optical disks are used as archive
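A hierarchical storage structure of the kind described keeps commonly used files on the fast tier and migrates cold files to the archive tier. A minimal LRU-style sketch of that policy (illustrative only, not Loral's implementation):

```python
from collections import OrderedDict

class TieredStore:
    """Keep the most recently used files on the fast tier (magnetic disk);
    evict least-recently-used files to the archive tier (optical/tape)."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()  # name -> data, in LRU order
        self.archive = {}

    def put(self, name, data):
        self.fast[name] = data
        self.fast.move_to_end(name)  # mark as most recently used
        while len(self.fast) > self.fast_capacity:
            old_name, old_data = self.fast.popitem(last=False)
            self.archive[old_name] = old_data  # demote coldest file

    def get(self, name):
        if name in self.fast:  # fast-tier hit
            self.fast.move_to_end(name)
            return self.fast[name]
        data = self.archive.pop(name)  # recall from archive
        self.put(name, data)           # promote back to the fast tier
        return data

store = TieredStore(fast_capacity=2)
store.put("a", b"1"); store.put("b", b"2"); store.put("c", b"3")
# "a" was least recently used, so it migrated to the archive tier
```

Reads from the archive tier transparently promote the file back, which is exactly the cost model that makes a hierarchy cheaper than keeping everything on fast media.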

  16. Alternative High Performance Polymers for Ablative Thermal Protection Systems

    Science.gov (United States)

    Boghozian, Tane; Stackpoole, Mairead; Gonzales, Greg

    2015-01-01

    Ablative thermal protection systems are commonly used as protection from the intense heat during re-entry of a space vehicle and have been used successfully on many missions, including Stardust and Mars Science Laboratory, both of which used PICA, a phenolic-based ablator. Historically, phenolic resin has served as the ablative polymer for many TPS systems. However, it has limitations in both processing and properties such as char yield, glass transition temperature and char stability. Therefore, alternative high performance polymers are being considered, including cyanate ester resin, polyimide, and polybenzoxazine. Thermal and mechanical properties of these resin systems were characterized and compared with phenolic resin.

  17. Probabilistic performance-based design for high performance control systems

    Science.gov (United States)

    Micheli, Laura; Cao, Liang; Gong, Yongqiang; Cancelli, Alessandro; Laflamme, Simon; Alipour, Alice

    2017-04-01

    High performance control systems (HPCS) are advanced damping systems capable of high damping performance over a wide frequency bandwidth, ideal for mitigation of multi-hazards. They include active, semi-active, and hybrid damping systems. However, HPCS are more expensive than typical passive mitigation systems, rely on power and hardware (e.g., sensors, actuators) to operate, and require maintenance. In this paper, a life cycle cost analysis (LCA) approach is proposed to estimate the economic benefit of these systems over the entire life of the structure. The novelty resides in the integration of life cycle cost analysis into performance-based design (PBD) tailored to multi-level wind hazards. This yields a probabilistic performance-based design approach for HPCS. Numerical simulations are conducted on a building located in Boston, MA. LCAs are conducted for passive control systems and HPCS, and the concept of controller robustness is demonstrated. Results highlight the promise of the proposed performance-based design procedure.
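The expected-cost comparison at the core of such a life cycle cost analysis can be sketched roughly as follows. All dollar figures, hazard probabilities, and the simple additive cost model are hypothetical placeholders, not values from the study.

```python
# Illustrative life-cycle cost (LCC) comparison of a passive damping system
# vs. an HPCS, in the spirit of the probabilistic PBD approach described
# above. Every number below is a made-up placeholder.

def life_cycle_cost(initial_cost, annual_upkeep, hazard_levels, years=50):
    """LCC = initial cost + upkeep + expected annual hazard losses over the life."""
    expected_annual_loss = sum(p * loss for p, loss in hazard_levels)
    return initial_cost + years * (annual_upkeep + expected_annual_loss)

# (annual exceedance probability, repair cost in $) per wind hazard level
passive_hazards = [(0.10, 50_000), (0.02, 400_000), (0.002, 3_000_000)]
hpcs_hazards    = [(0.10, 10_000), (0.02,  80_000), (0.002,   600_000)]

lcc_passive = life_cycle_cost(200_000, 1_000, passive_hazards)
lcc_hpcs    = life_cycle_cost(500_000, 5_000, hpcs_hazards)  # costlier upfront
print(f"passive: ${lcc_passive:,.0f}   HPCS: ${lcc_hpcs:,.0f}")
```

Despite the higher initial and maintenance costs, the HPCS comes out ahead in this toy scenario because its damping reduces the expected hazard losses accumulated over the structure's life.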

  18. The architecture of the High Performance Storage System (HPSS)

    Energy Technology Data Exchange (ETDEWEB)

    Teaff, D.; Coyne, B. [IBM Federal, Houston, TX (United States); Watson, D. [Lawrence Livermore National Lab., CA (United States)

    1995-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage systems by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  19. Do Danes enjoy a high performing chronic care system?

    DEFF Research Database (Denmark)

    Juul, Annegrete; Olejaz, Maria; Rudkjøbing, Andreas

    2012-01-01

    The trends in population health in Denmark are similar to those in most Western European countries. Major health issues include, among others, the high prevalence of chronic illnesses and lifestyle-related risk factors such as obesity, tobacco, physical inactivity and alcohol. This has pressed… the health system towards a model of provision of care based on the management of chronic care conditions. While the Chronic Care Model was introduced in 2005, the Danish health system does not fulfil the ten key preconditions that would characterise a high-performing chronic care system. As revealed… in a recent report, the fragmented structure of the Danish health system poses challenges in providing effectively coordinated care to patients with chronic diseases…

  20. Victimization of high performers: the roles of envy and work group identification.

    Science.gov (United States)

    Kim, Eugene; Glomb, Theresa M

    2014-07-01

    Drawing from victim precipitation, social comparison, and identity theories, this study develops and tests an integrative model of the victimization of high-performing employees. We examine envy as an explanatory mechanism of the victimization of high performers by fellow group members and propose work group identification as a moderator of this envy mechanism. Study 1, in a sample of 4,874 university staff employees in 339 work groups, supports the proposition that high performers are more likely to be targets of victimization. In Study 2, multisource data collected at 2 time points (217 employees in 67 work groups in 3 organizations) support the propositions that high performers are more likely to be targets of victimization because of fellow group members' envy, and that work group identification mitigates this mediated relationship.

  1. High Performance Palmprint Identification System Based On Two Dimensional Gabor

    Directory of Open Access Journals (Sweden)

    Erdiawan

    2010-12-01

    Palmprint recognition is a relatively new physiological biometric. Palmprint region of interest (ROI) segmentation and feature extraction are the two central issues in palmprint recognition: how to extract the ROI and how to extract the features of the palmprint. This paper introduces a two-step center-of-mass moment method for ROI segmentation and then applies two-dimensional (2D) Gabor filters to obtain a palm code as the palmprint feature vector. Normalized Hamming distance is used to measure the similarity of two palmprint feature vectors. The system was tested on a database of 1,000 palmprint images, generated from 5 samples from each of 200 randomly selected persons. Experimental results show that the system achieves high performance, with a success rate of about 98.7% (FRR=1.1667%, FAR=0.1111%, T=0.376).
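The matching step described in this abstract, a normalized Hamming distance between binary palm codes accepted below a threshold such as T=0.376, can be sketched as follows. The 8-bit codes are toy stand-ins for real 2D Gabor filter outputs.

```python
# Minimal sketch of palm-code matching with the normalized Hamming distance.
# The bit vectors are toy stand-ins for real 2D Gabor palm codes.

def normalized_hamming(code_a, code_b):
    """Fraction of bit positions at which two codes differ (0 = identical)."""
    if len(code_a) != len(code_b):
        raise ValueError("palm codes must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

genuine  = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]   # same palm, one bit flipped by noise
impostor = [0, 1, 0, 0, 1, 1, 0, 1]   # complement: maximally different

d_genuine = normalized_hamming(genuine, probe)      # 0.125
d_impostor = normalized_hamming(genuine, impostor)  # 1.0
print(d_genuine, d_impostor)
```

With the paper's threshold T=0.376, the probe (distance 0.125) would be accepted as a match and the impostor (distance 1.0) rejected.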

  2. Development of high-performance solar LED lighting system

    KAUST Repository

    Huang, B.J.

    2010-08-01

    The present study developed a high-performance charge/discharge controller for a stand-alone solar LED lighting system by incorporating an nMPPO system design, PWM battery charge control, and PWM battery discharge control to drive the LED directly. The MPPT controller can then be removed from the stand-alone solar system, and the charged capacity of the battery increases by 9.7%. For LEDs driven by PWM current directly from the battery, a reliability test of LED light decay was run continuously for 13,200 h, showing that the light decay of PWM-driven LEDs is the same as that of constant-current-driven LEDs. The switching energy loss of the MOSFET in the PWM battery discharge control is less than 1%. Three solar-powered LED lighting systems (18 W, 100 W and 150 W LED) were designed and built. Long-term outdoor field tests have shown that system performance with the control system developed in the present study is satisfactory. The loss of load probability for the 18 W solar LED system is 14.1% in winter and zero in summer; for the 100 W solar LED system, it is 3.6% in spring. © 2009 Elsevier Ltd. All rights reserved.
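The loss-of-load probability quoted for these systems can be illustrated with a simple daily energy balance. The battery size, nightly load, and harvest figures below are invented for the sketch, not taken from the study.

```python
# Hedged sketch: loss-of-load probability (LLP) as the fraction of days on
# which battery charge plus solar harvest cannot cover the lamp's nightly
# demand. All Wh figures are made-up illustrations.

def loss_of_load_probability(daily_solar_wh, nightly_load_wh, battery_wh):
    soc = battery_wh              # state of charge; start fully charged
    deficit_days = 0
    for harvest in daily_solar_wh:
        soc = min(battery_wh, soc + harvest)   # charge during the day
        if soc >= nightly_load_wh:
            soc -= nightly_load_wh             # lamp runs all night
        else:
            deficit_days += 1                  # load is shed this night
            soc = 0
    return deficit_days / len(daily_solar_wh)

winter_week = [120, 40, 30, 150, 20, 35, 140]  # Wh harvested each day
llp = loss_of_load_probability(winter_week, nightly_load_wh=100, battery_wh=200)
print(llp)  # 3 deficit nights out of 7
```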

  3. Operating System Support for High-Performance Solid State Drives

    DEFF Research Database (Denmark)

    Bjørling, Matias

    The performance of Solid State Drives (SSD) has evolved from hundreds to millions of I/Os per second in the past three years. Such a radical evolution is transforming both the storage and the software industries. Indeed, software designed based on the assumption of slow IOs has become… of the operating system in reducing the gap, and enabling new forms of communication and even co-design between applications and high-performance SSDs. More specifically, we studied the storage layers within the Linux kernel. We explore the following issues: (i) what are the limitations of the legacy block… a form of application-SSD co-design? What are the impacts on operating system design? (v) What would it take to provide quality of service for applications requiring millions of I/O per second? The dissertation consists of six publications covering these issues. Two of the main contributions…

  4. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need the capability to handle large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time and ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  5. High Performance Gigabit Ethernet Switches for DAQ Systems

    CERN Document Server

    Barczyk, Artur

    2005-01-01

    Commercially available high performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems, on the other hand, usually generate very specific traffic patterns, with e.g. deterministic arrival times. The industry-accepted loss-less threshold of 99.999% delivery may still permit unacceptably high loss for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches passing this criterion under random traffic can show significantly higher loss rates when subjected to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis-based core switches, in a test-bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet-level simulation of the complete LHCb readout network. In this paper we report on the...

  6. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few tens of gigaops, data archived in HSMs in a few tens of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and the IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to tens of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  7. Thermoplastic high performance cable insulation systems for flexible system operation

    OpenAIRE

    Vaughan, A.S.; Green, C.D.; Hosier, I.L.; Stevens, G.C.; Pye, A.; Thomas, J.L.; Sutton, S.J.; Guessens, T.

    2015-01-01

    Crosslinked polyethylene (XLPE) has been the cable insulation material of choice in many different transmission and distribution applications for many years and, while this material has many desirable characteristics, its thermo-mechanical properties have consequences for both continuous and emergency cable ratings which, in turn, have implications for system operational flexibility. In this paper, we describe the principles and two embodiments through which new thermoplastic insulation syste...

  8. Configurable computing for high-security/high-performance ambient systems

    OpenAIRE

    Gogniat, Guy; Bossuet, Lilian; Burleson, Wayne

    2005-01-01

    This paper stresses why configurable computing is a promising target to guarantee the hardware security of ambient systems. Many works have focused on configurable computing to demonstrate its efficiency, but as far as we know none have addressed the security issue from the system level down to the circuit level. This paper recalls the main hardware attacks before focusing on the issues involved in building secure systems on configurable computing. Two complementary views are presented to provide a guide for security, and main issues ...

  9. Organization and multidisciplinary work in Olympic high performance centers in the USA

    OpenAIRE

    William J. Moreau, DC Dacbsp.; Dustin Nabhan, DC, Dacbsp.

    2012-01-01

    The organization and methodology of providing services to athletes through Olympic high performance centers varies among the National Olympic Committees (NOC). Between NOCs, provider composition and methodology for the delivery of services differ. Services provided typically include sports medicine and sports performance. NOCs may provide services through a university-based system or high performance centers. The United States Olympic Committee (USOC) provides services using multiple approach...

  10. Organization and multidisciplinary work in Olympic high performance centers in the USA

    Directory of Open Access Journals (Sweden)

    William J. Moreau, DC Dacbsp.

    2012-05-01

    The organization and methodology of providing services to athletes through Olympic high performance centers varies among the National Olympic Committees (NOCs). Between NOCs, provider composition and methodology for the delivery of services differ. Services provided typically include sports medicine and sports performance. NOCs may provide services through a university-based system or high performance centers. The United States Olympic Committee (USOC) provides services using multiple approaches through a hybrid model that includes three Olympic Training Centers, National Governing Body (NGB) high performance centers and independent specialty care centers. Some highly developed National Governing Bodies have dedicated high performance training centers that serve only their sport. The model of sports medicine and sports performance programming utilized by the USOC Olympic Training Centers is described in this manuscript.

  11. High performance work practices in the health care sector: A Dutch case study

    NARCIS (Netherlands)

    Boselie, J.P.P.E.F.|info:eu-repo/dai/nl/177012277

    2010-01-01

    Purpose – This paper aims to present an empirical study of the effect of high performance work practices on commitment and citizenship behaviour in the health care sector. The theory suggests that individual employees are willing “to go the extra mile” when they are given the opportunity to develop

  12. The Design and Construction of a Battery Electric Vehicle Propulsion System - High Performance Electric Kart Application

    Science.gov (United States)

    Burridge, Mark; Alahakoon, Sanath

    2017-07-01

    This paper presents an electric propulsion system designed specifically to meet the performance specification for a competition racing kart application. The paper presents the procedure for the engineering design, construction and testing of the vehicle's electric powertrain. High-performance electric karting is not an established technology within Australia. It is expected that this work will provide design guidelines for a high-performance electric propulsion system capable of forming the basis of a competitive electric kart racing formula for Australian conditions.

  13. Using high-performance work practices in health care organizations: a perspective for nursing.

    Science.gov (United States)

    McAlearney, Ann Scheck; Robbins, Julie

    2014-01-01

    Studies suggest that the use of high-performance work practices (HPWPs) may help improve quality in health care. We interviewed 67 administrators and clinicians across 5 health care organizations and found that the use of HPWPs was valued and salient for nurses. Communication appeared particularly important to facilitate HPWP use. Enhancing our understanding of HPWP use may help improve the work environment for nurses while also increasing care quality.

  14. Low-Cost, High-Performance Hall Thruster Support System

    Science.gov (United States)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and a stable and sustained operation during severe Hall effect thruster current oscillations. Laboratory testing has demonstrated discharge module efficiency of 96 percent, which is considerably higher than current state of the art.

  15. Toward high performance radioisotope thermophotovoltaic systems using spectral control

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiawa, E-mail: xiawaw@mit.edu [Electrical Engineering Department, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Chan, Walker [Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Stelmakh, Veronika [Electrical Engineering Department, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Celanovic, Ivan [Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Fisher, Peter [Institute for Soldier Nanotechnologies, Massachusetts Institute of Technology, Cambridge, MA02139 (United States); Physics Department, Massachusetts Institute of Technology, Cambridge, MA02139 (United States)

    2016-12-01

    This work describes RTPV-PhC-1, an initial prototype for a radioisotope thermophotovoltaic (RTPV) system using a two-dimensional photonic crystal emitter and a low-bandgap thermophotovoltaic (TPV) cell to realize spectral control. We validated a system simulation using measurements of RTPV-PhC-1 and its comparison setup RTPV-FlatTa-1, which has the same configuration except for a polished tantalum emitter. The emitter of RTPV-PhC-1, powered by an electric heater providing energy equivalent to one plutonia fuel pellet, reached 950 °C with 52 W of thermal input power and produced 208 mW of output power from a 1 cm² TPV cell. We compared the system performance using the photonic crystal emitter to the polished flat tantalum emitter and found that spectral control with the photonic crystal was four times more efficient. Based on the simulation, with a larger cell area, better TPV cells, and improved insulation design, the system powered by a fuel-pellet-equivalent heat source is expected to reach an efficiency of 7.8%.
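The prototype's end-to-end conversion efficiency follows directly from the quoted figures (208 mW electrical out for 52 W thermal in):

```python
# Back-of-envelope check on the quoted prototype numbers for RTPV-PhC-1.
p_thermal_in = 52.0      # W, thermal input (one plutonia fuel pellet equivalent)
p_electric_out = 0.208   # W, electrical output from the 1 cm^2 TPV cell

efficiency = p_electric_out / p_thermal_in
print(f"system efficiency = {efficiency:.2%}")  # → 0.40%
```

This 0.4% prototype figure sits well below the projected 7.8%, which, per the abstract, requires a larger cell area, better TPV cells, and improved insulation.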

  16. Evolution of a high-performance storage system based on magnetic tape instrumentation recorders

    Science.gov (United States)

    Peters, Bruce

    1993-01-01

    In order to provide transparent access to data in network computing environments, high performance storage systems are getting smarter as well as faster. Magnetic tape instrumentation recorders contain an increasing amount of intelligence in the form of software and firmware that manages the processes of capturing input signals and data, putting them on media, and then reproducing or playing them back. Such intelligence makes them better recorders, ideally suited for applications requiring the high-speed capture and playback of large streams of signals or data. To make recorders better storage systems, intelligence is also being added to provide appropriate computer and network interfaces along with services that enable them to interoperate with host computers or network client and server entities. Thus, recorders are evolving into high-performance storage systems that become an integral part of a shared information system. Datatape has embarked on a program with the Caltech-sponsored Concurrent Supercomputer Consortium to develop a smart mass storage system. Working within the framework of the emerging IEEE Mass Storage System Reference Model, a high-performance storage system that works with the STX File Server to provide storage services for the Intel Touchstone Delta supercomputer is being built. Our objective is to provide the high storage capacity and transfer rate required to support grand challenge applications, such as global climate modeling.

  17. The evaluation study of high performance gas target system

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Min Goo; Yang, Seung Dae; Kim, Sang Wook

    2008-06-15

    The objective of this study is the improvement of a gas target and targetry to increase radioisotope production yields. The main results are as follows. 1. Improvement of the beam entrance of the gas target: a deep-hole grid was designed for the beam entrance, and finite element method (FEM) analysis verified that this design is more effective than the old one. 2. Improvement of the target gas loading and withdrawal system: helium gas and vacuum lines were installed to evaluate production yields; using these lines, the recovery yield was improved and residual impurities were reduced. 3. Improvement of target cooling efficiency: for a cylindrical target, a short target cavity is more effective for high production yields. To improve cooling efficiency, cooling fins were added to the target design; placing the fins inside the target cavity better suppresses the target pressure rise and density reduction during proton beam irradiation. In conclusion, the target with fins inside the cavity was better suited for high-current irradiation and mass RI production.

  18. Performance analysis of memory hierarchies in high performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Yogesh, Agrawel [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
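The core idea above, that raising the average number of references to each item while it is cache-resident cuts the number of memory requests, can be demonstrated with a toy model. The fully associative LRU cache below is a hypothetical configuration for illustration, not the thesis's DLX memory system.

```python
# Toy model: more references per cache-resident block means fewer misses,
# i.e. less memory traffic. Simulates a small fully associative LRU cache.

from collections import OrderedDict

def count_misses(addresses, num_blocks=4, block_words=8):
    cache = OrderedDict()           # block id -> None, kept in LRU order
    misses = 0
    for addr in addresses:
        block = addr // block_words
        if block in cache:
            cache.move_to_end(block)          # hit: refresh LRU position
        else:
            misses += 1                       # miss: fetch block from memory
            cache[block] = None
            if len(cache) > num_blocks:
                cache.popitem(last=False)     # evict least recently used
    return misses

n = 160
sequential = list(range(n))                    # 8 references per block resident
strided    = [(i * 32) % n for i in range(n)]  # cycles 5 blocks through a
                                               # 4-block cache: LRU thrashing

print(count_misses(sequential), count_misses(strided))
```

The sequential pattern touches each of its 20 blocks eight times while resident (20 misses for 160 accesses); the strided pattern evicts every block before reusing it, so all 160 accesses miss.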

  19. Decal Electronics: Printable Packaged with 3D Printing High-Performance Flexible CMOS Electronic Systems

    KAUST Repository

    Sevilla, Galo T.

    2016-10-14

    High-performance complementary metal oxide semiconductor electronics are flexed, packaged as decal electronics using 3D printing, and then produced in roll-to-roll fashion, yielding highly manufacturable printed flexible high-performance electronic systems.

  20. The importance of a high-performance work environment in hospitals.

    Science.gov (United States)

    Weinberg, Dana Beth; Avgar, Ariel Chanan; Sugrue, Noreen M; Cooney-Miner, Dianne

    2013-02-01

    To examine the benefits of a high-performance work environment (HPWE) for employees, patients, and hospitals. Forty-five adult, medical-surgical units in nine hospitals in upstate New York. Cross-sectional study. Surveys were collected from 1,527 unit-based hospital providers (68.5 percent response rate). Hospitals provided unit turnover and patient data (16,459 discharge records and 2,920 patient surveys). HPWE, as perceived by multiple occupational groups on a unit, is significantly associated with desirable work processes, retention indicators, and care quality. Our findings underscore the potential benefits for providers, patients, and health care organizations of designing work environments that value and support a broad range of employees as having essential contributions to make to the care process and their organizations. © Health Research and Educational Trust.

  1. Measurements over distributed high performance computing and storage systems

    Science.gov (United States)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  2. High performance computing system for flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1991-01-01

    The computer architecture and components used in the NASA Langley Advanced Real-Time Simulation System (ARTSS) are briefly described and illustrated with diagrams and graphs. Particular attention is given to the advanced Convex C220 processing units, the UNIX-based operating system, the software interface to the fiber-optic-linked Computer Automated Measurement and Control system, configuration-management and real-time supervisor software, ARTSS hardware modifications, and the current implementation status. Simulation applications considered include the Transport Systems Research Vehicle, the Differential Maneuvering Simulator, the General Aviation Simulator, and the Visual Motion Simulator.

  3. Integrated energy system for a high performance building

    Science.gov (United States)

    Jaczko, Kristen

    Integrated energy systems have the potential to reduce the energy consumption of residential buildings in Canada. These systems incorporate the components that meet the building's heating, cooling and domestic hot water loads into a single system in order to reduce energy losses. An integrated energy system, consisting of a variable speed heat pump, cold and hot thermal storage tanks, a photovoltaic/thermal (PV/T) collector array and a battery bank, was designed for the Queen's Solar Design Team's (QSDT) test house. The system uses a radiant floor to provide space-heating and sensible cooling, and a dedicated outdoor air system provides ventilation and dehumidifies the incoming fresh air. The test house, the Queen's Solar Education Centre (QSEC), and the integrated energy system were modelled in TRNSYS. Additionally, a new TRNSYS Type was developed to model the PV/T collectors, enabling the modelling of energy collection from the ambient air. A parametric study was carried out in TRNSYS to investigate the effect of various parameters on the overall energy performance of the system. These parameters included the PV/T array size and the slope of the collectors, the heat pump source and load-side inlet temperature setpoints, the compressor speed control, and the size of the thermal storage tanks and the battery bank. The controls of the heat pump were found to have a large impact on the performance of the integrated energy system. For example, a low evaporator setpoint improved the overall free energy ratio (FER) of the system, but the heat pump performance was lowered. Reducing the heat loss of the PV/T panels was not found to have a large effect on system performance, however, as the heat pump is able to lower the inlet collector fluid temperature, thus reducing thermal losses. 
From the results of the sensitivity study, a recommended system model was created and this system had a predicted FER of 77.9% in Kingston, Ontario, neglecting the energy consumption of
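The free energy ratio figure can be illustrated with a quick sketch, assuming the common definition FER = 1 - (purchased energy / total energy demand). The monthly figures below are made-up placeholders, not the study's data.

```python
# Hedged sketch of a free energy ratio (FER) calculation: the fraction of
# the building's total energy demand met without grid purchases.
# Monthly kWh values are invented for illustration.

purchased_kwh = [210, 180, 150, 90, 40, 20, 15, 25, 60, 110, 170, 200]  # grid
demand_kwh    = [600, 540, 520, 450, 400, 380, 370, 380, 420, 480, 540, 590]

fer = 1 - sum(purchased_kwh) / sum(demand_kwh)
print(f"FER = {fer:.1%}")
```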

  4. Putting our differences to work the fastest way to innovation, leadership, and high performance

    CERN Document Server

    Kennedy, Debbe

    2008-01-01

    In this rapidly changing world, organisations of every type are finding that putting our differences to work is the most powerful accelerator for generating the new levels of innovation, energy, leadership and employee engagement needed for high performance. This new book is a practical guide to the strategies and steps needed to make differences the drivers of success. Debbe Kennedy shows that leveraging all the dimensions of difference -- from thinking styles, perspectives, experience, position, goals, competencies, work habits, culture and management style to traditional diversity rubrics such as gender, race, ethnicity, physical abilities, sexual orientation and age -- can accelerate teams' and organisations' performance. Kennedy's book is a practical guide for leaders at all levels that begins with a compelling invitation to pioneer a new era of leadership. It establishes the need for change, sets the course of action, offers proof points and defines five distinctive qualities all leaders need to add to...

  5. Total systems design analysis of high performance structures

    Science.gov (United States)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  7. High-Performance Control in Radio Frequency Power Amplification Systems

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Kofod

    This thesis presents a broad study of methods for increasing the efficiency of narrow-band radio transmitters. The study is centered around the base station application and TETRA/TEDS networks. The general solution space studied is that of envelope tracking applied to linear class-A/B radio...... frequency power amplifiers (RFPAs) in conjunction with cartesian feedback (CFB) used to linearize the overall transmitter system. On a system level, it is demonstrated how envelope tracking is particularly useful for RF carriers with high peak-to-average power ratios, such as TEDS with 10dB. It is also...... and ripple voltage. It is found that the simple fourth-order filter buck converter is ideal for TETRA and TEDS envelope tracking power supplies. The problem of extracting maximum control bandwidth from a given power topology is given particular attention, with a range of, arguably new, insights resulting...

  8. Ceph: A Scalable, High-Performance Distributed File System

    OpenAIRE

    Weil, Sage; Brandt, Scott A.; Miller, Ethan L; Long, Darrell D. E.; Maltzahn, Carlos

    2006-01-01

    We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scala- bility. Ceph maximizes the separation between data and metadata management by replacing allocation ta- bles with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clus- ters of unreliable object storage devices (OSDs). We leverage device intelligence by distributing data replica- tion, failure detection and recovery to semi-autonomous ...
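
The idea of replacing allocation tables with a pseudo-random placement function can be sketched with rendezvous (highest-random-weight) hashing. CRUSH itself is more elaborate (it accounts for device weights and failure domains), so this is only an illustration of the principle that any client can compute an object's location from a hash, without consulting a central table:

```python
import hashlib

def place_replicas(object_id, devices, n_replicas=3):
    """Map an object to n distinct storage devices using only a hash of
    (object, device): deterministic, table-free, and stable as long as
    the device list is unchanged."""
    ranked = sorted(
        devices,
        key=lambda d: hashlib.sha256(f"{object_id}:{d}".encode()).hexdigest(),
        reverse=True,
    )
    return ranked[:n_replicas]

# hypothetical cluster of 8 object storage devices (OSDs)
osds = [f"osd.{i}" for i in range(8)]
print(place_replicas("inode123/block0", osds))
```

Because every client ranks devices the same way, data placement, replication targets, and recovery sources can all be computed independently.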

  9. High performance graphical data trending in a distributed system

    Science.gov (United States)

    Maureira, Cristián; Hoffstadt, Arturo; López, Joao; Troncoso, Nicolás; Tobar, Rodrigo; von Brand, Horst H.

    2010-07-01

    Trending near real-time data is a complex task, especially in distributed environments. This problem was typically tackled in financial and transaction systems, but it now applies at its utmost in other contexts, such as hardware monitoring in large-scale projects. Data handling requires subscription to specific data feeds that need to be implemented avoiding replication, and the rate of transmission has to be assured. On the side of the graphical client, rendering needs to be fast enough that it may be perceived as real-time processing and display. ALMA Common Software (ACS) provides a software infrastructure for distributed projects which may require trending large volumes of data. For these requirements ACS offers a Sampling System, which allows sampling selected data feeds at different frequencies. Along with this, it provides a graphical tool to plot the collected information, which needs to perform as well as possible. Currently there are many graphical libraries available for data trending. This poses a problem when trying to choose one: it is necessary to know which has the best performance, and which combination of programming language and library is the best decision. This document analyzes the performance of different graphical libraries and languages in order to identify the optimal environment when writing or re-factoring an application using trending technologies in distributed systems. To properly address the complexity of the problem, a specific set of alternatives was pre-selected, including libraries in Java and Python, languages which are part of ACS. A stress benchmark will be developed in a simulated distributed environment using ACS in order to test the trending libraries.
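
A stress benchmark of the kind described can be sketched as a small harness that times a rendering callable and reports frames per second; the two workloads below are hypothetical stand-ins for the plotting back-ends under comparison, not the libraries evaluated in the paper:

```python
import time

def benchmark(render, frames=100, repeats=3):
    """Time a rendering callable over several runs and report the best
    frames-per-second figure (best-of-N reduces scheduler noise)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for i in range(frames):
            render(i)
        best = min(best, time.perf_counter() - start)
    return frames / best

# stand-ins for two plotting back-ends with different per-frame costs
fast = lambda i: sum(range(100))
slow = lambda i: sum(range(10000))
print(f"fast: {benchmark(fast):.0f} fps, slow: {benchmark(slow):.0f} fps")
```

In a real comparison, `render` would wrap each library's redraw call so all candidates are measured against the same data feed.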

  10. High performance computing in power and energy systems

    CERN Document Server

    Khaitan, Siddhartha Kumar

    2012-01-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need the capability to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, casc

  11. A high performance pneumatic braking system for heavy vehicles

    Science.gov (United States)

    Miller, Jonathan I.; Cebon, David

    2010-12-01

    Current research into reducing actuator delays in pneumatic brake systems is opening the door for advanced anti-lock braking algorithms to be used on heavy goods vehicles. However, these algorithms require the knowledge of variables that are impractical to measure directly. This paper introduces a sliding mode braking force observer to support a sliding mode controller for air-braked heavy vehicles. The performance of the observer is examined through simulations and field testing of an articulated heavy vehicle. The observer operated robustly during single-wheel vehicle simulations, and provided reasonable estimates of surface friction from test data. The effect of brake gain errors on the controller and observer are illustrated, and a recursive least squares estimator is derived for the brake gain. The estimator converged within 0.3 s in simulations and vehicle trials.
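
The recursive least squares estimator for the brake gain can be sketched in a few lines for the scalar case y = g·u; the gain, pressure inputs and forgetting factor below are illustrative values, not the paper's:

```python
def make_rls(theta0=1.0, p0=1000.0, forgetting=0.98):
    """Scalar recursive least squares: estimate the gain g in y = g * u,
    with exponential forgetting so the estimate can track slow drift."""
    state = {"theta": theta0, "P": p0}
    def update(u, y):
        P, theta = state["P"], state["theta"]
        k = P * u / (forgetting + u * P * u)   # Kalman-style gain
        theta += k * (y - u * theta)           # correct by the residual
        P = (P - k * u * P) / forgetting       # update covariance
        state["theta"], state["P"] = theta, P
        return theta
    return update

true_gain = 2.5   # hypothetical brake gain (torque per unit pressure)
rls = make_rls()
for step in range(50):
    u = 1.0 + 0.5 * (step % 3)   # varying demanded pressure
    est = rls(u, true_gain * u)  # measured torque, noise-free here
print(est)                       # converges toward ~2.5
```

With noise-free data the estimate converges within a handful of samples, consistent with the sub-second convergence the abstract reports for simulation.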

  12. WHAT MEANS HIGH PERFORMANCE WORK PRACTICES FOR HUMAN RESOURCES IN AN ORGANIZATION?

    Directory of Open Access Journals (Sweden)

    ANCA-IOANA MUNTEANU

    2014-10-01

    Full Text Available This paper provides an overview of the different approaches in the literature to the concept of high performance work practices (HPWP), showing how the term has evolved over time. Analyzing the literature, the significance of the term is seen to have evolved with customer requirements. Organizations need employees who are easily adaptable and able to meet customer needs in a timely manner. Organizations must therefore satisfy their customers on the one hand and, on the other, their employees, through whom firms achieve their goals. Currently, particular emphasis is placed on employee motivation, training, involvement in decision making, delegation of authority, performance-based remuneration, and rewarding loyalty. All of the above are considered HPWP, and the AMO model is representative of them. The implementation of HPWP is a current concern for organizations wishing to achieve a sustainable competitive advantage. In this sense, this article may provide information of interest to businesses.

  13. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We study then the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.

  14. Exploiting communication concurrency on high performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Chaimov, Nicholas [Univ. of Oregon, Eugene, OR (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    Although logically available, applications may not exploit enough instantaneous communication concurrency to maximize hardware utilization on HPC systems. This is exacerbated in hybrid programming models such as SPMD+OpenMP. We present the design of a "multi-threaded" runtime able to transparently increase the instantaneous network concurrency and to provide near saturation bandwidth, independent of the application configuration and dynamic behavior. The runtime forwards communication requests from application level tasks to multiple communication servers. Our techniques alleviate the need for spatial and temporal application level message concurrency optimizations. Experimental results show improved message throughput and bandwidth by as much as 150% for 4KB bytes messages on InfiniBand and by as much as 120% for 4KB byte messages on Cray Aries. For more complex operations such as all-to-all collectives, we observe as much as 30% speedup. This translates into 23% speedup on 12,288 cores for a NAS FT implemented using FFTW. We also observe as much as 76% speedup on 1,500 cores for an already optimized UPC+OpenMP geometric multigrid application using hybrid parallelism.
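
The runtime's core idea, forwarding communication requests from application-level tasks to multiple communication servers so that transfers overlap instead of serialising, can be sketched with threads and a shared queue; the "send" here is a stand-in list append, not a real network call:

```python
import queue
import threading

def start_comm_servers(n_servers=4):
    """Forward application 'send' requests to several communication
    server threads so transfers can proceed concurrently."""
    inbox = queue.Queue()
    done = []
    lock = threading.Lock()

    def server():
        while True:
            msg = inbox.get()
            if msg is None:          # sentinel: shut this server down
                return
            with lock:
                done.append(msg)     # stand-in for an actual network send
            inbox.task_done()

    threads = [threading.Thread(target=server) for _ in range(n_servers)]
    for t in threads:
        t.start()
    return inbox, threads, done

inbox, threads, done = start_comm_servers()
for i in range(100):                 # application-level tasks enqueue work
    inbox.put(("msg", i))
inbox.join()                         # wait until every request is forwarded
for _ in threads:
    inbox.put(None)
print(len(done))  # 100
```

The application never blocks on an individual transfer; increasing `n_servers` raises the instantaneous concurrency exactly as the runtime described above does for network injection.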

  15. Implementing high-performance work practices in healthcare organizations: qualitative and conceptual evidence.

    Science.gov (United States)

    McAlearney, Ann Scheck; Robbins, Julie; Garman, Andrew N; Song, Paula H

    2013-01-01

    Studies across industries suggest that the systematic use of high-performance work practices (HPWPs) may be an effective but underused strategy to improve quality of care in healthcare organizations. Optimal use of HPWPs depends on how they are implemented, yet we know little about their implementation in healthcare. We conducted 67 key informant interviews in five healthcare organizations, each considered to have exemplary work practices in place and to deliver high-quality care, as part of an extensive study of HPWP use in healthcare. We analyzed interview transcripts inductively and deductively to examine why and how organizations implement HPWPs. We used an evidence-based model of complex innovation adoption to guide our exploration of factors that facilitate HPWP implementation. We found considerable variability in interviewees' reasons for implementing HPWPs, including macro-organizational (strategic level) and micro-organizational (individual level) reasons. This variability highlighted the complex context for HPWP implementation in many organizations. We also found that our application of an innovation implementation model helped clarify and categorize facilitators of HPWP implementation, thus providing insight on how these factors can contribute to implementation effectiveness. Focusing efforts on clarifying definitions, building commitment, and ensuring consistency in the application of work practices may be particularly important elements of successful implementation.

  16. High-Performance Image Acquisition and Processing System with MTCA.4

    Science.gov (United States)

    Makowski, D.; Mielczarek, A.; Perek, P.; Jabłoński, G.; Orlikowski, M.; Sakowicz, B.; Napieralski, A.; Makijarvi, P.; Simrock, S.; Martin, V.

    2015-06-01

    Fast evolution of high-performance cameras in recent years has made them promising tools for observing transient and fast events in large-scale scientific experiments. Complex experiments, such as ITER, take advantage of high-performance imaging systems consisting of several fast cameras working in the range of visible and infrared light. However, the application of such devices requires the use of high-performance data acquisition systems able to read and transfer large amounts of data, reaching even 10 Gbit/s for a single camera. The MTCA.4 form factor fulfils the requirements of demanding imaging systems. The paper presents a first implementation of a complete image acquisition system built on the basis of the MTCA.4 architecture, dedicated to operation with high-resolution fast cameras equipped with a Camera Link interface. Image data from the camera are received by the frame grabber card and transmitted to the host via the PCIe interface. The modular structure of the MTCA.4 architecture allows connecting several cameras to a single MTCA chassis. The system can operate in two modes: with an internal CPU installed in the MTCA chassis or with an external CPU connected to the chassis via a PCIe link. The use of an external CPU opens the possibility of aggregating data from different subsystems. The system supports precise synchronization with a time reference using the Precision Time Protocol (IEEE 1588). The timing modules ensure clock distribution and trigger generation on backplane lines. These allow synchronization of image acquisition from different cameras with high precision. The software support for the system includes low-level drivers and API libraries for all components and a high-level EPICS-based environment for system control and monitoring.

  17. On the counterintuitive consequences of high-performance work practices in cross-border post-merger human integration

    DEFF Research Database (Denmark)

    Vasilaki, A.; Smith, Pernille; Giangreco, A.

    2012-01-01

    , such as communication, employee involvement, and team building, may not always produce the expected effects on human integration; rather, it can have the opposite effects if top management does not closely monitor the immediate results of deploying such practices. Implications for managers dealing with post......, this article investigates the impact of systemic and integrated human resource practices [i.e., high-performance work practices (HPWPs)] on human integration and how their implementation affects employees' behaviours and attitudes towards post-merger human integration. We find that the implementation of HPWPs...

  18. [Estimation of chloramphenicol in the working area air by high performance liquid chromatography].

    Science.gov (United States)

    Kristova-Bagdasarian, V L; Chokhadzhieva, D

    2008-01-01

    Chloramphenicol (levomycetin) is a broad-spectrum antibiotic that is active against gram-positive and gram-negative microorganisms. At present, it is manufactured via organic synthesis. Working-place air becomes polluted during the manufacture and use of medicines containing chloramphenicol. In the working-place air, chloramphenicol is present as a disintegration aerosol and may provoke occupational diseases of varying severity in exposed persons. A procedure has been developed to measure airborne chloramphenicol using high performance liquid chromatography. Aspiration through an AFA FPP-15 aerosol filter is a suitable method for air chloramphenicol sampling. The collected chloramphenicol is removed from the filter via triple methanol extraction in an ultrasound bath. The pooled extract is evaporated to dryness in a current of nitrogen and the dry residue is dissolved in the mobile phase containing acetonitrile : buffer (pH 4.8) = 30:70. The chloramphenicol determination procedure using reverse-phase liquid chromatography with ultraviolet detection at a wavelength of 275 nm has been developed and completely validated. Chromatographic conditions are given. The retention time of chloramphenicol is 6.5 min. The detection limit is 0.1 microg/cm3. The method shows a linear relationship between the concentration of chloramphenicol (microg/cm3) and the peak area (mm2) in the range of 1 to 20 microg/cm3.
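
Quantification against the linear range reported above (1 to 20 microg/cm3) amounts to fitting and then inverting a calibration line of peak area versus concentration; the standard concentrations and peak areas below are illustrative values, not the paper's data:

```python
def fit_calibration(concentrations, peak_areas):
    """Ordinary least squares line: area = slope * conc + intercept,
    as used for an HPLC calibration curve."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(peak_areas) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, peak_areas))
    sxx = sum((x - mx) ** 2 for x in concentrations)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(area, slope, intercept):
    """Invert the calibration line to read a concentration (microg/cm3)."""
    return (area - intercept) / slope

# hypothetical standards spanning the 1-20 microg/cm3 linear range
conc = [1.0, 5.0, 10.0, 15.0, 20.0]
area = [12.0, 60.0, 120.0, 180.0, 240.0]   # peak areas in mm2, illustrative
slope, intercept = fit_calibration(conc, area)
print(quantify(66.0, slope, intercept))    # concentration of an unknown
```

An unknown sample's peak area is read off the same line, provided it falls inside the validated linear range.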

  19. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    Science.gov (United States)

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  20. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of a layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system.
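
The size-adaptive, distributable block volumes described above can be sketched as a partition of a 3D volume's index space, where the block shape is a tunable parameter and edge blocks shrink to fit:

```python
from itertools import product

def block_volumes(shape, block):
    """Partition a 3D volume's index space into distributable blocks.
    Edge blocks shrink to fit, so any shape is covered exactly once."""
    starts = [range(0, s, b) for s, b in zip(shape, block)]
    for z, y, x in product(*starts):
        yield ((z, min(z + block[0], shape[0])),
               (y, min(y + block[1], shape[1])),
               (x, min(x + block[2], shape[2])))

# a 100^3 volume split into (at most) 64^3 blocks
blocks = list(block_volumes((100, 100, 100), (64, 64, 64)))
print(len(blocks))  # 2 * 2 * 2 = 8 blocks
```

Each block is an independent unit of work, which is what makes scheduling and load balancing across cores, cluster nodes, or cloud instances straightforward.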

  1. Accuracy of W' Recovery Kinetics in High Performance Cyclists - Modelling Intermittent Work Capacity.

    Science.gov (United States)

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-10-16

    With knowledge of an individual's critical power (CP) and W', the SKIBA 2 model provides a framework with which to track W' balance during intermittent high intensity work bouts. There are concerns that the time constant controlling the recovery rate of W' (τW') may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to the athlete's CP (DCP). Using the framework of the SKIBA 2 model, the τW' values were modified for each protocol to achieve an accurate W' at volitional exhaustion. Modified τW' values were compared to the equivalent SKIBA 2 τW' values to assess the difference in recovery rates for this population. Plotting modified τW' values against DCP showed the adjusted relationship between work-rate and recovery-rate. Comparing modified τW' values against the SKIBA 2 τW' values showed a negative bias of 112 ± 46 s (mean ± 95% CL), suggesting athletes recovered W' faster than predicted by SKIBA 2 (p = 0.0001). The modified τW'-to-DCP relationship was best described by a power function: τW' = 2287.2 × DCP^(-0.688) (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists as it under-predicts the recovery rate of W'. The modified τW' equation presented will require validation, but appears more appropriate for high performance athletes. Individual τW' relationships may be necessary in order to maximise the model's validity.
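
The paper's fitted recovery time constant, τW' = 2287.2 × DCP^(-0.688), can be dropped into a simple discrete-time W' balance sketch. This is a simplified SKIBA-style accounting (spend above CP, exponential recovery below it), not the paper's exact integral formulation; the CP and W' values are illustrative:

```python
import math

def tau_w_prime(d_cp):
    """Recovery time constant from the paper's fitted power function:
    tau = 2287.2 * DCP^(-0.688), with DCP in watts below CP."""
    return 2287.2 * d_cp ** -0.688

def w_prime_balance(power, cp=400.0, w_prime=20000.0, dt=1.0):
    """Discrete-time W' balance: spend (P - CP)*dt above CP, recover
    exponentially toward full W' below it."""
    bal = w_prime
    for p in power:
        if p > cp:
            bal -= (p - cp) * dt
        else:
            tau = tau_w_prime(max(cp - p, 1.0))
            bal = w_prime - (w_prime - bal) * math.exp(-dt / tau)
    return bal

# 60 s at 600 W (spends 12 kJ of W'), then 120 s easy riding at 200 W
trace = [600.0] * 60 + [200.0] * 120
print(w_prime_balance(trace))   # remaining W' balance in joules
```

A deeper recovery intensity (larger DCP) gives a smaller τW' and hence faster restoration of the balance, which is the relationship the study set out to refine.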

  2. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly-parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means their use at exascale comes with a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to NVIDIA Tesla and Sandy Bridge systems.

  3. System Analysis and Decision-Making During Synthesis of High-Performance Hybrid Boilers

    Science.gov (United States)

    Safin, T. R.; Konakhina, I. A.; Khamidullina, G. R.

    2017-09-01

    The decision-making analysis for the synthesis of high-performance hybrid boiler plants is based on the current philosophy of system analysis and synthesis of combined heat and power plants. Energetic and exergetic utilization are used as performance criteria.

  4. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  5. Application of high performance concrete in the pavement system : structural response of high performance concrete pavement : executive summary.

    Science.gov (United States)

    2002-01-01

    Rigid pavements make up a significant percentage of highway systems in the United States and abroad. Concrete pavements provide an economical and durable solution for highway systems, because the pavements last longer and require less maintenance. Re...

  6. An Embedded System for applying High Performance Computing in Educational Learning Activity

    Directory of Open Access Journals (Sweden)

    Irene Erlyn Wina Rachmawan

    2016-08-01

    Full Text Available HPC (High Performance Computing) has become more popular in the last few years. With its high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC in a university curriculum can consume considerable resources, because well-known HPC systems are built from personal computers or servers; using PCs as the practical modules requires great resources and space. This paper presents an innovative high performance computing cluster system to support educational learning activities in an HPC course that is small, low cost, and yet powerful enough. High performance computing is usually implemented as cluster computing and requires high-specification, expensive computers, which makes it inefficient to apply in educational activities such as classroom learning. Our proposed system is therefore built from inexpensive components, using embedded systems to make high performance computing applicable to learning in the class. Students are involved in the construction of the embedded system, build clusters from basic embedded and network components, benchmark performance, and implement a simple parallel case using the cluster. In this research we evaluated the embedded system against an i5 PC; the NAS benchmark results of our embedded system are similar to those of i5 PCs. We also conducted surveys of student learning satisfaction, which show that with the embedded system students are able to learn about HPC from building the system through to writing an application that uses the HPC system.

  7. Implementing a High Performance Work Place in the Distribution and Logistics Industry: Recommendations for Leadership & Team Member Development

    Science.gov (United States)

    McCann, Laura Harding

    2012-01-01

    Leadership development and employee engagement are two elements critical to the success of organizations. In response to growth opportunities, our Distribution and Logistics company set on a course to implement High Performance Work Place to meet the leadership and employee engagement needs, and to find methods for improving work processes. This…

  8. Links among high-performance work environment, service quality, and customer satisfaction: an extension to the healthcare sector.

    Science.gov (United States)

    Scotti, Dennis J; Harmon, Joel; Behson, Scott J

    2007-01-01

    Healthcare managers must deliver high-quality patient services that generate highly satisfied and loyal customers. In this article, we examine how a high-involvement approach to the work environment of healthcare employees may lead to exceptional service quality, satisfied patients, and ultimately to loyal customers. Specifically, we investigate the chain of events through which high-performance work systems (HPWS) and customer orientation influence employee and customer perceptions of service quality and patient satisfaction in a national sample of 113 Veterans Health Administration ambulatory care centers. We present a conceptual model for linking work environment to customer satisfaction and test this model using structural equations modeling. The results suggest that (1) HPWS is linked to employee perceptions of their ability to deliver high-quality customer service, both directly and through their perceptions of customer orientation; (2) employee perceptions of customer service are linked to customer perceptions of high-quality service; and (3) perceived service quality is linked with customer satisfaction. Theoretical and practical implications of our findings, including suggestions of how healthcare managers can implement changes to their work environments, are discussed.

  9. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.

  10. Cross-level effects of high-performance work practices on burnout: Two counteracting mediating mechanisms compared

    NARCIS (Netherlands)

    Voorde, F.C. van de; Kroon, B.; Veldhoven, M.J.P.M. van

    2009-01-01

    Purpose - The purpose of this paper is to explore the impact of management practices - specifically, high-performance work practices (HPWPs) - on employee burnout. Two potential mediating mechanisms that counterbalance each other in the development of burnout are compared: a critical mechanism that

  11. Lithium triborate laser vaporization of the prostate using the 120 W, high performance system laser: high performance all the way?

    Science.gov (United States)

    Hermanns, Thomas; Strebel, Daniel D; Hefermehl, Lukas J; Gross, Oliver; Mortezavi, Ashkan; Müller, Alexander; Eberli, Daniel; Müntener, Michael; Michel, Maurice S; Meier, Alexander H; Sulser, Tullio; Seifert, Hans-Helge

    2011-06-01

    Technical modifications of the 120 W lithium-triborate laser have been implemented to increase power output, and to prevent laser fiber degradation and loss of power output during laser vaporization of the prostate. However, visible alterations at the fiber tip and the subjective impression of decreasing ablative effectiveness during lithium-triborate laser vaporization indicate that delivering constantly high laser power remains a relevant problem. Thus, we evaluated the extent of laser fiber degradation and loss of power output during 120 W lithium-triborate laser vaporization of the prostate. We investigated 46 laser fibers during routine 120 W lithium-triborate laser vaporization in 35 patients with prostatic bladder outflow obstruction. Laser beam power was measured at baseline and after the application of each 25 kJ during laser vaporization. Fiber tips were microscopically examined after the procedure. Mild to moderate degradation at the emission window occurred in all fibers, associated with a loss of power output. A steep decrease to a median power output of 57.3% of baseline was detected after applying the first 25 kJ. Median power output at the end of the defined 275 kJ lifespan of the fibers was 48.8%. Despite technical refinements of the 120 W lithium-triborate laser, fiber degradation and significantly decreased power output are still detectable during the procedure. Laser fibers are not fully appropriate for the high power delivery of the new system. There is still potential for further improvement in the laser performance. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  12. PISA and High-Performing Education Systems: Explaining Singapore's Education Success

    Science.gov (United States)

    Deng, Zongyi; Gopinathan, S.

    2016-01-01

    Singapore's remarkable performance in Programme for International Student Assessment (PISA) has placed it among the world's high-performing education systems (HPES). In the literature on HPES, its "secret formula" for education success is explained in terms of teacher quality, school leadership, system characteristics and educational…

  13. IGUANA A high-performance 2D and 3D visualisation system

    CERN Document Server

    Alverson, G; Muzaffar, S; Osborne, I; Taylor, L; Tuura, L A

    2004-01-01

The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor.

  14. Building A High Performance Parallel File System Using Grid Datafarm and ROOT I/O

    CERN Document Server

    Morita, Y; Watase, Y; Tatebe, Osamu; Sekiguchi, S; Matsuoka, S; Soda, N; Dell'Acqua, A

    2003-01-01

The sheer amount of petabyte-scale data foreseen in the LHC experiments requires a careful consideration of the persistency design and the system design in world-wide distributed computing. Event parallelism of the HENP data analysis enables us to take maximum advantage of high performance cluster computing and networking when we keep the parallelism in the data processing phase, the data management phase, and the data transfer phase. A modular architecture of FADS/Goofy, a versatile detector simulation framework for Geant4, enables an easy choice of plug-in facilities for persistency technologies such as Objectivity/DB and ROOT I/O. The framework is designed to work naturally with the parallel file system of Grid Datafarm (Gfarm). FADS/Goofy is proven to generate 10^6 Geant4-simulated Atlas Mockup events using a 512 CPU PC cluster. The data in ROOT I/O files is replicated using the Gfarm file system. The histogram information is collected from the distributed ROOT files. During the data replication...

  15. IGUANA: a high-performance 2D and 3D visualisation system

    Energy Technology Data Exchange (ETDEWEB)

    Alverson, G. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Eulisse, G. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Muzaffar, S. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Osborne, I. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Taylor, L. [Department of Physics, Northeastern University, Boston, MA 02115 (United States)]. E-mail: lucas.taylor@cern.ch; Tuura, L.A. [Department of Physics, Northeastern University, Boston, MA 02115 (United States)

    2004-11-21

The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user.
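The automatic 2D views described above reduce to simple projections: an R/Z view maps each 3D point (x, y, z) to (z, r) with r = sqrt(x^2 + y^2), and an X/Y view drops z. A minimal sketch of the idea (the function names are ours, not IGUANA's API):

```python
import math

def rz_view(points_3d):
    """Project 3D detector points onto the R/Z plane.

    r is the transverse distance from the beam (z) axis; the sign of y is
    kept so the upper and lower halves of the detector stay distinguishable.
    """
    return [(z, math.copysign(math.hypot(x, y), y if y else 1.0))
            for (x, y, z) in points_3d]

def xy_view(points_3d):
    """Project 3D detector points onto the X/Y (transverse) plane."""
    return [(x, y) for (x, y, _z) in points_3d]

# Hypothetical hit positions (x, y, z) in cm:
hits = [(3.0, 4.0, 10.0), (0.0, -5.0, -2.0)]
print(rz_view(hits))  # [(10.0, 5.0), (-2.0, -5.0)]
```

The same projection applied to every node of a 3D scene graph is what lets a 2D display be generated "with little effort" from an existing 3D one.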

  16. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
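The partition-merge approach described above can be sketched outside MapReduce: partition both datasets into grid tiles (the map side), join within each tile, and merge tile results while removing pairs duplicated across tile borders (an illustrative sketch, not the authors' framework):

```python
from collections import defaultdict
from itertools import product

def tiles_for(box, tile):
    """Map step: assign a bounding box (minx, miny, maxx, maxy) to every
    grid tile it overlaps (objects crossing tile borders are replicated)."""
    minx, miny, maxx, maxy = box
    xs = range(int(minx // tile), int(maxx // tile) + 1)
    ys = range(int(miny // tile), int(maxy // tile) + 1)
    return product(xs, ys)

def intersects(a, b):
    """Axis-aligned bounding-box overlap test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(left, right, tile=10.0):
    """Partition-merge spatial join: group by tile, join inside each tile,
    then merge partial results, dropping duplicate cross-border pairs."""
    buckets = defaultdict(lambda: ([], []))
    for i, box in enumerate(left):
        for t in tiles_for(box, tile):
            buckets[t][0].append((i, box))
    for j, box in enumerate(right):
        for t in tiles_for(box, tile):
            buckets[t][1].append((j, box))
    result = set()  # merge step: a set removes duplicated pairs
    for ls, rs in buckets.values():
        for (i, a), (j, b) in product(ls, rs):
            if intersects(a, b):
                result.add((i, j))
    return result

print(spatial_join([(0, 0, 12, 2)], [(9, 0, 11, 1), (30, 30, 31, 31)]))
# {(0, 0)}
```

In a real MapReduce job the tile id would be the shuffle key, so each reducer sees exactly one tile's worth of objects from both sides.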

  17. Coal-fired high performance power generating system. Quarterly progress report

    Energy Technology Data Exchange (ETDEWEB)

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates < 25% NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures and even particulate impact on the combustor walls. When our model is applied to the long flame concept, it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach; for high nitrogen coals a rapid mixing, rich-lean, deep staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  18. Engineering Development of Coal-Fired High-Performance Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    York Tsuo

    2000-12-31

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, the University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS subsystems that are not commercial and are not being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately. This report addresses the areas of technical progress for this quarter. Details of the syngas cooler design are given in this report. Final construction work on the CFB pyrolyzer pilot plant started during this quarter. No experimental testing was performed during this quarter. The proposed test matrix for future CFB pyrolyzer tests is given in this report. Besides testing various fuels, bed temperature will be the primary test parameter.

  19. Engineering development of coal-fired high performance power systems, Phase II and III

    Energy Technology Data Exchange (ETDEWEB)

    None

    1999-01-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  20. Engineering development of coal-fired high performance power systems, Phase II and III

    Energy Technology Data Exchange (ETDEWEB)

    None

    1998-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustor; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  1. Engineering development of coal-fired high performance power systems, Phase II and III

    Energy Technology Data Exchange (ETDEWEB)

    None

    1999-04-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  2. High Performance Variable Speed Drive System and Generating System with Doubly Fed Machines

    Science.gov (United States)

    Tang, Yifan

    Doubly fed machines are another alternative for variable speed drive systems. The doubly fed machines, including the doubly fed induction machine, the self-cascaded induction machine and the doubly excited brushless reluctance machine, have several attractive advantages for variable speed drive applications, the most important one being the significant cost reduction that comes with a reduced power converter rating. With better understanding, improved machine design, flexible power converters and innovative controllers, the doubly fed machines could compete favorably for many applications, which may also include variable speed power generation. The goal of this research is to enhance the attractiveness of the doubly fed machines for both variable speed drive and variable speed generator applications. Recognizing that wind power is one of the favorable clean, renewable energy sources that can contribute to the solution of the energy and environment dilemma, a novel variable-speed constant-frequency wind power generating system is proposed. Variable speed operation improves the energy capturing capability of the wind turbine; the improvement can be further enhanced by effectively utilizing the doubly excited brushless reluctance machine in a slip power recovery configuration. For the doubly fed machines, a stator flux two-axis dynamic model is established, based on which a flexible active and reactive power control strategy can be developed. High performance operation of the drive and generating systems is obtained through advanced control methods, including stator field orientation control, fuzzy logic control and adaptive fuzzy control. System studies are pursued through unified modeling, computer simulation, stability analysis and power flow analysis of the complete drive system or generating system with the machine, the converter and the control. Laboratory implementations and test results with a digital signal processor system are also presented.
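The reduced power converter rating claimed above follows from the standard power balance of a doubly fed machine: neglecting losses, a machine running at slip s splits power so that the rotor-side converter handles only the slip power,

```latex
P_r \approx -s\,P_s, \qquad P_{\mathrm{mech}} \approx (1 - s)\,P_s
```

so for a typical speed range of about ±30% around synchronous speed (|s| ≤ 0.3), the converter need only be rated at roughly 30% of the machine rating, which is the main cost advantage over drives with fully rated converters.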

  3. Work function tuning for high-performance solution-processed organic photodetectors with inverted structure.

    Science.gov (United States)

    Saracco, Emeline; Bouthinon, Benjamin; Verilhac, Jean-Marie; Celle, Caroline; Chevalier, Nicolas; Mariolle, Denis; Dhez, Olivier; Simonato, Jean-Pierre

    2013-12-03

    Organic photodetectors with inverted structure are fabricated by solution process techniques. A very thin interfacing layer of polyethyleneimine leads to a homogeneous interface with low work function. The devices exhibit excellent performance, in particular in terms of low dark current density, wide range linearity, high detectivity, and remarkable stability in ambient air without encapsulation. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Possibility of high performance quantum computation by superluminal evanescent photons in living systems.

    Science.gov (United States)

    Musha, Takaaki

    2009-06-01

    Penrose and Hameroff have suggested that microtubules in living systems function as quantum computers by utilizing evanescent photons. On the basis of the theorem that the evanescent photon is a superluminal particle, the possibility of high performance computation in living systems has been studied. The theoretical analysis shows that the biological brain could achieve large-scale quantum bit computation at room temperature, compared with conventional processors.

  5. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase of a heterogeneous high-performance system for computational and computer science. The system supports the High Performance Computing (HPC) course taught in the department of computer science, so as to attract more graduate students from many disciplines where their research... (Contract number W911NF-15-1-0023.)

  6. A GPU-based real time high performance computing service in a fast plant system controller prototype for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Nieto, J., E-mail: jnieto@sec.upm.es [Grupo de Investigacion en Instrumentacion y Acustica Aplicada. Universidad Politecnica de Madrid, Crta. Valencia Km-7, Madrid 28031 Spain (Spain); Arcas, G. de; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada. Universidad Politecnica de Madrid, Crta. Valencia Km-7, Madrid 28031 Spain (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Lopez, J.M.; Barrera, E. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada. Universidad Politecnica de Madrid, Crta. Valencia Km-7, Madrid 28031 Spain (Spain); Castro, R. [Asociacion EURATOM/CIEMAT para Fusion, Madrid (Spain); Sanz, D. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada. Universidad Politecnica de Madrid, Crta. Valencia Km-7, Madrid 28031 Spain (Spain); Utzel, N.; Makijarvi, P.; Zabeo, L. [ITER Organization, CS 90 046, 13067 St. Paul lez Durance Cedex (France)

    2012-12-15

    Highlights: • Implementation of a fast plant system controller (FPSC) for ITER CODAC. • GPU-based real time high performance computing service. • Performance evaluation with respect to other solutions based on multi-core processors. - Abstract: EURATOM/CIEMAT and the Technical University of Madrid (UPM) are involved in the development of an FPSC (fast plant system control) prototype for ITER based on the PXIe form factor. The FPSC architecture includes a GPU-based real time high performance computing service which has been integrated under EPICS (Experimental Physics and Industrial Control System). In this work we present the design of this service and its performance evaluation with respect to other solutions based on multi-core processors. Plasma pre-processing algorithms, illustrative of the type of tasks that could be required for both control and diagnostics, are used during the performance evaluation.

  7. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    Science.gov (United States)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls among the elderly has been increasing as society ages. The main factor is a deteriorating sense of balance due to declining physical performance. Another major factor is that the elderly tend to walk bowlegged, so the center of gravity of the body tends to swing from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system to treat the gap in the center of balance. We designed High-Performance Shoes that show the status of a person's balance while walking, and provided walking assistance through an insole whose stiffness, corresponding to the pressure distribution of the human sole, can be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole distribution patterns and corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.
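The balance status reported by such shoes is typically derived from the center of pressure (CoP) of the measured sole distribution; a minimal sketch under an assumed sensor-grid layout (the names and layout are ours, not the authors'):

```python
def center_of_pressure(readings):
    """Compute the center of pressure (CoP) from insole sensors.

    `readings` is a list of ((x, y), force) pairs from a hypothetical
    pressure-sensor grid; the CoP is the force-weighted mean position.
    """
    total = sum(f for _, f in readings)
    if total == 0:
        raise ValueError("no load on the insole")
    x = sum(pos[0] * f for pos, f in readings) / total
    y = sum(pos[1] * f for pos, f in readings) / total
    return x, y

def lateral_sway(cop_trace):
    """Peak-to-peak medial-lateral excursion of the CoP over one stride:
    a large value corresponds to the side-to-side swing described above."""
    xs = [x for x, _ in cop_trace]
    return max(xs) - min(xs)

# Two hypothetical sensors at x = 0 and x = 2 under one foot:
step = [((0.0, 0.0), 10.0), ((2.0, 0.0), 30.0)]
print(center_of_pressure(step))  # (1.5, 0.0)
```

Tracking the CoP over successive steps gives the balance trace the shoes display, and comparing it to a reference pattern flags the lateral swing associated with fall risk.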

  8. Coal-fired high performance power generating system. Draft quarterly progress report, January 1--March 31, 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates ≤ 25% NSPS; cost of electricity 10% lower; coal ≥ 65% of heat input; all solid wastes benign. A crucial aspect of the authors' design is the integration of the gas turbine requirements with the HITAF output and steam cycle requirements. In order to take full advantage of modern highly efficient aeroderivative gas turbines, they have carried out a large number of cycle calculations to optimize their commercial plant designs for both greenfield and repowering applications.

  9. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals

    Science.gov (United States)

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-02-01

    A high-performance and cost-effective laser protection system is of crucial importance, as the rapid advance of lasers in military and civilian fields can lead to severe damage to human eyes and sensitive optical devices. Such protection, however, has been hindered by angle-dependent protective effects and complex preparation processes. Here we demonstrate that angle independence, good processibility, wavelength tunability, high optical density and good visibility can be achieved simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal (CLC) matrix. More significantly, an unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with the incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers.

  10. Demonstration and Validation of a High-Performance Floor-Sealant System to Reduce Concrete Degradation

    Science.gov (United States)

    2015-05-01

    The A-A-52624 Type 1 Recycled Antifreeze had no effect on the concrete or sealant system; all the other test chemicals penetrated the sealant. Final Report on Project F10-AR02, Construction Engineering Research Laboratory; Clint A. Wilson and Susan A. Drozdz.

  11. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian; Johnston, William; Crowley, Brian; Hoo, Gary; Brooks, Chris; Gunter, Dan

    1999-12-23

    The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
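The core of the methodology, precision event logs at instrumentation points, can be illustrated in a few lines: matched start/end records with high-resolution timestamps, correlated by a common field, are enough to reconstruct per-stage end-to-end latency (the record layout below is our own illustration, not the actual NetLogger log format):

```python
import io
import json
import socket
import time

def log_event(stream, event, **fields):
    """Append one precision-timestamped event record to a log stream.

    Each record carries a wall-clock timestamp, the emitting host, an
    event name, and arbitrary correlation fields (here `block`), so that
    matching start/end events can later be paired across hosts.
    """
    rec = {"ts": time.time(), "host": socket.gethostname(), "event": event}
    rec.update(fields)
    stream.write(json.dumps(rec) + "\n")

# Instrument both ends of one pipeline stage:
log = io.StringIO()
log_event(log, "transfer.start", block=7)
log_event(log, "transfer.end", block=7)

# Post-processing: pair the records and compute the stage latency.
events = [json.loads(line) for line in log.getvalue().splitlines()]
latency = events[1]["ts"] - events[0]["ts"]  # seconds spent in this stage
```

With events emitted at every host and application layer, the visualization tools described above can line the records up on a common time axis to expose exactly where a transfer stalls.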

  12. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    Science.gov (United States)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range from a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance measured during end-to-end system testing.

  13. Systems and methods for advanced ultra-high-performance InP solar cells

    Science.gov (United States)

    Wanlass, Mark

    2017-03-07

    Systems and Methods for Advanced Ultra-High-Performance InP Solar Cells are provided. In one embodiment, an InP photovoltaic device comprises: a p-n junction absorber layer comprising at least one InP layer; a front surface confinement layer; and a back surface confinement layer; wherein either the front surface confinement layer or the back surface confinement layer forms part of a High-Low (HL) doping architecture; and wherein either the front surface confinement layer or the back surface confinement layer forms part of a heterointerface system architecture.

  14. High-Performance Constant Power Generation in Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    An advanced power control strategy that limits the maximum feed-in power of PV systems has been proposed; it ensures a fast and smooth transition between Maximum Power Point Tracking (MPPT) and Constant Power Generation (CPG). Regardless of the solar irradiance level, high-performance and stable operation is always achieved by the proposed control strategy. It can regulate the PV output power according to any set-point and force the PV system to operate at the left side of the maximum power point without stability problems. Experimental results have verified the effectiveness of the proposed CPG control strategy.
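The transition logic described above can be sketched as a perturb-and-observe loop with a power clamp; a toy single-step version under our own naming (not the authors' controller):

```python
def cpg_step(v, p, v_prev, p_prev, p_limit, dv=1.0):
    """One perturb-and-observe step with a Constant Power Generation clamp.

    Below the limit, behave as ordinary MPPT (keep perturbing the operating
    voltage toward higher power); above it, step the voltage down, i.e.
    toward the left side of the maximum power point, until the feed-in
    power falls back to the set-point. A sketch of the idea only.
    """
    if p > p_limit:                 # CPG region: back off the power
        return v - dv
    if p >= p_prev:                 # MPPT region: keep perturbing same way
        return v + dv if v >= v_prev else v - dv
    return v - dv if v >= v_prev else v + dv

# Hypothetical samples: power 950 W under a 1000 W limit keeps climbing,
print(cpg_step(v=300.0, p=950.0, v_prev=299.0, p_prev=940.0,
               p_limit=1000.0))    # 301.0
# while exceeding the limit forces the voltage down toward the set-point:
print(cpg_step(301.0, 1020.0, 300.0, 950.0, 1000.0))  # 300.0
```

Staying on the left (lower-voltage) side of the maximum power point in the CPG region is what keeps the clamp stable: there, output power decreases monotonically as voltage decreases, so the loop cannot overshoot back above the limit.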

  15. Thermochemistry: the key to minerals separation from biomass for fuel use in high performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Overend, R.P. [National Renewable Energy Laboratory, Golden, CO (United States)

    1996-12-31

    Biomass use in high efficiency thermal electricity generation is limited not by the properties of the organic component of biomass, but by the behavior of the associated mineral matter at high temperatures. On a moisture- and ash-free basis, biomass, which has an average formula of CH1.4O0.6N0.1, has a relatively low heating value of 18.6 GJ/t. This alone would not limit its use in high efficiency combustion systems, because temperatures high enough for high Carnot cycle efficiencies could still be reached. These high temperatures cannot be reached, however, because of the fouling and slagging propensities of the minerals in biomass. The mineral composition is a function of soils and the growth habit of the biomass; the most important element is potassium, which either alone or in combination with silica forms the basis of fouling and slagging behavior. Growing plants selectively concentrate potassium in their cells, which along with nitrogen and phosphorus is a key macronutrient for plant growth. Annual plants tend to have very high potassium contents, whereas wood biomass exclusive of the living cambial layer (i.e. minus the bark, small branches, and leaves) has minimal potassium and other nutrient content. Under combustion conditions the potassium is mobilized, especially in the presence of chlorine, at relatively low temperatures; it fouls heat transfer surfaces and corrodes the high performance metals used, for example, in the high temperature sections of burners and gas turbines. Recent work has demonstrated the phenomenology of ash fouling, mainly by the potassium component of biomass, and has identified the key species, such as KOH, KCl, and sulphates, that are involved in potassium transport at temperatures <800 deg C. Techniques that separate the mineral matter from the fuel components (carbon and hydrogen) at low temperatures reduce or limit the alkali metal transport phenomena and result in very high efficiency combustion.

  16. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Teng [Auburn University, Auburn, Alabama; Oral, H Sarp [ORNL; Wang, Yandong [Auburn University, Auburn, Alabama; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Yu, Weikuan [Auburn University, Auburn, Alabama

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a system that temporarily buffers bursty I/O and gradually flushes datasets to long-term parallel file systems is imperative. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5× on leadership computer systems.
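
The buffer-then-flush pattern that BurstMem applies at scale can be sketched as a minimal producer/consumer buffer. The class and method names below are illustrative, not BurstMem's actual API: writes land in fast intermediate storage and a background thread drains them to a slower backing store.

```python
import queue
import threading

class BurstBuffer:
    """Minimal burst-buffer sketch: absorb bursty writes quickly,
    drain them to a slow backing store in the background."""

    def __init__(self, backing_store):
        self.pending = queue.Queue()
        self.backing = backing_store  # stand-in for a parallel file system
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, data):
        # fast path: enqueue and return immediately
        self.pending.put(data)

    def _drain(self):
        # background path: gradually flush to long-term storage
        while True:
            item = self.pending.get()
            self.backing.append(item)
            self.pending.task_done()

    def flush(self):
        # block until everything buffered so far has been flushed
        self.pending.join()
```

A compute process pays only the enqueue cost during an I/O burst; `flush()` provides a synchronization point before the job exits.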

  17. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yan [Northwesten University

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • are mostly host-based and not scalable to high-performance networks; • are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm differs significantly from existing IDSes in the following features (research thrusts): • online traffic recording and analysis on high-speed networks; • online adaptive flow-level anomaly/intrusion detection and mitigation; • an integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible; beyond satisfying the pre-set goals, we exceeded them significantly (see details in the next section). Overall, our project produced 23 publications (2 book chapters, 6 journal papers, and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.

  18. High-performance electronics for time-of-flight PET systems.

    Science.gov (United States)

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively.
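
Identifying coincident events by digitally comparing time stamps, as described above, amounts to a merge over two sorted stamp streams. A sketch of the idea, with an assumed (illustrative) coincidence window in picoseconds:

```python
def find_coincidences(stamps_a, stamps_b, window_ps=500):
    """Pair events from two sorted time-stamp lists (in ps) whose
    stamps differ by less than the coincidence window."""
    pairs = []
    i = j = 0
    while i < len(stamps_a) and j < len(stamps_b):
        dt = stamps_a[i] - stamps_b[j]
        if abs(dt) < window_ps:          # coincident pair found
            pairs.append((stamps_a[i], stamps_b[j]))
            i += 1
            j += 1
        elif dt < 0:                     # detector A event is earlier: advance A
            i += 1
        else:                            # detector B event is earlier: advance B
            j += 1
    return pairs
```

Because both streams are sorted, the merge is linear in the number of events, which is what makes digital timestamp comparison practical at high count rates.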

  19. A high-performance stand-alone solar PV power system for LED lighting

    KAUST Repository

    Huang, B. J.

    2010-06-01

    The present study developed a high-performance solar PV power technology for the LED lighting of a solar home system. The nMPPO (near-Maximum-Power-Point-Operation) design is employed to eliminate the MPPT. A feedback control system using the pulse width modulation (PWM) technique was developed for battery charging control, which can increase the charging capacity by 78%. For high-efficiency lighting, the LED is driven directly by the battery using a PWM discharge control, eliminating a DC/DC converter. Two solar-powered LED lighting systems (50 W and 100 W LED) were built. Long-term outdoor tests have shown that the loss-of-load probability for the full-night lighting requirement is zero for the 50 W LED and 3.6% for the 100 W LED. © 2010 IEEE.

  20. A High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using Particle System

    Science.gov (United States)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of the rainfall-runoff process is essential for disaster emergency response and sustainable development. One common disadvantage of existing conceptual hydrological models is that they are highly dependent upon specific spatial-temporal contexts. Meanwhile, due to the inter-dependence of adjacent flow paths, it is still difficult for RS- or GIS-supported distributed hydrological models to achieve high performance in real-world applications. As an attempt to improve the efficiency of those models, this study presents a high-performance rainfall-runoff simulation framework based on a flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent raindrop particles. A separate particle system, representing surface runoff, models the precipitation process and simulates surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The speedup experiment shows that the proposed framework can significantly improve simulation performance simply by adding independent processors. By separating the catchment elements from the accumulated water, this study provides an extensible solution for improving existing distributed hydrological models. Further, a parallel modeling and simulation platform needs to be developed and validated before being applied to monitoring real-world hydrologic processes.

  1. High-Performance Carbon Nanotube Complementary Electronics and Integrated Sensor Systems on Ultrathin Plastic Foil.

    Science.gov (United States)

    Zhang, Heng; Xiang, Li; Yang, Yingjun; Xiao, Mengmeng; Han, Jie; Ding, Li; Zhang, Zhiyong; Hu, Youfan; Peng, Lian-Mao

    2018-02-01

    The long-standing absence of high-performance complementary metal-oxide-semiconductor (CMOS) technology on plastics is a non-negligible obstacle to the applications of flexible electronics with advanced functions, such as continuous health monitoring with in situ signal processing and wireless communication capabilities, in which high speed, low power consumption, and complex functionality are desired for integrated circuits (ICs). Here, we report the implementation of carbon nanotube (CNT)-based high-performance CMOS technology and its application for signal processing in an integrated sensor system for human body monitoring on ultrathin plastic foil with a thickness of 2.5 μm. The performances of both the p- and n-type CNT field-effect transistors (FETs) are excellent and symmetric on plastic foil with a low operation voltage of 2 V: width-normalized transconductances (gm/W) as high as 4.69 μS/μm and 5.45 μS/μm, width-normalized on-state currents reaching 5.85 μA/μm and 6.05 μA/μm, and mobilities up to 80.26 cm²·V⁻¹·s⁻¹ and 97.09 cm²·V⁻¹·s⁻¹, respectively, together with a current on/off ratio of approximately 10⁵. The devices were mechanically robust, withstanding a curvature radius down to 124 μm. Utilizing these transistors, various high-performance CMOS digital ICs with rail-to-rail output and a ring oscillator on plastics with an oscillation frequency of 5 MHz were demonstrated. Furthermore, an ultrathin skin-mounted humidity sensor system with in situ frequency modulation signal processing capability was realized to monitor human body sweating.

  2. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    Science.gov (United States)

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software that is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with relatively low associated memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors. Copyright © 2012 Wiley Periodicals, Inc.

  3. Design Considerations for Scalable High-Performance Vision Systems Embedded in Industrial Print Inspection Machines

    Directory of Open Access Journals (Sweden)

    Rössler Peter

    2007-01-01

    Full Text Available This paper describes the design of a scalable high-performance vision system which is used in the application area of optical print inspection. The system is able to process hundreds of megabytes of image data per second coming from several high-speed/high-resolution cameras. Due to performance requirements, some functionality has been implemented on dedicated hardware based on a field programmable gate array (FPGA, which is coupled to a high-end digital signal processor (DSP. The paper discusses design considerations like partitioning of image processing algorithms between hardware and software. The main chapters focus on functionality implemented on the FPGA, including low-level image processing algorithms (flat-field correction, image pyramid generation, neighborhood operations and advanced processing units (programmable arithmetic unit, geometry unit. Verification issues for the complex system are also addressed. The paper concludes with a summary of the FPGA resource usage and some performance results.
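
Flat-field correction, listed above among the FPGA's low-level image processing stages, is a standard per-pixel normalization. A software sketch of the usual formula is shown below; the dark and flat reference frames, and the rescaling to mean gain, are the conventional textbook scheme, not necessarily this system's exact implementation:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard flat-field correction: subtract the fixed-pattern
    offset (dark frame), divide out per-pixel gain variation
    (flat frame minus dark frame), and rescale to the mean gain."""
    gain = flat.astype(float) - dark
    gain = np.where(gain == 0, 1.0, gain)  # guard against dead pixels
    return (raw - dark) / gain * gain.mean()
```

In a print-inspection pipeline this runs per camera line at full data rate, which is why it is a natural candidate for the FPGA rather than the DSP.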

  4. Kemari: A Portable High Performance Fortran System for Distributed Memory Parallel Processors

    Directory of Open Access Journals (Sweden)

    T. Kamachi

    1997-01-01

    Full Text Available We have developed a compilation system which extends High Performance Fortran (HPF in various aspects. We support the parallelization of well-structured problems with loop distribution and alignment directives similar to HPF's data distribution directives. Such directives give both additional control to the user and simplify the compilation process. For the support of unstructured problems, we provide directives for dynamic data distribution through user-defined mappings. The compiler also allows integration of message-passing interface (MPI primitives. The system is part of a complete programming environment which also comprises a parallel debugger and a performance monitor and analyzer. After an overview of the compiler, we describe the language extensions and related compilation mechanisms in detail. Performance measurements demonstrate the compiler's applicability to a variety of application classes.

  5. High-performance sensorless nonlinear power control of a flywheel energy storage system

    Energy Technology Data Exchange (ETDEWEB)

    Amodeo, S.J.; Chiacchiarini, H.G.; Solsona, J.A.; Busada, C.A. [Departamento de Ingenieria Electrica y de Computadoras, Instituto de Investigaciones en Ingenieria Electrica "Alfredo Desages", Universidad Nacional del Sur y CONICET, Avda. Alem 1253 (B8000CPB) Bahía Blanca (Argentina)

    2009-07-15

    Flywheel energy storage systems (FESS) can be used to store and release energy in high power pulsed systems. Based on the use of a homopolar synchronous machine in a FESS, a high performance model-based power flow control law is developed using the feedback linearization methodology. This law is based on the voltage space vector reference frame machine model. To reduce the magnetic losses, a pulse amplitude modulation driver for the armature is better suited. The restrictions in amplitude and phase imposed by the driver are also included. A full order Luenberger observer for the torque angle and rotor speed is developed to implement a sensorless control strategy. Simulation results are presented to illustrate the performance. (author)

  6. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles

    Science.gov (United States)

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles was proposed. We synthesized a kind of hexagonal monodisperse β-NaYF4:Yb3+,Er3+,Tm3+ upconversion nanoparticle and manipulated the intensity ratio of red emission (at 653 nm) and green emission (at 523 and 541 nm) to around 2 : 1, in order to match well with the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect of the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows obvious switching behavior with high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, an important anion in environmental and physiological systems, which could also reduce Prussian blue to Prussian white nanoparticles leading to a decrease of the absorption spectrum, was chosen as the target. We were able to determine the concentration of sulfite in aqueous solution with a low detection limit and a broad linear relationship.

  7. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    The Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources, and supercomputers. In this paper, a brief description of the Higgs boson physics and the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  8. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  9. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    Science.gov (United States)

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
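
The AWGR wavelength-routing property that the switch exploits can be stated in a few lines. Under one common sign convention (an assumption; conventions vary by device), input port i on wavelength index w exits at port (i + w) mod N, so a sender resolves contention by choosing a wavelength rather than by buffering in time:

```python
def awgr_output_port(in_port, wavelength, n_ports):
    # Cyclic routing of an N x N arrayed waveguide grating router
    # (one common convention; actual devices may differ in sign/offset).
    return (in_port + wavelength) % n_ports

def wavelength_for(src, dst, n_ports):
    # Wavelength-domain contention resolution: the unique wavelength
    # index that routes src -> dst through the passive AWGR.
    return (dst - src) % n_ports
```

Because each source-destination pair maps to a distinct wavelength on each input, N simultaneous non-conflicting paths can coexist through a single passive device.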

  10. The impact of very high performance integrated circuits on avionics system readiness

    Science.gov (United States)

    Strull, G.

    1985-08-01

    Very high performance integrated circuits (VHPIC) represent more than an integrated circuit technology advance: VHPIC really represents a new systems/technology culture. With a philosophy of top-down design and bottom-up build, a vehicle is provided to avoid the rapid obsolescence so prevalent in the fast-moving integrated circuit industry. However, to successfully and effectively design advanced systems in this manner, a design methodology is required that adequately addresses the challenge. Since everything from chip definition through application analysis is interactive with everything else, the challenge is to adequately keep track of all the parameters and their relationships. The methodology by which design and analysis are accomplished is discussed. The starting point is the system architecture and its application software, from which the partitioning of the system into appropriate modules can be derived; from this, an idea of the integrated circuits needed can be determined. The elements of system readiness are described: design, implementation, insertion, maintenance, and Preplanned Product Improvement.

  11. Enhanced Central System of the Traversing Rod for High-Performance Rotor Spinning Machines

    Directory of Open Access Journals (Sweden)

    Valtera Jan

    2017-03-01

    Full Text Available The paper deals with the improvement of the central traversing system on rotor spinning machines, where rectilinear motion with a variable stroke is used. A new system of traversing rod with an implemented set of magnetic-mechanical energy accumulators is described. A mathematical model of this system is analysed in MSC.Software Adams/View and verified by experimental measurement on a real-length testing rig. The analysis results prove the enhancement of the devised traversing system, where the overall dynamic force is reduced considerably. At the same time, the precision of the traversing movement over the machine length is increased. This makes it possible to increase the machine operating speed while satisfying both the maximum tensile strength of the traversing rod and output bobbin size standards. The usability of the developed mathematical model for determining the optimal number and distribution of accumulators over a traversing rod of optional parameters is proved. The potential of the devised system for high-performance rotor spinning machines with a longer traversing rod is also discussed.

  12. High performance mixed optical CDMA system using ZCC code and multiband OFDM

    Science.gov (United States)

    Nawawi, N. M.; Anuar, M. S.; Junita, M. N.; Rashidi, C. B. M.

    2017-11-01

    In this paper, we propose a high performance network design based on a mixed optical Code Division Multiple Access (CDMA) system using a Zero Cross Correlation (ZCC) code and multiband Orthogonal Frequency Division Multiplexing (OFDM), called catenated OFDM. In addition, we investigate the related parameters: effective power, number of users, number of bands, code length, and code weight. We then theoretically analyzed the system performance comprehensively while considering up to five OFDM bands. The feasibility of the proposed system architecture is verified via numerical analysis. The results demonstrate that the developed modulation solution can significantly enhance the total number of users, improving it by up to 80% for five catenated bands compared to a traditional optical CDMA system, with a code length of 80, transmitted at 622 Mbps. It is also demonstrated that the BER performance strongly depends on the code weight, especially with fewer users. As the code weight increases, the BER performance improves.

  13. Flexible and biocompatible high-performance solid-state micro-battery for implantable orthodontic system

    KAUST Repository

    Kutbee, Arwa T.

    2017-09-25

    To augment the quality of our life, a fully compliant personalized advanced health-care electronic system is pivotal. One of the major requirements to implement such systems is a physically flexible, high-performance, biocompatible energy storage device (battery). However, the status-quo options do not match all of these attributes simultaneously, and we also lack an effective strategy to integrate them into complex architectures such as the orthodontic domain in the human body. Here we show a physically compliant lithium-ion micro-battery (236 μg) with an unprecedented volumetric energy (the ratio of energy to device geometrical size) of 200 mWh/cm³ after 120 cycles of continuous operation. A 90% viability test result confirmed the battery's biocompatibility. We also show seamless integration of the developed battery in an optoelectronic system embedded in a three-dimensionally printed smart dental brace. We foresee the resultant orthodontic system as a personalized advanced health-care application, which could serve in faster bone regeneration and enhanced enamel health-care protection, subsequently reducing overall health-care costs.

  14. Engineering development of coal-fired high performance power systems, Phase II and Phase III. Quarter progress report, April 1, 1996--June 30, 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    Work is presented on the development of a coal-fired high performance power generation system by the year 2000. This report describes the design of the air heater, duct heater, system controls, slag viscosity, and design of a quench zone.

  15. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial for developers and software architects who are designing systems to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  16. ExaGeoStat: A High Performance Unified Framework for Geostatistics on Manycore Systems

    KAUST Repository

    Abdulah, Sameh

    2017-08-09

    We present ExaGeoStat, a high performance framework for geospatial statistics in climate and environment modeling. In contrast to simulation based on partial differential equations derived from first-principles modeling, ExaGeoStat employs a statistical model based on the evaluation of the Gaussian log-likelihood function, which operates on a large dense covariance matrix. Generated by the parametrizable Matérn covariance function, the resulting matrix is symmetric and positive definite. The computational tasks involved in evaluating the Gaussian log-likelihood function become daunting as the number n of geographical locations grows, since O(n²) storage and O(n³) operations are required. While many approximation methods have been devised on the statistical modeling side to ameliorate these polynomial complexities, we are interested here in the complementary approach of evaluating the exact algebraic result by exploiting advances in solution algorithms and many-core computer architectures. Using state-of-the-art high performance dense linear algebra libraries associated with various leading-edge parallel architectures (Intel KNLs, NVIDIA GPUs, and distributed-memory systems), ExaGeoStat raises the game for statistical applications from climate and environmental science. ExaGeoStat provides a reference evaluation of statistical parameters with which to assess the validity of the various approximation-based approaches. The framework takes a first step in the merger of large-scale data analytics and extreme computing for geospatial statistical applications, to be followed by additional complexity-reducing improvements on the solver side that can be implemented under the same interface. Thus, a single uncompromised statistical model can ultimately be executed in a wide variety of emerging exascale environments.
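
The exact evaluation ExaGeoStat performs at scale reduces, at its core, to a dense Cholesky-based Gaussian log-likelihood. A small NumPy/SciPy sketch of that computation follows, using the exponential covariance (Matérn with smoothness 1/2) as an illustrative special case; the parameter names are ours, not ExaGeoStat's:

```python
import numpy as np
from scipy.spatial.distance import cdist

def exp_cov(D, sigma2, ell):
    # Matérn covariance with smoothness nu = 1/2 (exponential kernel)
    return sigma2 * np.exp(-D / ell)

def gaussian_loglik(z, locs, sigma2, ell, nugget=1e-8):
    """Exact Gaussian log-likelihood: O(n^2) storage for the dense
    covariance matrix, O(n^3) work for the Cholesky factorization."""
    n = len(z)
    C = exp_cov(cdist(locs, locs), sigma2, ell) + nugget * np.eye(n)
    L = np.linalg.cholesky(C)        # C = L L^T
    alpha = np.linalg.solve(L, z)    # so alpha @ alpha = z^T C^-1 z
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (alpha @ alpha + logdet + n * np.log(2.0 * np.pi))
```

The Cholesky factorization is exactly the dense kernel that tile-based linear algebra libraries accelerate on many-core and distributed architectures.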

  17. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location-based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS, a scalable and high performance spatial data warehousing system for running large-scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  18. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location-based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS, a scalable and high performance spatial data warehousing system for running large-scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
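
The spatial partitioning at the heart of the Hadoop-GIS approach, mapping each object to a grid tile so that tiles can be processed by independent MapReduce tasks, can be illustrated in a few lines. The uniform tile-key scheme below is a simplification for illustration, not RESQUE's actual partitioner:

```python
from collections import defaultdict

def grid_partition(points, cell_size):
    """Map each point to a uniform grid tile; each tile's bucket can
    then be processed by an independent parallel task."""
    tiles = defaultdict(list)
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        tiles[key].append((x, y))
    return dict(tiles)
```

Objects straddling tile boundaries are the "boundary objects" the abstract mentions; a real system must replicate or post-process them so results remain correct.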

  19. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    Science.gov (United States)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as the number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The resulting speed-ups are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of the two improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily adopted and are available for download as open source software at http://mutil.sourceforge.net.
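The hash-tree idea in the last sentence can be sketched as a toy two-level tree (illustrative only; this is not msum's actual chunking or digest layout): each fixed-size chunk is hashed independently, so the leaf hashes can be computed by different threads or nodes, and a final cheap pass combines the leaf digests into one root digest.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # tiny chunk size for illustration; real tools use megabyte-scale chunks

def tree_hash(data: bytes) -> str:
    """Hash fixed-size chunks independently, then fold the leaf digests into a
    single root digest. Each leaf depends only on its own chunk, so the leaf
    level parallelizes across threads (or, in principle, across nodes)."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    with ThreadPoolExecutor() as pool:
        leaves = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    root = hashlib.md5()
    for leaf in leaves:
        root.update(leaf)
    return root.hexdigest()

# Deterministic: two nodes hashing the same data agree on the root...
assert tree_hash(b"abcdefgh") == tree_hash(b"abcdefgh")
# ...but the root is NOT the plain md5 of the whole stream, so both ends
# of a verified copy must use the same tree parameters.
assert tree_hash(b"abcdefgh") != hashlib.md5(b"abcdefgh").hexdigest()
```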

  20. A Low-Cost, High-Performance System for Fluorescence Lateral Flow Assays

    Science.gov (United States)

    Lee, Linda G.; Nordman, Eric S.; Johnson, Martin D.; Oldham, Mark F.

    2013-01-01

    We demonstrate a fluorescence lateral flow system that has excellent sensitivity and wide dynamic range. The illumination system utilizes an LED, plastic lenses, and plastic and colored glass filters for the excitation and emission light. Images are collected on an iPhone 4. Several fluorescent dyes with long Stokes shifts were evaluated for their signal and nonspecific binding in lateral flow. A wide range of values for the ratio of signal to nonspecific binding was found, from 50 for R-phycoerythrin (R-PE) to 0.15 for Brilliant Violet 605. The long Stokes shift of R-PE allowed the use of inexpensive plastic filters rather than costly interference filters to block the LED light. Fluorescence detection with R-PE and absorbance detection with colloidal gold were directly compared in lateral flow using biotinylated bovine serum albumin (BSA) as the analyte. Fluorescence provided linear data over a range of 0.4–4,000 ng/mL with a 1,000-fold signal change while colloidal gold provided non-linear data over a range of 16–4,000 ng/mL with a 10-fold signal change. A comparison using human chorionic gonadotropin (hCG) as the analyte showed a similar advantage in the fluorescent system. We believe our inexpensive yet high-performance platform will be useful for providing quantitative and sensitive detection in a point-of-care setting. PMID:25586412

  1. A Low-Cost, High-Performance System for Fluorescence Lateral Flow Assays

    Directory of Open Access Journals (Sweden)

    Linda G. Lee

    2013-10-01

    Full Text Available We demonstrate a fluorescence lateral flow system that has excellent sensitivity and wide dynamic range. The illumination system utilizes an LED, plastic lenses, and plastic and colored glass filters for the excitation and emission light. Images are collected on an iPhone 4. Several fluorescent dyes with long Stokes shifts were evaluated for their signal and nonspecific binding in lateral flow. A wide range of values for the ratio of signal to nonspecific binding was found, from 50 for R-phycoerythrin (R-PE) to 0.15 for Brilliant Violet 605. The long Stokes shift of R-PE allowed the use of inexpensive plastic filters rather than costly interference filters to block the LED light. Fluorescence detection with R-PE and absorbance detection with colloidal gold were directly compared in lateral flow using biotinylated bovine serum albumin (BSA) as the analyte. Fluorescence provided linear data over a range of 0.4–4,000 ng/mL with a 1,000-fold signal change while colloidal gold provided non-linear data over a range of 16–4,000 ng/mL with a 10-fold signal change. A comparison using human chorionic gonadotropin (hCG) as the analyte showed a similar advantage in the fluorescent system. We believe our inexpensive yet high-performance platform will be useful for providing quantitative and sensitive detection in a point-of-care setting.

  2. Optimized Architectural Approaches in Hardware and Software Enabling Very High Performance Shared Storage Systems

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    There are issues encountered in high performance storage systems that normally lead to compromises in architecture. Compute clusters tend to have compute phases followed by an I/O phase that must move data from the entire cluster in one operation. That data may then be shared by a large number of clients creating unpredictable read and write patterns. In some cases the aggregate performance of a server cluster must exceed 100 GB/s to minimize the time required for the I/O cycle thus maximizing compute availability. Accessing the same content from multiple points in a shared file system leads to the classical problems of data "hot spots" on the disk drive side and access collisions on the data connectivity side. The traditional method for increasing apparent bandwidth usually includes data replication which is costly in both storage and management. Scaling a model that includes replicated data presents additional management challenges as capacity and bandwidth expand asymmetrically while the system is scaled. ...

  3. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    Science.gov (United States)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, this paper proposes a novel data management technique called the negative database (ND) and uses it to implement a data management system for the MUSER. Built on a key-value database, the ND technique derives the requisite information from the complement set of the observational data. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive absent records, its overall performance, including querying and deriving the data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
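The complement-set idea can be shown with a toy sketch (illustrative only; the MUSER system builds its ND on a key-value store over structured record keys, not a small integer universe): when the expected key space is enumerable and observations are dense, storing only the missing keys is far cheaper than storing every observed record's presence, and membership is derived by negation.

```python
# Toy negative-database sketch. The universe is a hypothetical enumerable
# key space, e.g. the frame numbers expected during an observation day.
UNIVERSE = set(range(100))

def build_nd(observed):
    """Store only the complement: the keys for which no record arrived."""
    return UNIVERSE - set(observed)

def was_observed(nd, key):
    """Derive membership without ever storing the positive set."""
    return key in UNIVERSE and key not in nd

# Dense data with sparse gaps: everything arrived except multiples of 7.
observed = [k for k in range(100) if k % 7 != 0]
nd = build_nd(observed)
print(len(nd))  # → 15: the ND stores 15 keys instead of 85 positive entries
print(was_observed(nd, 13), was_observed(nd, 14))  # → True False
```

The saving grows with data density: the closer the observed set is to the full universe, the smaller its complement.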

  4. Reconfigurable and adaptive photonic networks for high-performance computing systems.

    Science.gov (United States)

    Kodi, Avinash; Louri, Ahmed

    2009-08-01

    As feature sizes decrease to the submicrometer regime and clock rates increase to the multigigahertz range, the limited bandwidth at higher bit rates and longer communication distances in electrical interconnects will create a major bandwidth imbalance in future high-performance computing (HPC) systems. We explore the application of an optoelectronic interconnect for the design of flexible, high-bandwidth, reconfigurable and adaptive interconnection architectures for chip-to-chip and board-to-board HPC systems. Reconfigurability is realized by interconnecting arrays of optical transmitters, and adaptivity is implemented by a dynamic bandwidth reallocation (DBR) technique that balances the load on each communication channel. We evaluate a DBR technique, the lockstep (LS) protocol, that monitors traffic intensities, reallocates bandwidth, and adapts to changes in communication patterns. We incorporate this DBR technique into a detailed discrete-event network simulator to evaluate its performance for uniform, nonuniform, and permutation communication patterns. Simulation results indicate that, without reconfiguration, the optical system architecture shows better performance than electrical interconnects for uniform and nonuniform patterns; with reconfiguration, the dynamically reconfigurable optoelectronic interconnect provides much better performance for all communication patterns. Based on this performance study, the reconfigured architecture shows 30%-50% higher throughput and 50%-75% lower network latency compared with electrical HPC networks.
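The reallocation step of such a DBR scheme can be sketched as a proportional re-division of a fixed pool of bandwidth units (a loose illustration of DBR in general, not the LS protocol's actual monitoring and signaling): measured traffic intensities drive the split, every channel keeps a minimum allocation, and leftover whole units go to the channels with the largest fractional remainders.

```python
def reallocate(total_units, intensities, min_units=1):
    """Divide `total_units` of bandwidth among channels in proportion to their
    measured traffic intensity, guaranteeing each channel `min_units`."""
    n = len(intensities)
    alloc = [min_units] * n
    spare = total_units - min_units * n
    total = sum(intensities)
    if spare <= 0 or total == 0:
        return alloc
    # Ideal fractional share of the spare pool for each channel.
    ideal = [spare * x / total for x in intensities]
    for i in range(n):
        alloc[i] += int(ideal[i])
    # Hand the remaining whole units to the largest fractional remainders.
    leftover = total_units - sum(alloc)
    order = sorted(range(n), key=lambda i: ideal[i] - int(ideal[i]), reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# One hot channel and three light ones: bandwidth follows the load.
print(reallocate(16, [8, 1, 1, 2]))  # → [9, 2, 2, 3]
```

Re-running this on every monitoring window is what lets the interconnect adapt as communication patterns shift.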

  5. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

    This report documents our first-year efforts to address the use of many-core processors for high performance cyber protection. As demands grow for higher bandwidth (beyond 1 Gbit/s) on network connections, the need for faster and more efficient cyber security solutions grows with them. Fortunately, in recent years, many-core network processors have attracted increasing interest. Prior working experience with many-core processors led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space, and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.

  6. Frontline health care workers and perceived career mobility: do high-performance work practices make a difference?

    Science.gov (United States)

    Dill, Janette S; Morgan, Jennifer Craft; Weiner, Bryan

    2014-01-01

    The use of high-performance work practices (HPWPs) related to career development (e.g., tuition remission, career ladders) is becoming more common in health care organizations, where skill shortages and concerns about quality of care have led to increasing investment in the frontline health care workforce. However, few studies have examined the effectiveness of these policies in shaping the career trajectories of health care workers. The aim of this study is to examine how HPWPs that focus on career development are related to an individual's perceived mobility with their current employer. We also examine the relationships between perceived mobility, job satisfaction, and turnover intent. We use confirmatory factor analysis and structural equation modeling to examine the relationships between HPWPs and perceived mobility in a sample of 947 frontline health care workers in 22 health care organizations across the United States. The findings suggest that tuition remission and educational release time positively predict perceived mobility. Measures of perceived organizational support in one's current position (e.g., financial rewards, workload, and autonomy) and perceived supervisor support for career development are also significant predictors of perceived mobility. Finally, perceived mobility is a significant predictor of job satisfaction and intent to stay with one's current employer. Our findings suggest that HPWPs related to career development may be effective tools for improving workers' assessments of their own career potential and improving the overall job satisfaction of frontline health care workers. Consequently, HPWPs related to career development may help employers both retain valuable workers and fill worker shortages.

  7. Detection of HEMA in self-etching adhesive systems with high performance liquid chromatography

    Science.gov (United States)

    Panduric, V.; Tarle, Z.; Hameršak, Z.; Stipetić, I.; Matosevic, D.; Negovetić-Mandić, V.; Prskalo, K.

    2009-04-01

    One of the factors that can decrease the hydrolytic stability of self-etching adhesive systems (SEAS) is 2-hydroxyethyl methacrylate (HEMA). Due to the hydrolytic instability of acidic methacrylate monomers in SEAS, HEMA can be present even if the manufacturer did not include it in the original composition. The aim of the study was to determine the presence of HEMA arising from hydrolytic decomposition of methacrylates during storage, which results in loss of adhesion strength to the hard dental tissues of the tooth crown. The three most commonly used SEAS were tested under different storage conditions: AdheSE ONE, G-Bond, and iBond. High performance liquid chromatography analysis was performed on a Nucleosil C18-100 5 μm (250 × 4.6 mm) column with Knauer K-501 pumps and a Wellchrom DAD K-2700 detector at 215 nm. Data were collected and processed by EuroChrom 2000 HPLC software. Calibration curves were made relating eluted peak area to known concentrations of HEMA (purchased from Fluka). The elution time for HEMA is 12.25 min at a flow rate of 1.0 ml/min. The obtained results indicate that no HEMA was present in AdheSE ONE, because its methacrylates are substituted with methacrylamides, which appear to be more stable under acidic aqueous conditions. In all other adhesive systems HEMA was detected.
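The calibration step ("relating eluted peak area to known concentrations") is an ordinary least-squares line; a minimal sketch with made-up numbers (the abstract does not give the actual HEMA standards or peak areas):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope/intercept for a calibration line:
    area = m * concentration + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def concentration(area, m, b):
    """Invert the calibration line to read an unknown concentration
    off a measured peak area."""
    return (area - b) / m

# Hypothetical calibration standards: HEMA concentration (mM) vs. peak area.
conc = [0.5, 1.0, 2.0, 4.0]
area = [12.0, 24.0, 48.0, 96.0]
m, b = fit_line(conc, area)
print(round(concentration(60.0, m, b), 2))  # → 2.5
```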

  8. A High Performance Pocket-Size System for Evaluations in Acoustic Signal Processing

    Directory of Open Access Journals (Sweden)

    Steeger Gerhard H

    2001-01-01

    Full Text Available Custom-made hardware is attractive for sophisticated signal processing in wearable electroacoustic devices, but has a high initial cost overhead. Thus, signal processing algorithms should be tested thoroughly in real application environments by potential end users prior to the hardware implementation. In addition, the algorithms should be easily alterable during this test phase. A wearable system which meets these requirements has been developed and built. The system is based on the high performance signal processor Motorola DSP56309. This device also includes high quality stereo analog-to-digital (ADC) and digital-to-analog (DAC) converters with 20 bit word length each. The available dynamic range exceeds 88 dB. The input and output gains can be adjusted by digitally controlled potentiometers. The housing of the unit is small enough to carry it in a pocket (dimensions 150 × 80 × 25 mm). Software tools have been developed to ease the development of new algorithms. A set of configurable Assembler code modules implements all hardware-dependent software routines and gives easy access to the peripherals and interfaces. A comfortable fitting interface allows easy control of the signal processing unit from a PC, even by assistant personnel. The device has proven to be a helpful means for development and field evaluations of advanced new hearing aid algorithms within interdisciplinary research projects. Now it is offered to the scientific community.
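As a sanity check on the quoted figures: an ideal N-bit converter has a theoretical dynamic range of roughly 6.02 N + 1.76 dB, so 20-bit converters have a ceiling of about 122 dB, and the 88 dB realized by the full analog chain sits plausibly below it. The standard formula, sketched:

```python
import math

def ideal_dynamic_range_db(bits):
    """Theoretical SNR/dynamic range of an ideal N-bit quantizer driven by a
    full-scale sine: 20*log10(2**N) + 10*log10(1.5), i.e. ~6.02*N + 1.76 dB."""
    return 20 * math.log10(2 ** bits) + 10 * math.log10(1.5)

dr20 = ideal_dynamic_range_db(20)
print(round(dr20, 1))  # → 122.2, well above the system's measured 88 dB
```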

  9. Understanding Values in a Large Health Care Organization through Work-Life Narratives of High-Performing Employees

    Directory of Open Access Journals (Sweden)

    Orit Karnieli-Miller

    2011-10-01

    Full Text Available Objective: To understand high-performing frontline employees' values as reflected in their narratives of day-to-day interactions in a large health care organization. Methods: A total of 150 employees representing various roles within the organization were interviewed and asked to share work-life narratives (WLNs) about value-affirming situations (i.e., situations in which they believed their actions to be fully aligned with their values) and value-challenging situations (i.e., situations in which their actions or the actions of others were not consistent with their values), using methods based on appreciative inquiry. Results: The analysis revealed 10 broad values. Most of the value-affirming WLNs were about the story-teller and team providing care for the patient/family. Half of the value-challenging WLNs were about the story-teller or a patient and barriers created by the organization, supervisor, or physician. Almost half of these focused on "treating others with disrespect/respect". Only 15% of the value-challenging WLNs contained a resolution reached by the participants, often leaving them with unresolved and frequently negative feelings. Conclusions: Appreciative inquiry and thematic analysis methods were found to be an effective tool for understanding the important and sometimes competing roles personal and institutional values play in day-to-day work. There is remarkable potential in using WLNs as a way to surface and reinforce shared values and, perhaps more importantly, to respectfully identify and discuss conflicting personal and professional values.

  10. Employee Ownership and Organizational Citizenship Behavior: High Performance Ownership Systems and the Mediating Role of Psychological Ownership

    NARCIS (Netherlands)

    Poutsma, F.; Eert, C. van; Ligthart, P.E.M.

    2015-01-01

    This paper investigated the effect of employee share ownership, mediated through psychological ownership, on organizational citizenship behavior. The analysis included the possible complementary role of High Performance Ownership systems. This paper investigated these relationships by analyzing

  11. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    Science.gov (United States)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains, such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support to define processing chains and workflows with tens to hundreds of data analytics operators, to build real scientific use cases. With regard to interoperability aspects, the talk will present the contributions provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF).
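A server-side "datacube" reduction of the kind such frameworks provide can be shown in miniature (a toy in-memory cube; Ophidia's real operators run declaratively and in parallel against its own storage model): a reduction operator collapses one dimension, here time, producing a smaller cube the client never had to download.

```python
def reduce_time(cube, op=lambda vals: sum(vals) / len(vals)):
    """cube[time][lat][lon] -> grid[lat][lon], applying `op` along the
    time dimension (default: mean, e.g. a climatological time average)."""
    nlat, nlon = len(cube[0]), len(cube[0][0])
    return [[op([cube[t][i][j] for t in range(len(cube))])
             for j in range(nlon)] for i in range(nlat)]

# Three timesteps of a 2x2 grid:
cube = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[3.0, 2.0], [3.0, 8.0]],
    [[5.0, 2.0], [3.0, 0.0]],
]
print(reduce_time(cube))  # → [[3.0, 2.0], [3.0, 4.0]]
```

Swapping `op` (mean, max, ...) mimics choosing a different declarative reduction operator over the same cube.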

  12. High Performance and Accurate Change Detection System for HyspIRI Missions Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose novel and high performance change detection algorithms to process HyspIRI data, which have been used for monitoring changes in vegetation, climate,...

  13. A high performance, electric pump-fed LOX / RP propulsion system Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To-date, the realization of small-scale, high-performance liquid bipropellant rocket engines has largely been limited by the inability to operate at high chamber...

  14. Advanced Insulation for High Performance Cost-Effective Wall, Roof, and Foundation Systems Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Costeux, Stephane [Dow Chemical Company, Midland, MI (United States); Bunker, Shanon [Dow Chemical Company, Midland, MI (United States)

    2013-12-20

    The objective of this project was to explore and potentially develop high-performing insulation with increased R/inch and low impact on climate change that would help design highly insulating building envelope systems with more durable performance and lower overall system cost than envelopes of equivalent performance made with materials available today. The proposed technical approach relied on insulation foams with nanoscale pores (about 100 nm in size) in which heat transfer is decreased. Through the development of new foaming methods, new polymer formulations and new analytical techniques, and by advancing the understanding of how cells nucleate, expand and stabilize at the nanoscale, Dow successfully invented and developed methods to produce foams with 100 nm cells and 80% porosity by batch foaming at the laboratory scale. Measurements of the gas conductivity on small nanofoam specimens confirmed quantitatively the benefit of nanoscale cells (the Knudsen effect) for increasing insulation value, which was the key technical hypothesis of the program. In order to bring this technology closer to a viable semi-continuous/continuous process, the project team modified an existing continuous extrusion foaming process as well as designed and built a custom system to produce 6" x 6" foam panels. Dow demonstrated for the first time that nanofoams can be produced in both processes. However, due to technical delays, foam characteristics achieved so far fall short of the 100 nm target set for optimal insulation foams. In parallel with the technology development, effort was directed to determining the most promising applications for nanocellular insulation foam. A Voice of Customer (VOC) exercise confirmed that demand for high-R-value products will rise due to increased building code requirements in the near future, but that acceptance of novel products by the building industry may be slow. Partnerships with green builders, initial launches in smaller markets (e.g. EIFS
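The Knudsen effect mentioned above is commonly modeled by suppressing the gas-phase conductivity as k_gas = k0 / (1 + 2·β·Kn), with Kn = λ/d the ratio of the gas mean free path to the pore size; once pores shrink toward 100 nm, Kn approaches 1 and gas conduction collapses. A sketch with indicative values (k0, λ and β below are rough ambient-air numbers, not Dow's measurements):

```python
def gas_conductivity(pore_size_m, k0=0.026, mean_free_path_m=70e-9, beta=1.64):
    """Knudsen-suppressed gas conductivity in W/(m*K):
    k = k0 / (1 + 2*beta*Kn), with Kn = mean free path / pore size.
    k0 and the mean free path are rough values for ambient air;
    beta (~1.5-2) depends on the gas and accommodation."""
    kn = mean_free_path_m / pore_size_m
    return k0 / (1 + 2 * beta * kn)

for d in (10e-6, 1e-6, 100e-9):
    print(f"{d*1e9:7.0f} nm pores -> {gas_conductivity(d)*1000:.1f} mW/(m*K)")
```

With these numbers, 100 nm pores cut the gas-phase contribution by roughly a factor of three relative to micron-scale cells, which is the R/inch gain the project pursued.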

  15. Pyrolytic carbon-coated stainless steel felt as a high-performance anode for bioelectrochemical systems.

    Science.gov (United States)

    Guo, Kun; Hidalgo, Diana; Tommasi, Tonia; Rabaey, Korneel

    2016-07-01

    Scale-up of bioelectrochemical systems (BESs) requires highly conductive, biocompatible and stable electrodes. Here we present pyrolytic carbon-coated stainless steel felt (C-SS felt) as a high-performance and scalable anode. The electrode is created by generating a carbon layer on stainless steel felt (SS felt) via a multi-step deposition process involving α-D-glucose impregnation, caramelization, and pyrolysis. Physicochemical characterizations of the surface elucidate that a thin (20 ± 5 μm) and homogeneous layer of polycrystalline graphitic carbon was obtained on the SS felt surface after modification. The carbon coating significantly increases the biocompatibility, enabling robust electroactive biofilm formation. The C-SS felt electrodes reach current densities (j_max) of 3.65 ± 0.14 mA/cm² within 7 days of operation, which is 11 times higher than plain SS felt electrodes (0.30 ± 0.04 mA/cm²). The excellent biocompatibility, high specific surface area, high conductivity, good mechanical strength, and low cost make C-SS felt a promising electrode for BESs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. High-performance liquid chromatographic postcolumn reaction detection based on a competitive binding system

    Energy Technology Data Exchange (ETDEWEB)

    Przyjazny, A.; Kjellstroem, T.L.; Bachas, L.G. (Univ. of Kentucky, Lexington (USA))

    1990-12-01

    Postcolumn reactions are typically employed to improve detection in high-performance liquid chromatography (HPLC) separation techniques. This study proposes the use of competitive binding principles in designing novel postcolumn reaction schemes. The feasibility of this approach was tested by using the HPLC determination of biotin and biocytin as a model system. The effluent from the HPLC column was merged with a reagent stream containing avidin, whose binding sites were occupied by the dye HABA (2-(4′-hydroxyphenylazo)benzoic acid). HABA was displaced by the analytes from the avidin-HABA complex and the free dye was determined with a UV-vis detector at 345 nm. The procedure was optimized with respect to reactor design, reagent concentrations, and the flow rate of the reagent solution. Analytical characteristics of the developed procedure were determined and compared with the direct detection of biotin and biocytin at 220 nm. The postcolumn reaction scheme improved the selectivity and sensitivity of the detection of biotin and biocytin while maintaining similar detection limits.

  17. High performance dash on warning air mobile, missile system. [intercontinental ballistic missiles - systems analysis

    Science.gov (United States)

    Levin, A. D.; Castellano, C. R.; Hague, D. S.

    1975-01-01

    An aircraft-missile system which performs a high-acceleration takeoff followed by a supersonic dash to a 'safe' distance from the launch site is presented. Topics considered are: (1) the technological feasibility of the dash-on-warning concept; (2) aircraft and boost trajectory requirements; and (3) partial cost estimates for a fleet of aircraft which provides 200 missiles on airborne alert. Various aircraft boost propulsion systems were studied, such as an unstaged cryogenic rocket, an unstaged storable liquid, and a staged solid-rocket system. Various wing planforms were also studied. Vehicle gross weights are given. The results indicate that the dash-on-warning concept will meet expected performance criteria and can be implemented using existing technology, such as all-aluminum aircraft and existing high-bypass-ratio turbofan engines.

  18. Joining of ceramics for high performance energy systems. Mid-term progress report, August 1, 1979-March 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Smeltzer, C E; Metcalfe, A G

    1980-10-06

    The subject program is primarily an exploratory and demonstration study of the use of silicate glass-based adhesives for bonding silicon-base refractory ceramics (SiC, Si3N4). The projected application is 1250 to 2050 °F relaxing-joint service in high-performance energy conversion systems. The five program tasks and their current status are as follows. Task 1 - Long-Term Joint Stability. Time-temperature-transformation studies of candidate glass adhesives, out to 2000 hours simulated service exposure, are half complete. Task 2 - Environmental and Service Effects on Joint Reliability. Start-up delayed due to late delivery of candidate glass fillers and ceramic specimens. Task 3 - Viscoelastic Damping of Glass-Bonded Ceramics. Promising results obtained over approximately the same range of glass viscosity required for the joint relaxation function (10^7.5 to 10^9.5 poise). Work is 90% complete. Task 4 - Crack Arrest and Crack Diversion by Joints. No work started due to late arrival of materials. Task 5 - Improved Joining and Fabrication Methods. Significant work has been conducted in the area of refractory pre-glazing and the application and bonding of high-density candidate glass fillers (by both hand-artisan and slip-spray techniques). Work is half complete.

  19. Vactub, a new high performance insulation system for pipe in pipe in ultra-deep water

    Energy Technology Data Exchange (ETDEWEB)

    Chenin, L. [Bouygues Offshore, Montigny-le-Bretonneux, 78 - St-Quentin-Yvelines (France); Poirson, L. [Saibos, Montigny-le-Bretonneux, 78 - St-Quentin-Yvelines (France)

    2002-12-01

    fabricate these insulation cylinders has been developed. An important qualification program has been set up, and prototypes will be tested with regard to heat conductivity, ageing, abrasion resistance, bending, compression, and shear strength. Given the industry's expressed need for high performance insulation, this effort is targeted to be complete and successful by the end of 2002, with marketing to begin in mid-2003. (authors)

  20. High performance MRI simulations of motion on multi-GPU systems

    Science.gov (United States)

    2014-01-01

    Background: MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Methods: Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results: Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions: MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration.

  1. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. 
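The per-timestep displacement-and-phase bookkeeping described above can be illustrated with a minimal, pure-Python sketch (not MRISIMUL's actual GPU kernel): an isochromat moving through a constant gradient accumulates phase that a stationary one does not, which is the origin of the flow artifacts the simulator reproduces. The motion model and parameter values are illustrative.

```python
import math

GAMMA = 2 * math.pi * 42.577e6  # proton gyromagnetic ratio (rad/s/T)

def accumulate_phase(x0, v, grad, dt, steps):
    """Phase accrued by one isochromat under a constant x-gradient `grad`
    (T/m), with its position updated by a constant-velocity motion model
    at every timestep, as in a stepwise Bloch simulation."""
    phase = 0.0
    for n in range(steps):
        x = x0 + v * n * dt              # motion model: displace the isochromat
        phase += GAMMA * grad * x * dt   # local off-resonance integrated over dt
    return phase

static_spin = accumulate_phase(0.0, 0.0, 10e-3, 1e-5, 1000)   # stays at isocenter
flowing_spin = accumulate_phase(0.0, 0.1, 10e-3, 1e-5, 1000)  # 0.1 m/s flow
```

A GPU implementation evaluates the same update for every isochromat in parallel; only the motion model (cardiac, respiratory, flow) changes how the position is updated each timestep.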

  2. Optimisation and coupling of high-performance photocyclic initiating systems for efficient holographic materials (Conference Presentation)

    Science.gov (United States)

    Ley, Christian; Carré, Christian; Ibrahim, Ahmad; Allonas, Xavier

    2017-05-01

    For the fabrication of diffractive optical elements or for holographic data storage, photopolymer materials have turned out to be serious candidates, given their performance in terms of high spatial resolution, dry processing capability, ease of use, and high versatility. From the chemical point of view, several organic materials are able to exhibit refractive index changes resulting from polymerization, crosslinking or depolymerization, such as mixtures of monomers with several reactive functions and oligomers, associated with additives, fillers and a photoinitiating system (PIS). In this work, the efficiencies of two- and three-component PISs as holographic recording materials are analyzed in terms of photopolymerization kinetics and diffraction yield. The selected systems are based on visible dyes, an electron donor and an electron acceptor. In order to investigate the influence of the photophysical properties of the dye on the performance of the holographic recording material, time-resolved and steady-state spectroscopic studies of the PIS are presented. These detailed photochemical studies of the PIS outline the possible existence of photocyclic initiating systems (PCIS), in which the dye is regenerated during the chemical process. Simultaneously, these visible systems are associated with fluorinated acrylate monomers for the recording of transmission gratings. To get more insight into the hologram formation, grating recording curves were compared to those of monomer-to-polymer conversion obtained by real-time Fourier transform infrared spectroscopy. This work outlines the importance of the coupling of the photochemical reactions and the holographic resin. Moreover, the application of the PCIS in holographic recording outlines the importance of the photochemistry for the final holographic material properties: here a sensitive material with high diffraction yield is described.

  3. Self-Centering Seismic Lateral Force Resisting Systems: High Performance Structures for the City of Tomorrow

    Directory of Open Access Journals (Sweden)

    Nathan Brent Chancellor

    2014-09-01

    Full Text Available Structures designed in accordance with even the most modern building codes are expected to sustain damage during a severe earthquake; however, these structures are expected to protect the lives of the occupants. Damage to the structure can require expensive repairs, significant business downtime, and in some cases building demolition. If damage occurs to many structures within a city or region, the regional and national economy may be severely disrupted. To address these shortcomings with current seismic lateral force resisting systems and to work towards more resilient, sustainable cities, a new class of seismic lateral force resisting systems that sustains little or no damage under severe earthquakes has been developed. These new seismic lateral force resisting systems reduce or prevent structural damage to nonreplaceable structural elements by softening the structural response elastically through gap opening mechanisms. To dissipate seismic energy, friction elements or replaceable yielding energy dissipation elements are also included. Post-tensioning is often used as a part of these systems to return the structure to a plumb, upright position (self-center) after the earthquake has passed. This paper summarizes the state of the art for self-centering seismic lateral force resisting systems and outlines current research challenges for these systems.

  4. Instructional leadership in centralised systems: evidence from Greek high-performing secondary schools

    OpenAIRE

    Kaparou, Mary; Bush, Tony

    2015-01-01

    This paper examines the enactment of instructional leadership in high-performing secondary schools (HPSS), and the relationship between leadership and learning in raising student outcomes and encouraging teachers’ professional learning in the highly centralised context of Greece. It reports part of a comparative research study focused on whether, and to what extent, instructional leadership has been embraced by Greek school leaders. The study is exploratory, using a qualitative multiple case ...

  5. Isolation, pointing, and suppression (IPS) system for high-performance spacecraft

    Science.gov (United States)

    Hindle, Tim; Davis, Torey; Fischer, Jim

    2007-04-01

    Passive mechanical isolation is oftentimes the first step taken to remedy vibration issues on board a spacecraft. In many cases, this is done with a hexapod of axial members or struts to obtain the desired passive isolation in all six degrees of freedom (DOF). In some instances, where the disturbance sources are excessive or the payload is particularly sensitive to vibration, additional steps are taken to improve the performance beyond that of passive isolation. Additional performance or functionality can be obtained with the addition of active control, using a hexapod of hybrid (passive/active) elements at the interface between the payload and the bus. This paper describes Honeywell's Isolation, Pointing, and Suppression (IPS) system. It is a hybrid isolation system designed to isolate a sensitive spacecraft payload with very low passive resonant break frequencies while affording agile independent payload pointing, on-board payload disturbance rejection, and active isolation augmentation. This system is an extension of the work done on Honeywell's previous Vibration Isolation, Steering, and Suppression (VISS) flight experiment. Besides being designed for a different size payload than VISS, the IPS strut includes a dual-stage voice coil design for improved dynamic range as well as improved low-noise drive electronics. In addition, the IPS struts include integral load cells, gap sensors, and payload-side accelerometers for control and telemetry purposes. The associated system-level control architecture to accomplish these tasks is also new for this program as compared to VISS. A summary of the IPS system, including analysis and hardware design, build, and single-axis bipod testing will be reviewed.

  6. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    Science.gov (United States)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  7. Development of a high-performance multichannel system for time-correlated single photon counting

    Science.gov (United States)

    Peronio, P.; Cominelli, A.; Acconcia, G.; Rech, I.; Ghioni, M.

    2017-05-01

    Time-Correlated Single Photon Counting (TCSPC) is one of the most effective techniques for measuring weak and fast optical signals. It outperforms traditional "analog" techniques due to its high sensitivity along with high temporal resolution. Despite those significant advantages, a main drawback still exists, related to the long acquisition time needed to perform a measurement. In past years many TCSPC systems have been developed with higher and higher numbers of channels, aimed at dealing with that limitation. Nevertheless, modern systems suffer from a strong trade-off between parallelism level and performance: the higher the number of channels, the poorer the performance. In this work we present the design of a 32x32 TCSPC system meant to overcome the existing trade-off. To this aim, different technologies have been employed to get the best performance from both the detectors and the sensing circuits. The exploitation of different technologies will be enabled by Through Silicon Vias (TSVs), which will be investigated as a possible solution for connecting the detectors to the sensing circuits. When dealing with a high number of channels, the count rate is inevitably set by the affordable throughput to the external PC. We targeted a throughput of 10 Gb/s, which is beyond the state of the art, and designed the number of TCSPC channels accordingly. A dynamic-routing logic will connect the detectors to the lower number of acquisition chains.
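The sizing argument, channels versus link throughput, can be checked with simple arithmetic. The 32-bit event word assumed below (channel address plus timestamp) is illustrative, not a figure from the paper:

```python
# Back-of-envelope budget for a many-channel TCSPC readout.
event_bits = 32                          # assumed size of one photon record
link_bps = 10e9                          # targeted 10 Gb/s throughput to the PC
events_per_s = link_bps / event_bits     # total sustainable event rate
per_detector = events_per_s / (32 * 32)  # if all 1024 pixels shared it evenly
```

With roughly 3×10^5 events per second per pixel at best, far fewer acquisition chains than detectors suffice, which is what the dynamic-routing logic exploits.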

  8. Optical Fiber Demodulation System with High Performance for Assessing Fretting Damage of Steam Generator Tubes

    Directory of Open Access Journals (Sweden)

    Peijian Huang

    2018-01-01

    Full Text Available In order to assess the fretting damage of the steam generator tube (SGT), a fast fiber Fabry-Perot (F-P) non-scanning correlation demodulation system based on a super luminescent light emitting diode (SLED) was developed. By demodulating the light signal coming out of the F-P force sensor, the radial collision force between the SGT and the tube support plate (TSP) was interrogated. For higher demodulation accuracy, the effects of the center wavelength, bandwidth, and spectrum noise of the SLED are discussed in detail. In particular, a piezoelectric ceramic transducer (PZT) modulation method was developed to eliminate the interference from mode coupling induced by the different types of optical fiber in the demodulation system. The reflectivity of the optical wedge and the F-P sensor was optimized. Finally, the demodulation system worked well in a 1:1 steam generator test loop and successfully demodulated a force signal of 32 N with a collision time of 2 ms.

  9. Architecture of a high-performance PACS based on a shared file system

    Science.gov (United States)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high-speed, fault-tolerant image file server or Working Storage Unit (WSU) combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low-cost, easy-to-use, yet extremely powerful medical image workstations. The Siemens Litebox™ application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  10. MoS2 Quantum Dots with a Tunable Work Function for High-Performance Organic Solar Cells.

    Science.gov (United States)

    Xing, Wang; Chen, Yusheng; Wang, Xinlong; Lv, Lei; Ouyang, Xinhua; Ge, Ziyi; Huang, Hui

    2016-10-12

    An efficient hole extraction layer (HEL) is critical to achieve high-performance organic solar cells (OSCs). In this study, we developed a pinhole-free and efficient HEL based on MoS2 quantum dots (QDs) combined with UV-ozone (UVO) treatment. The optophysical properties and morphology of the MoS2 QDs and their photovoltaic performance were investigated. The results showed that MoS2 QDs can form homogeneous films and can be applied as an interfacial layer not only for donors with shallow highest occupied molecular orbital (HOMO) energy levels but also for those with deep HOMO energy levels after UVO treatment (O-MoS2 QDs). The solar cells based on O-MoS2 QDs yield a power conversion efficiency (PCE) of 8.66%, which is 71% and 12% higher than those of the OSCs with pristine MoS2 QDs and O-MoS2 nanosheets, respectively, and the highest PCE reported for OSCs containing MoS2 materials. Furthermore, the stability of the solar cells based on MoS2 QDs is greatly improved in comparison with state-of-the-art PEDOT:PSS. These results demonstrate the great potential of O-MoS2 QDs as an efficient HEL for high-performance OSCs.
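As a quick consistency check on the quoted numbers, the baselines implied by "71% and 12% higher" can be back-computed from the 8.66% PCE (simple arithmetic on the reported values, nothing more):

```python
pce_o_mos2_qd = 8.66                      # reported PCE of the O-MoS2 QD device, %
pce_pristine_qd = pce_o_mos2_qd / 1.71    # baseline implied by "71% higher"
pce_nanosheet = pce_o_mos2_qd / 1.12      # baseline implied by "12% higher"
```

That is, roughly 5.1% for pristine MoS2 QDs and 7.7% for O-MoS2 nanosheets.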

  11. Relationships of Cognitive and Metacognitive Learning Strategies to Mathematics Achievement in Four High-Performing East Asian Education Systems

    Science.gov (United States)

    Areepattamannil, Shaljan; Caleon, Imelda S.

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education…

  12. Constructing a LabVIEW-Controlled High-Performance Liquid Chromatography (HPLC) System: An Undergraduate Instrumental Methods Exercise

    Science.gov (United States)

    Smith, Eugene T.; Hill, Marc

    2011-01-01

    In this laboratory exercise, students develop a LabVIEW-controlled high-performance liquid chromatography system utilizing a data acquisition device, two pumps, a detector, and fraction collector. The programming experience involves a variety of methods for interface communication, including serial control, analog-to-digital conversion, and…

  13. High Performance Network Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Jesse E [Los Alamos National Laboratory

    2012-08-10

    Network monitoring requires a substantial use of data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
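A threshold filter of the kind described, watching port counters and alerting on-call administrators, can be sketched in a few lines of Python; the counter names and limits below are illustrative, not the production ibmon2 filters:

```python
# Illustrative port-counter threshold filter (counter names and limits assumed).
THRESHOLDS = {"SymbolErrorCounter": 10, "LinkDownedCounter": 2}

def check_counters(host, counters):
    """Return alert strings for any counter exceeding its threshold."""
    alerts = []
    for name, value in counters.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{host}: {name}={value} exceeds threshold {limit}")
    return alerts

# A host with excessive symbol errors triggers exactly one alert.
alerts = check_counters("node042", {"SymbolErrorCounter": 25, "LinkDownedCounter": 0})
```

In production such a filter would feed its alerts into Zenoss or Splunk rather than returning them to a caller.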

  14. High performance thermal insulation systems (HiPTI). Vacuum insulated products (VIP). Proceedings of the international conference and workshop

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, M.; Bertschinger, H.

    2001-07-01

    These are the proceedings of the International Conference and Workshop held at EMPA Duebendorf, Switzerland, in January 2001. The papers presented at the conference's first day included contributions on the role of high-performance insulation in energy efficiency - providing an overview of available technologies and reviewing physical aspects of heat transfer and the development of thermal insulation as well as the state of the art of glazing technologies such as high-performance and vacuum glazing. Also, vacuum-insulated products (VIP) with fumed silica, applications of VIP systems in technical building systems, nanogels, VIP packaging materials and technologies, measurement of physical properties, VIP for advanced retrofit solutions for buildings and existing and future applications for advanced low energy building are discussed. Finally, research and development concerning VIP for buildings are reported on. The workshops held on the second day covered a preliminary study on high-performance thermal insulation materials with gastight porosity, flexible pipes with high performance thermal insulation, evaluation of modern insulation systems by simulation methods as well as the development of vacuum insulation panels with a stainless steel envelope.

  15. Dynamic behavior of radiant cooling system based on capillary tubes in walls made of high performance concrete

    DEFF Research Database (Denmark)

    Mikeska, Tomás; Svendsen, Svend

    2015-01-01

    This paper reports on experimental analyses evaluating the dynamic behavior of a test room equipped with a radiant cooling system composed of plastic capillary tubes integrated into the inner layer of sandwich wall elements made of high performance concrete. The influence of the radiant cooling system on the indoor climate of the test room in terms of the air, surface and operative temperatures and velocities was investigated. The results show that the temperature of the room air can be kept in a comfortable range using cooling water for the radiant cooling system with a temperature only about 4 K lower than the temperature of the room air, while supplying the small amount of fresh air required by standards to provide a healthy indoor environment. The relatively high speed of reaction of the designed system is a result of the slim construction of the sandwich wall elements made of high performance concrete.

  16. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth Systems Prediction Capability Becomes Operational

    Science.gov (United States)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing connecting data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability.

  17. HPTLC-aptastaining - Innovative protein detection system for high-performance thin-layer chromatography

    Science.gov (United States)

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-05-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on the investigation of lysozyme, an enzyme which occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols leads to improved sensitivity for protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides providing the first proof of its applicability, it is shown that (i) aptamer-based staining of proteins is applicable on different stationary phase materials and (ii) it can be used as an approach for a semi-quantitative estimation of protein concentrations.

  18. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
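The partitioning idea, snoop units forwarding coherence traffic only within a configured group, can be modeled in a few lines; this is a toy illustration of the concept, not the patented hardware logic:

```python
def make_groups(n_units, sizes):
    """Split processing-unit ids 0..n_units-1 into adjustable-size,
    independent coherence groups (sizes must sum to n_units)."""
    assert sum(sizes) == n_units
    groups, start = [], 0
    for size in sizes:
        groups.append(list(range(start, start + size)))
        start += size
    return groups

def snoop_targets(unit, groups):
    """A snoop request from `unit` is forwarded only within its own group,
    keeping each partition memory-consistent and isolated from the rest."""
    for group in groups:
        if unit in group:
            return [u for u in group if u != unit]
    raise ValueError("unit not assigned to any group")

# Eight units split into two independent four-unit partitions.
groups = make_groups(8, [4, 4])
```

Reconfiguring the partition is then just a matter of rebuilding the group table, which mirrors the "adjustable-size processing groups" in the abstract.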

  19. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, P. Jr.; Rebeil, J.P. [Sandia National Labs., Albuquerque, NM (United States); Pollard, H. [Univ. of New Mexico, Albuquerque, NM (United States). Electrical Engineering and Computer Engineering Dept.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  20. A new maltose-inducible high-performance heterologous expression system in Bacillus subtilis.

    Science.gov (United States)

    Yue, Jie; Fu, Gang; Zhang, Dawei; Wen, Jianping

    2017-08-01

    To improve heterologous protein production, we constructed a maltose-inducible expression system in Bacillus subtilis. An expression system based on the promoter for maltose utilization was constructed in B. subtilis. Subsequently, to improve the performance of the PmalA-derived system, mutagenesis was employed by gradually shortening the length of the PmalA promoter and altering the spacing between the predicted MalR binding site and the -35 region. Furthermore, deletion of the maltose utilization genes (malL and yvdK) improved the PmalA promoter activity. Finally, using this efficient maltose-inducible expression system, we enhanced the production of luciferase and D-aminoacylase compared with the PhpaII system. A maltose-inducible expression system was constructed and evaluated. It can be used for high-level expression of heterologous proteins.

  1. Affordable Resins for High-Performance, Ablative Thermal Protection Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Cornerstone Research Group Inc. (CRG) proposes to advance fundamental material development of a high-temperature resistant, multifunctional polymer system conceived...

  2. Next Generation Advanced Binder Chemistries for High Performance, Environmentally Durable Thermal Control Material Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This innovative SBIR Phase I proposal will develop new binder systems through the systematic investigations to tailor required unique performance properties and...

  3. Development of Nano-structured Electrode Materials for High Performance Energy Storage System

    Science.gov (United States)

    Huang, Zhendong

    Systematic studies have been done to develop a low-cost, environmentally friendly, facile fabrication process for the preparation of high-performance nanostructured electrode materials and to fully understand the factors influencing their electrochemical performance in lithium-ion batteries (LIBs) and supercapacitors. For LIBs, LiNi1/3Co1/3Mn1/3O2 (NCM) with a 1D porous structure has been developed as a cathode material. The tube-like 1D structure consists of inter-linked, multi-facet nanoparticles of approximately 100-500 nm in diameter. The microscopically porous structure originates from the honeycomb-shaped precursor foaming gel, which serves as a self-template during the stepwise calcination process. The 1D NCM presents specific capacities of 153, 140, 130 and 118 mAh·g-1 at current densities of 0.1C, 0.5C, 1C and 2C, respectively. Subsequently, a novel stepwise crystallization process consisting of a higher crystallization temperature and a longer period for grain growth was employed to prepare single-crystal NCM nanoparticles. The modified sol-gel process followed by the optimized crystallization process results in significant improvements in the chemical and physical characteristics of the NCM particles. These include a fully developed single-crystal NCM with uniform composition and a porous NCM architecture with a reduced degree of fusion and a large specific surface area. The NCM cathode material with these structural modifications in turn presents significantly enhanced specific capacities of 173.9, 166.9, 158.3 and 142.3 mAh·g-1 at 0.1C, 0.5C, 1C and 2C, respectively. Carbon nanotubes (CNTs) are used to improve the relatively low power capability and poor cyclic stability of NCM caused by its poor electrical conductivity. The NCM/CNT nanocomposite cathodes are prepared through simple mixing of the two component materials followed by a thermal treatment. The CNTs were functionalized to obtain uniformly dispersed MWCNTs in the NCM matrix.
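The rate capability implied by the quoted capacities is easy to compare directly; the simple 2C/0.1C retention ratio below is an illustrative metric computed from the reported values, not one used in the text:

```python
retention_porous_1d = 118 / 153           # 1D porous NCM: capacity at 2C vs 0.1C
retention_single_crystal = 142.3 / 173.9  # stepwise-crystallized NCM
```

That is, about 77% versus 82% capacity retention at 2C, consistent with the claimed improvement from the stepwise crystallization.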

  4. High Performance MG-System Alloys For Weight Saving Applications: First Year Results From The Green Metallurgy EU Project

    Science.gov (United States)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Hofer, Markus; Kim, Shae K.

    The GREEN METALLURGY Project, a LIFE+ project co-financed by the EU Commission, has just concluded its first year. The Project seeks to set up manufacturing processes at a pre-industrial scale for nanostructured high-performance Mg-Zn(Y) magnesium alloys. The Project's goal is the reduction of the specific energy consumed and the overall carbon footprint produced in the cradle-to-exit-gate phases. Preliminary results addressed the potential of the upstream manufacturing process pathway. Two Mg-Zn(Y) system alloys have been produced from rapidly solidified powders and directly extruded to 100% densification. Examination of the mechanical properties showed that such materials exhibit strength and elongation comparable to several high-performing aluminum alloys: average UTS values of 390 MPa and 440 MPa for the two system alloys, with elongations of 10% and 15%. These results, together with the targeted low environmental impact, make these novel Mg alloys competitive as lightweight high-performance materials for automotive components.

  5. Multi-Core Technology for Fault-Tolerant High-Performance Spacecraft Computer Systems

    Science.gov (United States)

    Behr, Peter M.; Haulsen, Ivo; Van Kampenhout, J. Reinier; Pletner, Samuel

    2012-08-01

    The current architectural trends in the field of multi-core processors can provide an enormous increase in processing power by exploiting the parallelism available in many applications. In particular, their high energy efficiency makes it clear that multi-core processor-based systems will also be used in future space missions. In this paper we present the system architecture of a powerful optical sensor system based on the eight-core P4080 multi-core processor from Freescale. The fault-tolerant structure and the highly effective FDIR concepts implemented at different hardware and software levels of the system are described in detail. The space application scenario, and thus the main requirements for the sensor system, have been defined by a complex tracking-sensor application for autonomous landing or docking manoeuvres.

  6. A Novel and High Performance System for Enhancing Speech in Helmet Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a highly innovative system for enhancing speech in helmet. First, we propose to apply a circular array with 8 microphones that are inside the helmet. In...

  7. Analytical design of a high performance stability and control augmentation system for a hingeless rotor helicopter

    Science.gov (United States)

    Miyajima, K.

    1978-01-01

    A stability and control augmentation system (SCAS) was designed based on a set of comprehensive performance criteria. Linear optimal control theory was applied to determine appropriate feedback gains for the stability augmentation system (SAS). The helicopter was represented by six-degree-of-freedom rigid-body equations of motion, and constant factors were used as weightings for state and control variables. The ratio of these factors was employed as a parameter for SAS analysis, and values of the feedback gains were selected on this basis to satisfy three of the performance criteria for full and partial state feedback systems. A least-squares design method was then applied to determine control augmentation system (CAS) cross-feed gains to satisfy the remaining seven performance criteria. The SCAS gains were then evaluated using nine-degree-of-freedom equations that include flapping motion, and conclusions were drawn concerning the necessity of including the pitch-regressing and roll-regressing modes in SCAS analyses.
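The linear-optimal-control step described above, choosing SAS feedback gains by weighting state against control effort, can be sketched with a toy model; the matrices below are hypothetical placeholders, not the paper's six-degree-of-freedom helicopter equations of motion:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state linear model (hypothetical numbers standing in for the
# helicopter's rigid-body dynamics).
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# Constant state/control weightings; the paper treats the ratio of
# these factors as the tuning parameter for the SAS gains.
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop dynamics A - B K are guaranteed stable by LQR theory.
eigs = np.linalg.eigvals(A - B @ K)
print(K, eigs.real)
```

Sweeping the Q/R ratio, as the paper does, trades control effort against response speed without redesigning the controller structure.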

  8. Next Generation Advanced Binder Chemistries for High Performance, Environmentally Durable Thermal Control Material Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This innovative SBIR Phase II proposal will develop next generation products for Thermal Control Material Systems (TCMS) and adhesives based on the next generation...

  9. A Low-Cost, High-Performance System for Fluorescence Lateral Flow Assays

    OpenAIRE

    Lee, Linda G.; Nordman, Eric S.; Johnson, Martin D.; Oldham, Mark F.

    2013-01-01

    We demonstrate a fluorescence lateral flow system that has excellent sensitivity and wide dynamic range. The illumination system utilizes an LED, plastic lenses and plastic and colored glass filters for the excitation and emission light. Images are collected on an iPhone 4. Several fluorescent dyes with long Stokes shifts were evaluated for their signal and nonspecific binding in lateral flow. A wide range of values for the ratio of signal to nonspecific binding was found, from 50 for R-phyco...

  10. Boost Converter Fed High Performance BLDC Drive for Solar PV Array Powered Air Cooling System

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2017-01-01

    Full Text Available This paper proposes the utilization of a DC-DC boost converter as a mediator between a Solar Photovoltaic (SPV) array and the Voltage Source Inverters (VSI) in an SPV-array-powered air cooling system to attain maximum efficiency. The boost converter, compared with the various common DC-DC converters, offers many advantages in SPV-based applications. Further, two Brushless DC (BLDC) motors are employed in the proposed air cooling system: one to run the centrifugal water pump and the other to run a fan-blower. Employing a BLDC motor is found to be the best option because of its high efficiency, reliability, and good performance over a wide range of speeds. The air cooling system is developed and simulated in the MATLAB/Simulink environment considering steady-state variation in the solar irradiance. Further, the efficiency of the BLDC drive system is compared with a conventional Permanent Magnet DC (PMDC) motor drive system, and the simulation results show that the proposed system performs better.
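The voltage step-up that makes the boost converter a useful mediator between the SPV array and the VSI follows the textbook continuous-conduction relation Vout = Vin/(1 − D); a minimal sketch, assuming an ideal lossless converter and illustrative voltages (not the paper's system ratings):

```python
def boost_vout(vin, duty):
    """Ideal (lossless, continuous-conduction) boost converter output
    voltage as a function of input voltage and switch duty cycle D."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return vin / (1.0 - duty)

# Example: stepping a nominal 48 V array voltage up for the VSI DC link.
print(boost_vout(48.0, 0.5))  # 96.0
```

Because the gain rises with duty cycle, the converter's duty command is also the natural handle for maximum-power-point control of the array.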

  11. Study toward high-performance thermally driven air-conditioning systems

    Science.gov (United States)

    Miyazaki, Takahiko; Miyawaki, Jin; Ohba, Tomonori; Yoon, Seong-Ho; Saha, Bidyut Baran; Koyama, Shigeru

    2017-01-01

    The adsorption heat pump is a technology for cooling and heating that uses hot water as the driving heat source. It can contribute substantially to energy savings when driven by solar thermal energy or waste heat. Such systems are available in the market worldwide, and there are many examples of application to heat recovery in factories and to solar cooling systems. In present systems, silica gel and zeolite are popular adsorbents in combination with a water refrigerant. Our study focused on the activated carbon-ethanol pair for adsorption cooling systems because of its potential to compete with conventional systems in terms of the coefficient of performance. In addition, the activated carbon-ethanol pair can generally produce a larger cooling effect per unit adsorbent mass per adsorption-desorption cycle than the conventional pairs. After evaluating the potential of a commercially available activated carbon with the highest available specific surface area, we developed a new activated carbon with optimum pore characteristics for solar- or waste-heat-driven cooling systems. In this paper, a comparison of refrigerants for adsorption heat pump applications is presented, and a newly developed activated carbon for the ethanol adsorption heat pump is introduced.

  12. High performance in low-flow solar domestic hot water systems

    Energy Technology Data Exchange (ETDEWEB)

    Dayan, M.

    1997-12-31

    Low-flow solar hot water heating systems employ flow rates on the order of 1/5 to 1/10 of the conventional flow. Low-flow systems are of interest because the reduced flow rate allows smaller-diameter tubing, which is less costly to install. Further, low-flow systems result in increased tank stratification. Lower collector inlet temperatures are achieved through stratification, and the useful energy produced by the collector is increased. The disadvantage of low-flow systems is that the collector heat removal factor decreases with decreasing flow rate. Many solar domestic hot water systems require an auxiliary electric source to operate a pump that circulates fluid through the solar collector. A photovoltaic-driven pump can be used to replace the standard electrical pump. PV-driven pumps provide an ideal means of controlling the flow rate, as the pump circulates fluid only when there is sufficient radiation. Peak performance was always found to occur when the heat-exchanger tank-side flow rate was approximately equal to the average load flow rate. For low collector-side flow rates, a small deviation from the optimum flow rate will dramatically affect system performance.
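The trade-off described above, the collector heat removal factor falling as flow decreases, follows from the standard Hottel-Whillier relation F_R = (ṁc_p/(A_c U_L))·(1 − exp(−A_c U_L F′/(ṁc_p))); a sketch with illustrative collector parameters, not the thesis's values:

```python
import math

def heat_removal_factor(m_dot, cp=4186.0, area=2.0, u_loss=4.0, f_prime=0.95):
    """Collector heat removal factor F_R from the Hottel-Whillier relation.

    m_dot   -- collector flow rate, kg/s
    cp      -- fluid specific heat, J/(kg K)
    area    -- collector area, m^2 (illustrative)
    u_loss  -- overall loss coefficient, W/(m^2 K) (illustrative)
    f_prime -- collector efficiency factor F' (illustrative)
    """
    cap_rate = m_dot * cp                       # capacitance rate, W/K
    x = area * u_loss * f_prime / cap_rate
    return cap_rate / (area * u_loss) * (1.0 - math.exp(-x))

conventional = 0.02            # kg/s, a typical conventional flow
low_flow = conventional / 10.0 # the 1/10 low-flow case
print(heat_removal_factor(conventional), heat_removal_factor(low_flow))
```

With these numbers F_R drops from roughly 0.9 to roughly 0.6 at one-tenth flow, which is the penalty the increased tank stratification must outweigh.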

  13. An exploratory study of the effects of spatial working-memory load on prefrontal activation in low- and high-performing elderly

    Directory of Open Access Journals (Sweden)

    Anouk eVermeij

    2014-11-01

    Full Text Available Older adults show more bilateral prefrontal activation during cognitive performance than younger adults, who typically show unilateral activation. This over-recruitment has been interpreted as compensation for declining structure and function of the brain. Here we examined how the relationship between behavioral performance and prefrontal activation is modulated by different levels of working-memory load. Eighteen healthy older adults (70.8 ± 5.0 years; MMSE 29.3 ± 0.9) performed a spatial working-memory task (n-back). Oxygenated ([O2Hb]) and deoxygenated ([HHb]) hemoglobin concentration changes were registered by two functional Near-Infrared Spectroscopy (fNIRS) channels located over the left and right prefrontal cortex. Increased working-memory load resulted in worse performance compared to the control condition. [O2Hb] increased with rising working-memory load in both fNIRS channels. Based on the performance in the high working-memory load condition, the group was divided into low and high performers. A significant interaction effect of performance level and hemisphere on [O2Hb] increase was found, indicating that high performers were better able to keep the right prefrontal cortex engaged under high cognitive demand. Furthermore, in the low performers group, individuals with a larger decline in task performance from the control to the high working-memory load condition had a larger bilateral increase of [O2Hb]. The high performers did not show a correlation between performance decline and working-memory load related prefrontal activation changes. Thus, additional bilateral prefrontal activation in low performers did not necessarily result in better cognitive performance. Our study showed that bilateral prefrontal activation may not always be successfully compensatory. Individual behavioral performance should be taken into account to be able to distinguish successful and unsuccessful compensation or declined neural efficiency.

  14. Start-up and safety systems of a high performance light water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Schlagenhaufer, Marc; Straflinger, Joerg; Schulenberg, Thomas [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany). Inst. for Nuclear and Energy Technologies; Bittermann, Dietmar [AREVA NP GmbH, Erlangen (Germany). NEP-G Process

    2010-05-15

    The HPLWR steam cycle is analysed with the commercial system code APROS. A combined shut-down and start-up plant control system is designed, which is operated below 50% load under constant pressure with the turbine tripped. An additional control system is implemented, which enables constant-pressure operation and allows preheating of the feedwater as long as possible to avoid severe material temperature changes in thick-walled components. The same shut-down system is used in the opposite direction to start up the plant at constant pressure. In a future detailed design, the reactivity feedback should be included to allow realistic tuning of the control parameters. The safety systems of the HPLWR, which maintain the core coolant flow rate, are presented. The ADS, LPCI and HPCI are implemented in the HPLWR steam cycle modelled in APROS, and are currently investigated with respect to the needed coolant injection rates at low and high pressure, ADS flow cross sections, ADS actuation pressures and safety response times. (orig.)

  15. Structural integrity and damage assessment of high performance arresting cable systems using an embedded distributed fiber optic sensor (EDIFOS) system

    Science.gov (United States)

    Mendoza, Edgar A.; Kempen, Cornelia; Sun, Sunjian; Esterkin, Yan; Prohaska, John; Bentley, Doug; Glasgow, Andy; Campbell, Richard

    2010-04-01

    Redondo Optics, in collaboration with the Cortland Cable Company, TMT Laboratories, and Applied Fiber under a US Navy SBIR project, is developing an embedded distributed fiber optic sensor (EDIFOS™) system for real-time structural health monitoring, damage assessment, and lifetime prediction of next-generation synthetic-material arresting gear cables. The EDIFOS™ system represents a new, highly robust and reliable technology that can be used for the structural damage assessment of critical cable infrastructures. The Navy is currently investigating the use of new, all-synthetic-material arresting cables. The arresting cable is one of the most stressed components in the entire arresting gear landing system. Synthetic rope materials offer higher performance in terms of strength-to-weight characteristics, which improves the arresting gear engine's performance, resulting in reduced wind-over-deck requirements, higher aircraft bring-back-weight capability, simplified operation, maintenance and supportability, and reduced life-cycle costs. While employing synthetic cables offers many advantages for the Navy's future needs, the unknown failure modes of these cables remain a high technical risk. For these reasons, Redondo Optics is investigating the use of embedded fiber optic sensors within the synthetic arresting cables to provide real-time structural assessment of the cable state, and to inform the operator when a particular cable has suffered impact damage, is near failure, or is approaching the limit of its service lifetime. To date, ROI and its collaborators have developed a technique for embedding multiple sensor fibers within the strands of high-performance synthetic-material cables, and have used the embedded fiber sensors to monitor the structural integrity of the cable structures during tensile and compressive loads exceeding 175,000 lbf without any damage to the cable structure or the embedded fiber sensors.

  16. Building America Best Practices Series, Volume 6: High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Baechler, Michael C.; Gilbride, Theresa L.; Ruiz, Kathleen A.; Steward, Heidi E.; Love, Pat M.

    2007-06-04

    This guide was written by PNNL for the US Department of Energy's Building America program to provide information for residential production builders interested in building near-zero-energy homes. The guide provides in-depth descriptions of various roof-top photovoltaic power generating systems for homes, as well as extensive information on various designs of solar thermal water heating systems. It also gives construction company owners and managers an understanding of how solar technologies can be added to their homes in a way that is cost effective, practical, and marketable. Twelve case studies provide examples of production builders across the United States who are building energy-efficient homes with photovoltaic or solar water heating systems.

  17. A high performance GPU implementation of Surface Energy Balance System (SEBS) based on CUDA-C

    NARCIS (Netherlands)

    Abouali, Mohammad; Timmermans, J.; Castillo, Jose E.; Su, Zhongbo

    2013-01-01

    This paper introduces a new implementation of the Surface Energy Balance System (SEBS) algorithm harnessing the many cores available on Graphics Processing Units (GPUs). This new implementation uses Compute Unified Device Architecture C (CUDA-C) programming model and is designed to be executed on a

  18. Building High-Performing and Improving Education Systems: Quality Assurance and Accountability. Review

    Science.gov (United States)

    Slater, Liz

    2013-01-01

    Monitoring, evaluation, and quality assurance in their various forms are seen as being one of the foundation stones of high-quality education systems. De Grauwe, writing about "school supervision" in four African countries in 2001, linked the decline in the quality of basic education to the cut in resources for supervision and support.…

  19. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  20. Building-Wide, Adaptive Energy Management Systems for High-Performance Buildings: Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Zavala, Victor M. [Argonne National Lab. (ANL), Argonne, IL (United States). Mathematics and Computer Science

    2016-10-27

    Development and field demonstration of the minimum ratio policy for occupancy-driven, predictive control of outdoor air ventilation. Technology transfer of Argonne's methods for occupancy estimation and forecasting and for M&V to BuildingIQ for their deployment. Selection of CO2 sensing as the currently best-available technology for occupancy-driven controls. Accelerated restart capability for the commercial BuildingIQ system using horizon-shifting strategies applied to receding-horizon optimal control problems. Empirical evidence of 30% chilled water energy savings and 22% total HVAC energy savings achievable with the BuildingIQ system operating in the APS Office Building on-site at Argonne.

  1. Whisker: a client-server high-performance multimedia research control system.

    Science.gov (United States)

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It provides a way to access this hardware for client programs, communicating with them via a simple text-based network protocol based on the standard Internet protocol. Clients to implement behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. This system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described.

  2. Research, Development and Application of High Performance Earthquake Resistant Precast System as Green Construction in Indonesia

    OpenAIRE

    Nurjaman Hari; Hariandja Binsar; Suprapto Gambiro; Faizal Lutfi; Sitepu Haerul

    2017-01-01

    Sustainable construction has emerged in the world construction industry as a response to the climate change issue. The building construction stage is one stage of sustainable development, and a construction concept that conforms to it is referred to as green construction. Precast concrete construction is a construction system that meets green construction criteria, because it applies materials and construction methods that optimize energy consumption and minimize environmental impact during ...

  3. Resource-Efficient Data-Intensive System Designs for High Performance and Capacity

    Science.gov (United States)

    2015-09-01

    …hardware to deliver client requests directly to appropriate server cores for key-value processing. MICA uses different key-value data structures that are… …system architects should keep the total cost, including the equipment, operation, and risk management cost, as low as possible while satisfying other… …GET(key) (retrieve the value associated with the key), to clients both locally and remotely. Key-value stores have become important in diverse areas…

  4. Management and non-supervisory perceptions surrounding the implementation and significance of high-performance work practices in a nuclear power plant

    Science.gov (United States)

    Ashbridge, Gayle Ann

    Change management has become an imperative for organizations as they move into the 21st century; up to 75 percent of change initiatives fail. Nuclear power plants face the same challenges as industrial firms, with the added challenge of deregulation, and restructuring the electric utility industry has raised a number of complex issues. Under traditional cost-of-service regulation, electric utilities were able to pass on their costs to consumers, who absorbed them. In the new competitive environment, customers will choose their suppliers based on the most competitive price. The purpose of this study is to determine the degree of congruence between non-supervisory and supervisory personnel regarding the perceived implementation of high-performance workplace practices at a nuclear power plant. This study used as its foundation the practices identified in the Road to High Performance Workplaces: A Guide to Better Jobs and Better Business Results by the U.S. Department of Labor's Office of the American Workplace (1994). The population for this study consisted of organizational members at one nuclear power plant. Over 300 individuals completed surveys on high performance workplace practices. Two surveys were administered, one to non-supervisory personnel and one to first-line supervisors and above. The determination of implementation levels was accomplished through descriptive statistical analysis. Results of the study revealed 32 areas of noncongruence between non-supervisory and supervisory personnel in regard to the perceived implementation level of the high performance workplace practices. Factor analysis further revealed that the order in which the respondents place emphasis on the variables varies between the two groups. This study provides recommendations that may improve the nuclear power plant's alignment of activities. Recommendations are also provided for additional research on high-performance work practices.

  5. High performance 3-coil wireless power transfer system for the 512-electrode epiretinal prosthesis.

    Science.gov (United States)

    Zhao, Yu; Nandra, Mandheerej; Yu, Chia-Chen; Tai, Yu-chong

    2012-01-01

    The next-generation retinal prostheses feature high image resolution and chronic implantation. These features demand wireless, efficient delivery of power as high as 100 mW. A common solution is the 2-coil inductive power link used by current retinal prostheses. This power link tends to include a larger extraocular receiver coil coupled to the external transmitter coil, with the receiver coil connected to the intraocular electrodes through a trans-sclera trans-choroid cable. In long-term implantation of the device, the cable may cause hypotony (low intraocular pressure) and infection. However, when a 2-coil system is constructed from a small intraocular receiver coil, the efficiency drops drastically, which may induce excessive heat dissipation and electromagnetic field exposure. Our previous 2-coil system achieved only 7% power transfer. This paper presents a fully intraocular and highly efficient wireless power transfer system that introduces another inductive coupling link to bypass the trans-sclera trans-choroid cable. With the specific equivalent load of our customized 512-electrode stimulator, the 3-coil inductive link was measured to have an overall power transfer efficiency of around 36% with 1-inch separation in saline. The high efficiency favorably reduces the heat dissipation and electromagnetic field exposure to surrounding human tissues. The effect of eyeball rotation on the power transfer efficiency was investigated as well: the efficiency still maintains 14.7% with left and right deflections of 30 degrees during normal use. The surgical procedure for the coils' implantation into the porcine eye was also demonstrated.
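Why the small intraocular receiver coil hurts a 2-coil link can be seen from the standard kQ-product bound on the efficiency of a single inductive link; the sketch below uses hypothetical coil parameters, not the paper's measured values:

```python
import math

def eta_max(k, q1, q2):
    """Maximum achievable efficiency of one inductive link with coupling
    coefficient k and coil quality factors q1, q2 (standard kQ formula):
    eta = u / (1 + sqrt(1 + u))^2, where u = k^2 * q1 * q2."""
    u = k * k * q1 * q2
    return u / (1.0 + math.sqrt(1.0 + u)) ** 2

# Illustrative (hypothetical) stages of a 3-coil chain: a weakly coupled
# external-to-eye stage followed by a tightly coupled intraocular stage.
stage1 = eta_max(k=0.05, q1=150, q2=100)
stage2 = eta_max(k=0.4, q1=100, q2=30)
print(stage1, stage2, stage1 * stage2)  # chain efficiency = stage product
```

A tiny receiver coil drives k and Q down, collapsing the single-link bound; inserting a third coil splits the path into two stages whose kQ products can each be kept high, which is the design idea behind the paper's 3-coil link.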

  6. Design of high performance mechatronics high-tech functionality by multidisciplinary system integration

    CERN Document Server

    Munnig Schmidt, R; Rankers, A

    2014-01-01

    Since they entered our world around the middle of the 20th century, the application of mechatronics has enhanced our lives with functionality based on the integration of electronics, control systems and electric drives.This book deals with the special class of mechatronics that has enabled the exceptional levels of accuracy and speed of high-tech equipment applied in the semiconductor industry, realising the continuous shrink in detailing of micro-electronics and MEMS.As well as the more frequently presented standard subjects of dynamics, motion control, electronics and electromechanics, this

  7. High-performance fault-tolerant VLSI systems using micro rollback

    Science.gov (United States)

    Tamir, Yuval; Tremblay, Marc

    1990-01-01

    A technique called micro rollback, which allows most of the performance penalty for concurrent error detection to be eliminated, is presented. Detection is performed in parallel with the transmission of information between modules, thus removing the delay for detection from the critical path. Erroneous information may thus reach its destination module several clock cycles before an error indication. Operations performed on this erroneous information are undone using a hardware mechanism for fast rollback of a few cycles. The implementation of a VLSI processor capable of micro rollback is discussed, as well as several critical issues related to its use in a complete system.
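The rollback mechanism can be sketched as a short per-module checkpoint history; this toy software model (not the paper's VLSI hardware) shows an error indication that arrives two cycles late being undone:

```python
from collections import deque

class MicroRollbackModule:
    """Toy model of micro rollback: checkpoints of the last `depth`
    cycles let a module undo work when an error indication arrives a
    few clock cycles after the erroneous input was consumed."""

    def __init__(self, depth=4):
        self.state = 0
        self.history = deque(maxlen=depth)  # per-cycle checkpoints

    def step(self, value):
        self.history.append(self.state)     # checkpoint before update
        self.state += value                 # the cycle's "computation"

    def rollback(self, cycles):
        # Discard the newest checkpoints and restore the state that
        # existed `cycles` updates ago, ready for replay.
        for _ in range(cycles - 1):
            self.history.pop()
        self.state = self.history.pop()

m = MicroRollbackModule()
for v in (1, 2, 3, 4):
    m.step(v)          # state becomes 1, 3, 6, 10
m.rollback(2)          # error detected 2 cycles late
print(m.state)         # 3
```

Because detection runs off the critical path, the common (error-free) case pays almost nothing; only the rare rollback costs the few replayed cycles.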

  8. High-performance digital triggering system for phase-controlled rectifiers

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, R.E.

    1983-01-01

    The larger power supplies used to power accelerator magnets are most commonly polyphase rectifiers using phase control. While this method is capable of handling impressive amounts of power, it suffers from one serious disadvantage, namely that of subharmonic ripple. Since the stability of the stored beam depends to a considerable extent on the regulation of the current in the bending magnets, subharmonic ripple, especially that of low frequency, can have a detrimental effect. At the NSLS, we have constructed a 12-pulse, phase control system using digital signal processing techniques that essentially eliminates subharmonic ripple.
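Phase control regulates the DC output through the firing angle via the textbook relation Vdc = Vdo·cos α; a minimal sketch, assuming an ideal rectifier and illustrative voltages:

```python
import math

def phase_controlled_vdc(v_do, alpha_deg):
    """Average DC output of an ideal phase-controlled rectifier:
    Vdc = Vdo * cos(alpha), with Vdo the full-conduction output."""
    return v_do * math.cos(math.radians(alpha_deg))

# Equal firing delays on every pulse of a 12-pulse bridge keep the
# ripple at 12x the line frequency; pulse-to-pulse firing-angle jitter
# (typical of analog triggering) is what produces the low-frequency
# subharmonics that precise digital triggering removes.
print(phase_controlled_vdc(100.0, 0.0))   # 100.0
print(phase_controlled_vdc(100.0, 60.0))  # ~50.0
```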

  9. High Performance Fuel Cell and Electrolyzer Membrane Electrode Assemblies (MEAs) for Space Energy Storage Systems

    Science.gov (United States)

    Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.

    2012-01-01

    Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements (0.92 V / 1.44 V at 200 mA/cm2) can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks, and electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection; membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst driven; catalyst selection will be driven by durability requirements. Round-trip efficiency, based on cell performance, is approximately 65%.

  10. The design of high performance mechatronics high-tech functionality by multidisciplinary system integration

    CERN Document Server

    Munnig Schmidt, R; van Eijk, J

    2011-01-01

    Since they entered our world around the middle of the 20th century, the application of mechatronics has enhanced our lives with functionality based on the integration of electronics, control systems and electric drives. This book deals with the special class of mechatronics that has enabled the exceptional levels of accuracy and speed of high-tech equipment applied in the semiconductor industry, realising the continuous shrink in detailing of micro-electronics and MEMS. As well as the more frequently presented standard subjects of dynamics, motion control, electronics and electromechanics, this

  11. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  12. A High-Performance Adaptive Incremental Conductance MPPT Algorithm for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chendi Li

    2016-04-01

    Full Text Available The output characteristics of photovoltaic (PV) arrays vary with the change of environment, and maximum power point (MPP) tracking (MPPT) techniques are thus employed to extract the peak power from PV arrays. Based on the analysis of existing MPPT methods, a novel incremental conductance (INC) MPPT algorithm is proposed with an adaptive variable step size. The proposed algorithm automatically regulates the step size to track the MPP through a step size adjustment coefficient, and a user predefined constant is unnecessary for the convergence of the MPPT method, thus simplifying the design of the PV system. A tuning method of initial step sizes is also presented, which is derived from the approximate linear relationship between the open-circuit voltage and MPP voltage. Compared with the conventional INC method, the proposed method can achieve faster dynamic response and better steady state performance simultaneously under the conditions of extreme irradiance changes. A MATLAB/Simulink model and a 5 kW PV system prototype controlled by a digital signal controller (TMS320F28035) were established. Simulations and experimental results further validate the effectiveness of the proposed method.
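The incremental-conductance test (dI/dV compared against −I/V) with a step size scaled by the observed power change can be sketched as follows; the toy PV curve, the scaling coefficient, and all numbers are illustrative stand-ins, not the paper's 5 kW system model or its exact adjustment rule:

```python
def pv_current(v, isc=8.0, voc=40.0):
    """Toy PV I-V curve (a stand-in for a real array model); its MPP
    sits at v = voc * (1/6)**0.2, roughly 27.9 V here."""
    return isc * (1.0 - (v / voc) ** 5)

def inc_mppt(v=15.0, n_scale=0.5, steps=200):
    """INC tracking with an adaptive step proportional to |dP/dV|,
    in the spirit of the paper's variable-step method."""
    v_prev = v
    i_prev = pv_current(v_prev)
    v = v_prev + 1.0                  # initial perturbation
    for _ in range(steps):
        i = pv_current(v)
        dv, di = v - v_prev, i - i_prev
        # Adaptive step: shrink automatically as dP/dV flattens near the MPP.
        step = n_scale * abs((v * i - v_prev * i_prev) / dv) if dv else 0.1
        v_prev, i_prev = v, i
        if dv == 0:
            if di > 0:
                v += step
            elif di < 0:
                v -= step
        elif di / dv > -i / v:        # left of the MPP: raise the voltage
            v += step
        elif di / dv < -i / v:        # right of the MPP: lower it
            v -= step
    return v

print(round(inc_mppt(), 1))
```

Because the step shrinks with |dP/dV|, the tracker moves fast far from the MPP yet settles tightly around it, the two behaviors the fixed-step INC method has to trade off.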

  13. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran 90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  14. A Universal High-Performance Correlation Analysis Detection Model and Algorithm for Network Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Hongliang Zhu

    2017-01-01

    Full Text Available In the big data era, single detection techniques no longer meet the demands posed by complex network attacks and advanced persistent threats, yet there is no uniform standard for making different correlation analysis detection methods perform efficiently and accurately. In this paper, we put forward a universal correlation analysis detection model and algorithm by introducing a state transition diagram. After analyzing and comparing current correlation detection modes, we formalize the correlation patterns, propose a framework based on data packet timing and behavior qualities, and then design a new universal algorithm to implement the method. Finally, an experiment that sets up a lightweight intrusion detection system using the KDD1999 dataset shows that the correlation detection model and algorithm improve performance and guarantee high detection rates.

  15. TheSNPpit - A High Performance Database System for Managing Large Scale SNP Data.

    Directory of Open Access Journals (Sweden)

    Eildert Groeneveld

    Full Text Available The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set-based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (based on major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as inputs and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits for storing a single SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs.
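The 2-bits-per-SNP figure quoted in this record (4 million SNPs per MB) is easy to verify with a small packing sketch. This is not TheSNPpit's actual storage code, just a hypothetical illustration of the encoding arithmetic: four 2-bit genotype codes fit in each byte.

```python
def pack_genotypes(genos):
    """Pack genotype codes (0/1/2 = allele counts, 3 = no-call)
    at 2 bits per SNP, i.e. 4 SNPs per byte."""
    out = bytearray((len(genos) + 3) // 4)
    for idx, g in enumerate(genos):
        if not 0 <= g <= 3:
            raise ValueError("genotype code must fit in 2 bits")
        out[idx // 4] |= g << (2 * (idx % 4))
    return bytes(out)


def unpack_genotypes(packed, n):
    """Inverse of pack_genotypes for the first n SNPs."""
    return [(packed[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]
```

Packing 40,000 SNPs this way yields exactly 10,000 bytes, which scales to the 4 million SNPs per MB cited in the abstract.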

  16. Chloride Penetration through Cracks in High-Performance Concrete and Surface Treatment System for Crack Healing

    Directory of Open Access Journals (Sweden)

    In-Seok Yoon

    2012-01-01

    Full Text Available For enhancing the service life of concrete structures, it is very important to minimize cracking at the surface. Even if these cracks are very small, the question is to what extent they may jeopardize the durability of these decks. It was proposed that the crack depth corresponding to a critical crack width at the surface is a crucial factor in the durability design of concrete structures. It was necessary to deal with chloride penetration through microcracks as characterized by the mixing features of the concrete. This study examines the effect of high strength concrete and steel fiber reinforcement on chloride penetration through cracks. High strength concrete is regarded as an excellent barrier against chloride penetration; however, the durability performance of cracked high strength concrete deteriorated to nearly that of ordinary cracked concrete. Steel fiber reinforcement is effective in reducing chloride penetration through cracks because it significantly reduces crack depth. Meanwhile, surface treatment systems are applied to the surface of the concrete in order to seal it. The key issue is to what extent a sealing can ensure that chloride-induced corrosion is prevented. As a result, penetrant cannot cure cracks; however, coating and combined treatments can prevent chloride from entering concrete with maximum crack widths of 0.06 mm and 0.08 mm, respectively.

  17. Optimization procedure for algorithms of task scheduling in high performance heterogeneous distributed computing systems

    Directory of Open Access Journals (Sweden)

    Nirmeen A. Bahnasawy

    2011-11-01

    Full Text Available In distributed computing, the schedule by which tasks are assigned to processors is critical to minimizing the execution time of the application. However, the problem of discovering the schedule that gives the minimum execution time is NP-complete. In this paper, a new task scheduling algorithm called Sorted Nodes in Leveled DAG Division (SNLDD) is introduced and developed for heterogeneous distributed computing systems (HeDCSs) with a bounded number of processors. The main principle of the developed algorithm is to divide the Directed Acyclic Graph (DAG) into levels and sort the tasks in each level according to their computation size in descending order. To evaluate the performance of the developed SNLDD algorithm, a comparative study has been conducted between SNLDD and the Longest Dynamic Critical Path (LDCP) algorithm, which is considered the most efficient existing algorithm. According to the comparative results, the developed algorithm provides better performance than the LDCP algorithm in terms of speedup, efficiency, complexity, and quality. Also, a new procedure called the Superior Performance Optimization Procedure (SPOP) has been introduced and implemented in the SNLDD and LDCP algorithms to minimize the idle time of the processors in the system. Again, the SNLDD algorithm outperforms the existing LDCP algorithm after adding the SPOP procedure.
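The core SNLDD idea described here (level the DAG, then sort each level by computation size in descending order before assigning tasks) can be sketched in a few lines. The code below is a simplified, hypothetical rendition: it assumes uniform processors and ignores communication costs, both of which the real HeDCS algorithm must account for.

```python
from collections import defaultdict, deque


def level_schedule(costs, deps, n_procs):
    """Level the DAG, sort each level by cost (descending), and greedily
    assign each task to the processor that frees up earliest, while
    honoring precedence. `costs` maps task -> computation size;
    `deps` is a list of (before, after) edges."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {t: 0 for t in costs}
    for a, b in deps:
        succs[a].append(b)
        preds[b].append(a)
        indeg[b] += 1
    # Level of a task = length of its longest predecessor chain.
    level = {t: 0 for t in costs if indeg[t] == 0}
    q = deque(level)
    while q:
        t = q.popleft()
        for s in succs[t]:
            level[s] = max(level.get(s, 0), level[t] + 1)
            indeg[s] -= 1
            if indeg[s] == 0:
                q.append(s)
    by_level = defaultdict(list)
    for t, lvl in level.items():
        by_level[lvl].append(t)
    proc_free = [0.0] * n_procs
    finish = {}
    for lvl in sorted(by_level):
        # SNLDD-style: largest computation size first within each level.
        for t in sorted(by_level[lvl], key=lambda t: -costs[t]):
            p = min(range(n_procs), key=lambda p: proc_free[p])
            start = max(proc_free[p],
                        max((finish[x] for x in preds[t]), default=0.0))
            finish[t] = start + costs[t]
            proc_free[p] = finish[t]
    return finish, max(finish.values())


# A small diamond DAG a -> {b, c} -> d, scheduled on two processors.
costs = {'a': 2.0, 'b': 3.0, 'c': 1.0, 'd': 2.0}
deps = [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')]
finish, makespan = level_schedule(costs, deps, n_procs=2)
```

On the diamond example, tasks b and c sit in the same level; sorting by size schedules the heavy task b first, and the makespan is 7.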

  18. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Science.gov (United States)

    Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel causes severe intersymbol interference (ISI) over data transmission. The broadband channel model is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012

  19. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2014-01-01

    Full Text Available In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel causes severe intersymbol interference (ISI) over data transmission. The broadband channel model is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods.

  20. Compressive sensing based Bayesian sparse channel estimation for OFDM communication systems: high performance and low complexity.

    Science.gov (United States)

    Gui, Guan; Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel causes severe intersymbol interference (ISI) over data transmission. The broadband channel model is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods.
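The orthogonal matching pursuit (OMP) baseline cited above as a conventional SCE method is compact enough to show directly. The sketch below is a generic real-valued OMP (the papers work with complex channel taps; the real-valued simplification and the toy dimensions are mine): greedily select the dictionary column most correlated with the residual, then re-fit all selected taps by least squares.

```python
import numpy as np


def omp(A, y, k):
    """Greedy OMP: pick the column of A most correlated with the
    residual, then re-fit the selected taps by least squares."""
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x


# A 3-tap sparse "channel" probed by a random training matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -0.8, 0.6]
x_hat = omp(A, A @ x_true, k=3)
```

With noiseless observations and low column coherence, OMP recovers the taps exactly; the Bayesian method proposed in the records above targets precisely the noisy, coherent regimes where this greedy recovery degrades.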

  1. Coal-fired high performance power generating system. Quarterly progress report, October 1--December 31, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-31

    Our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The Cycle Optimization effort under Task 2 outlines the evolution of our designs. The basic combined cycle approach now includes exhaust gas recirculation to quench the flue gas before it enters the convective air heater. By selecting the quench gas from a downstream location it will be clean enough and cool enough (ca. 300 F) to be driven by a commercially available fan and still minimize the volume of the convective air heater. Further modeling studies on the long axial flame, under Task 3, have demonstrated that this configuration is capable of providing the necessary energy flux to the radiant air panels. This flame with its controlled mixing constrains the combustion to take place in a fuel-rich environment, thus minimizing NO{sub x} production. Recent calculations indicate that the NO{sub x} produced is low enough that the SNCR section can further reduce it to within the DOE goal of 0.15 lbs/MBtu of fuel input. Also under Task 3, the air heater design optimization continued.

  2. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [Univ. of Tennessee, Memphis, TN (United States)

    2016-12-01

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. 
The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to

  3. Cosensitized Porphyrin System for High-Performance Solar Cells with TOF-SIMS Analysis.

    Science.gov (United States)

    Wu, Wenjun; Xiang, Huaide; Fan, Wei; Wang, Jinglin; Wang, Haifeng; Hua, Xin; Wang, Zhaohui; Long, Yitao; Tian, He; Zhu, Wei-Hong

    2017-05-17

    To date, development of organic sensitizers has been predominantly focused on light harvesting, highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels, and the electron transferring process. In contrast, their adsorption mode as well as the dynamic loading behavior onto nanoporous TiO2 is rarely considered. Herein, we have employed time-of-flight secondary ion mass spectrometry (TOF-SIMS) to gain insight into the competitive dye adsorption mode and kinetics in the cosensitized porphyrin system. Using the novel porphyrin dye FW-1 and the D-A-π-A featured dye WS-5, the different bond-breaking modes in TOF-SIMS and the dynamic dye-loading amount during the coadsorption process are compared for two different anchoring groups, benzoic acid and cyanoacrylic acid. From the bombardment mode in the TOF-SIMS spectra, we have speculated that the cyano group grafts onto nanoporous TiO2 as tridentate binding for the common anchoring unit of cyanoacrylic acid and confirmed this through extensive first-principles density functional theory calculations anchoring either the carboxyl or cyano group, which show that the cyano group can efficiently participate in the adsorption of the WS-5 molecule onto the TiO2 nanocrystal. The grafting reinforcement interaction between the cyano group and TiO2 in WS-5 explains the rapid adsorption characteristics well. A strong coordinate bond between the lone pair of electrons on the nitrogen or oxygen atom and the Lewis acid sites of TiO2 can increase electron injection efficiencies with respect to those from the bond between the benzoic acid group and the Brønsted acid sites of the TiO2 surface. Upon optimization of the coadsorption process with dye WS-5, the photoelectric conversion efficiency based on porphyrin dye FW-1 is increased from 6.14 to 9.72%. The study of the adsorption dynamics of organic sensitizers with TOF-SIMS analysis might provide a new avenue for improving cosensitized solar cells.

  4. Run-Time Dynamically-Adaptable FPGA-Based Architecture for High-Performance Autonomous Distributed Systems

    OpenAIRE

    Valverde Alcalá, Juan

    2015-01-01

    This doctoral thesis falls within the fields of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as their evolution toward high-resolution processing. The study has ...

  5. Development of 4.6 GHz lower hybrid current drive system for steady state and high performance plasma in EAST

    Energy Technology Data Exchange (ETDEWEB)

    Liu, F.K.; Li, J.G.; Shan, J.F.; Wang, M.; Liu, L.; Zhao, L.M.; Hu, H.C.; Feng, J.Q.; Yang, Y.; Jia, H.; Wang, X.J.; Wu, Z.G.; Ma, W.D.; Huang, Y.Y.; Xu, H.D.; Zhang, J.; Cheng, M.; Xu, L.; Li, M.H.; Li, Y.C.; and others

    2016-12-15

    In order to achieve steady state and high performance plasma in EAST, a new lower hybrid current drive (LHCD) system at a frequency of 4.6 GHz has been built. The system is composed of 24 continuous wave (CW) klystron amplifiers generating 6 MW of CW microwave power, 24 standard rectangular waveguide transmission lines with water-cooling plates, a multi-junction grill composed of 576 active (in groups of 8) and 84 passive sub-waveguides arranged in 12 rows and 6 columns, and four sets of high voltage power supplies. The power and the spectrum of the microwave launched from the antenna can be controlled by the low-power microwave circuits in front of the klystrons. The new LHCD system has been applied to experiments on the EAST tokamak since 2014, and the obtained results suggest that it effectively couples the wave into the plasma and drives plasma current.

  6. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  7. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Science.gov (United States)

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  8. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Directory of Open Access Journals (Sweden)

    David K Brown

    Full Text Available Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  9. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    Science.gov (United States)

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  10. Development of a High-performance Optical System and Fluorescent Converters for High-resolution Neutron Imaging

    Science.gov (United States)

    Sakai, T.; Yasuda, R.; Iikura, H.; Nojima, T.; Matsubayashi, M.

    Two novel devices for use in neutron imaging techniques are introduced. The first is a high-performance optical lens for video camera systems. The lens system has a magnification of 1:1 and an F value of 3. The optical resolution is less than 5 μm. The second device is a high-resolution fluorescent plate that converts neutrons into visible light. The fluorescent converter material consists of a mixture of 6LiF and ZnS(Ag) fine powder, and the thickness of the converter material is as little as 15 μm. The surface of the plate is coated with a 1 μm-thick gadolinium oxide layer. This layer is optically transparent and acts as an electron emitter for neutron detection. Our preliminary results show that the developed optical lens and fluorescent converter plates are very promising for high-resolution neutron imaging.

  11. Relationships of cognitive and metacognitive learning strategies to mathematics achievement in four high-performing East Asian education systems.

    Science.gov (United States)

    Areepattamannil, Shaljan; Caleon, Imelda S

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education systems, memorization strategies were negatively associated with mathematics achievement, whereas control strategies were positively associated with mathematics achievement. The association between elaboration strategies and mathematics achievement, however, was mixed. In Shanghai-China and Korea, elaboration strategies were not associated with mathematics achievement. In Hong Kong-China and Singapore, on the other hand, elaboration strategies were negatively associated with mathematics achievement. Implications of these findings are briefly discussed.

  12. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Volume 1, Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    A major objective of the coal-fired high performance power systems (HIPPS) program is to achieve significant increases in the thermodynamic efficiency of coal use for electric power generation. Through increased efficiency, all airborne emissions can be decreased, including emissions of carbon dioxide. High performance power systems as defined for this program are coal-fired, high efficiency systems where the combustion products from coal do not contact the gas turbine. Typically, this type of system will involve some indirect heating of gas turbine inlet air and then topping combustion with a cleaner fuel. The topping combustion fuel can be natural gas or another relatively clean fuel; fuel gas derived from coal is an acceptable fuel for the topping combustion. The ultimate goal for HIPPS is to have a system that has 95 percent of its heat input from coal. Interim systems that have at least 65 percent heat input from coal are acceptable, but these systems are required to have a clear development path to a system that is 95 percent coal-fired. A three-phase program has been planned for the development of HIPPS. Phase 1, reported herein, includes the development of a conceptual design for a commercial plant. Technical and economic feasibility have been analyzed for this plant. Preliminary R&D on some aspects of the system was also done in Phase 1, and a Research, Development and Test plan was developed for Phase 2. Work in Phase 2 includes the testing and analysis that is required to develop the technology base for a prototype plant. This work includes pilot plant testing at a scale of around 50 MMBtu/hr heat input. The culmination of the Phase 2 effort will be a site-specific design and test plan for a prototype plant. Phase 3 is the construction and testing of this plant.

  13. High-Performance Networking

    CERN Document Server

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into today's well-known standard computer network communication. It will be followed by a far more technical part that uses the high performance computer network standards of the 1990s, with 1 Gbit/s systems, as an introduction for an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. If necessary for a good understanding, some sidesteps will be included to explain important protocols as well as necessary details of the concerned Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  14. A MapReduce Based High Performance Neural Network in Enabling Fast Stability Assessment of Power Systems

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-01-01

    Full Text Available Transient stability assessment plays a vital role in modern power systems. For this purpose, machine learning techniques have been widely employed to find critical conditions and recognize transient behaviors based on massive data analysis. However, the ever increasing volume of data generated by power systems poses a number of challenges to traditional machine learning techniques, which are computationally intensive when run on standalone computers. This paper presents a MapReduce based high performance neural network to enable fast stability assessment of power systems. Hadoop, an open-source implementation of the MapReduce model, is first employed to parallelize the neural network. The parallel neural network is further enhanced with HaLoop to reduce the computation overhead incurred in the iteration process of the neural network. In addition, ensemble techniques are employed to accommodate the accuracy loss of the parallelized neural network in classification. The parallelized neural network is evaluated with both the IEEE 68-node system and a real power system in terms of computation speedup and stability assessment.
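
    The map/reduce split described above, training partial models on data partitions and then combining them as an ensemble, can be sketched in plain Python (a toy stand-in: a nearest-centroid classifier instead of a neural network, and in-process "mappers" instead of Hadoop/HaLoop; all names are illustrative):

```python
import random
from statistics import mean

def train_partition(partition):
    """'Map' step: fit a nearest-centroid classifier on one data partition."""
    c0 = mean(x for x, y in partition if y == 0)
    c1 = mean(x for x, y in partition if y == 1)
    return (c0, c1)

def predict_ensemble(models, x):
    """'Reduce' step: majority vote over the per-partition models."""
    votes = sum(1 if abs(x - c1) < abs(x - c0) else 0 for c0, c1 in models)
    return 1 if votes * 2 > len(models) else 0

random.seed(0)
# Synthetic stability data: feature clustered near 0 (stable) or 1 (unstable).
data = [(random.gauss(0, 0.1), 0) for _ in range(300)] + \
       [(random.gauss(1, 0.1), 1) for _ in range(300)]
random.shuffle(data)
partitions = [data[i::3] for i in range(3)]        # simulate 3 mappers
models = [train_partition(p) for p in partitions]  # map phase
assert predict_ensemble(models, 0.05) == 0         # reduce phase: vote
assert predict_ensemble(models, 0.95) == 1
```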

  15. Fuzzy Adaptive Repetitive Control for Periodic Disturbance with Its Application to High Performance Permanent Magnet Synchronous Motor Speed Servo Systems

    Directory of Open Access Journals (Sweden)

    Junxiao Wang

    2016-09-01

    Full Text Available For reducing the steady-state speed ripple, especially in high-performance speed servo system applications, steady-state precision is increasingly important for real servo systems. This paper investigates the steady-state speed ripple periodic disturbance problem for a permanent magnet synchronous motor (PMSM) servo system; a fuzzy adaptive repetitive controller is designed in the speed loop, based on repetitive control and fuzzy information theory, to reduce periodic disturbance. Firstly, the various sources of the PMSM speed ripple problem are described and analyzed. Then, the mathematical model of the PMSM is given. Subsequently, a fuzzy adaptive repetitive controller based on repetitive control and fuzzy logic control is designed for the PMSM speed servo system, and the system stability analysis is deduced. Finally, the simulation and experimental implementations are based on MATLAB/Simulink and a Texas Instruments TMS320F2808 DSP (digital signal processor) hardware platform, respectively. Compared to a proportional-integral (PI) controller, simulation and experimental results show that the proposed fuzzy adaptive repetitive controller has better periodic disturbance rejection ability and higher steady-state precision.
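
    The core repetitive-control idea, learning the control input one disturbance period at a time, can be sketched as follows. This is a minimal simulation with an illustrative static plant, not the paper's PMSM model or its fuzzy adaptation layer: since the disturbance repeats with period N, the update u(k+N) = u(k) + kr·e(k) shrinks the periodic error by a factor (1 − kr) every period:

```python
import math

# Illustrative plant: y(k) = u(k) + d(k), where d has known period N
# (e.g. torque ripple repeating once per electrical revolution).
N = 50          # disturbance period in samples
kr = 0.5        # repetitive-control learning gain, 0 < kr < 1
r = 1.0         # speed reference

u = [0.0] * N   # control history, one disturbance period deep
e_hist = []
for k in range(20 * N):
    d = 0.2 * math.sin(2 * math.pi * k / N)  # periodic speed ripple
    u_k = u[k % N]                           # replay last period's action
    e = r - (u_k + d)                        # tracking error
    u[k % N] = u_k + kr * e                  # learn: u(k+N) = u(k) + kr*e(k)
    e_hist.append(e)

# Per-period error contracts by (1 - kr): the ripple is progressively rejected.
first = max(abs(x) for x in e_hist[:N])
last = max(abs(x) for x in e_hist[-N:])
assert last < 0.01 * first
```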

  16. High-performance Sonitopia (Sonic Utopia): Hyper intelligent Material-based Architectural Systems for Acoustic Energy Harvesting

    Science.gov (United States)

    Heidari, F.; Mahdavinejad, M.

    2017-08-01

    The rate of energy consumption all over the world, based on reliable statistics from international institutions such as the International Energy Agency (IEA), shows a significant increase in energy demand in recent years. Periodically recorded data show a continuously increasing trend in energy consumption, especially in developed countries as well as recently emerged developing economies such as China and India. Air pollution and water contamination, as results of high consumption of fossil energy resources, may be considered menaces to civic ideals such as livability, conviviality and people-oriented cities. On the other hand, automobile dependency, car-oriented design and other noisy activities in urban spaces are considered threats to urban life. Thus contemporary urban design and planning concentrate on rethinking the ecology of sound, reorganizing the soundscape of neighborhoods, and redesigning the sonic order of urban space. It seems that contemporary architecture and planning trends, through soundscape mapping, look for sonitopia (sonic + utopia). This paper proposes interactive hyper-intelligent material-based architectural systems for acoustic energy harvesting. The proposed architectural design system may result in high-performance architecture and planning strategies for future cities. The ultimate aim of the research is to develop a comprehensive system for acoustic energy harvesting that covers the aim of noise reduction as well as being in harmony with architectural design. The research methodology is based on a literature review as well as experimental and quasi-experimental strategies according to the paradigm of designerly ways of doing and knowing. While architectural design has a solution-focused essence in the problem-solving process, the proposed systems had better be hyper-intelligent rather than predefined procedures.
    Therefore, the steps of the inference mechanism of the research include: 1- understanding sonic energy and noise potentials as energy

  17. High performance AC drives

    CERN Document Server

    Ahmad, Mukhtar

    2010-01-01

    This book presents a comprehensive view of high performance AC drives. It may be considered both a textbook for graduate students and an up-to-date monograph. It may also be used by R&D professionals involved in improving the performance of drives in industry. The book will also benefit researchers pursuing work on multiphase drives as well as sensorless and direct torque control of electric drives, since up-to-date references in these topics are provided. It also provides a few examples of modeling, analysis and control of electric drives using MATLAB/SIMULINK.

  18. High-Performance Seizure Detection System Using a Wavelet-Approximate Entropy-fSVM Cascade With Clinical Validation.

    Science.gov (United States)

    Shen, Chia-Ping; Chen, Chih-Chuan; Hsieh, Sheau-Ling; Chen, Wei-Hsin; Chen, Jia-Ming; Chen, Chih-Min; Lai, Feipei; Chiu, Ming-Jang

    2013-10-01

    The classification of electroencephalography (EEG) signals is one of the most important methods for seizure detection. However, verification of an atypical epileptic seizure often can only be done through long-term EEG monitoring for 24 hours or longer. Hence, automatic EEG signal analysis for clinical screening is necessary for the diagnosis of epilepsy. We propose an EEG analysis system for seizure detection based on a cascade of wavelet-approximate entropy for feature extraction, Fisher scores for adaptive feature selection, and a support vector machine for classification. Performance of the system was tested on open-source data, and the overall accuracy reached 99.97%. We further tested the performance of the system on clinical EEG obtained from a clinical EEG laboratory and on bedside EEG recordings. The results showed an overall accuracy of 98.73% for routine EEG and 94.32% for bedside EEG, which verifies the high performance and usefulness of such a cascade system for seizure detection. Moreover, the prediction model, trained on routine EEG, can be successfully generalized to bedside EEG of independent patients.
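
    Approximate entropy, the signal-regularity feature at the heart of the cascade, can be computed directly from Pincus' definition ApEn = Φ^m(r) − Φ^(m+1)(r). A self-contained sketch, with template length m and tolerance r chosen purely for illustration:

```python
import math
import random

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of sequence x (Pincus' definition)."""
    n = len(x)
    def phi(m):
        # Count template matches within tolerance r (Chebyshev distance).
        templates = [x[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for t1 in templates:
            c = sum(1 for t2 in templates
                    if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            total += math.log(c / len(templates))
        return total / len(templates)
    return phi(m) - phi(m + 1)

random.seed(1)
regular = [math.sin(0.5 * i) for i in range(200)]        # predictable signal
irregular = [random.uniform(-1, 1) for _ in range(200)]  # noisy signal
# A regular signal has lower ApEn than an irregular one.
assert approx_entropy(regular) < approx_entropy(irregular)
```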

  19. High Performance, Low Operating Voltage n-Type Organic Field Effect Transistor Based on Inorganic-Organic Bilayer Dielectric System

    Science.gov (United States)

    Dey, A.; Singh, A.; Kalita, A.; Das, D.; Iyer, P. K.

    2016-04-01

    The performance of organic field-effect transistors (OFETs) fabricated using the vacuum-deposited n-type conjugated molecule N,N'-dioctadecyl-1,4,5,8-naphthalenetetracarboxylic diimide (NDIOD2) was investigated using single and bilayer dielectric systems on a low-cost glass substrate. The single-layer device structure uses poly(vinyl alcohol) (PVA) as the dielectric material, whereas the bilayer systems comprise two different device configurations, aluminum oxide/poly(vinyl alcohol) (Al2O3/PVA) and aluminum oxide/poly(methyl methacrylate) (Al2O3/PMMA), in order to reduce the operating voltage and improve device performance. It was observed that the devices with the Al2O3/PMMA bilayer dielectric system and top-contact aluminum electrodes exhibit excellent n-channel behaviour under vacuum compared to the other two structures, with an electron mobility of 0.32 cm2/Vs, threshold voltage ~1.8 V and current on/off ratio ~10^4, operating at a very low voltage (6 V). These devices also demonstrate highly stable electrical behaviour under multiple scans and lower threshold-voltage instability under vacuum even after 7 days, compared to the Al2O3/PVA device structure. This low-operating-voltage, high-performance OTFT device with a bilayer dielectric system is expected to have diverse applications in the next generation of OTFT technologies.

  20. High-performance work systems and creativity implementation : the role of psychological capital and psychological safety

    NARCIS (Netherlands)

    Agarwal, Promila; Farndale, E.

    Unimplemented creative ideas are potentially wasted opportunities for organisations. Although it is largely understood how to encourage creativity among employees, how to ensure this creativity is implemented remains underexplored. The objective of the current study is to identify the underlying

  1. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, T. [Universities Space Research Association, Washington, DC (United States); Messina, P. [Jet Propulsion Lab., Pasadena, CA (United States); Chen, M. [Yale Univ., New Haven, CT (United States)] [and others

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  2. Using sewage sludge pyrolytic gas to modify titanium alloy to obtain high-performance anodes in bio-electrochemical systems

    Science.gov (United States)

    Gu, Yuan; Ying, Kang; Shen, Dongsheng; Huang, Lijie; Ying, Xianbin; Huang, Haoqian; Cheng, Kun; Chen, Jiazheng; Zhou, Yuyang; Chen, Ting; Feng, Huajun

    2017-12-01

    Titanium is under consideration as a potential stable bio-anode because of its high conductivity, suitable mechanical properties, and electrochemical inertness in the operating potential window of bio-electrochemical systems; however, its application is limited by its poor electron-transfer capacity with electroactive bacteria and the weak ability of biofilms to form on its hydrophobic surface. This study reports an effective and low-cost way to convert a hydrophobic titanium alloy surface into a hydrophilic surface that can be used as a bio-electrode with higher electron-transfer rates. Pyrolytic gas of sewage sludge is used to modify the titanium alloy. Current generation, anodic biofilm formation, and surface hydrophobicity are systematically investigated by comparing bare electrodes with three modified electrodes. The maximum current density (15.80 A/m2), achieved using a modified electrode, is 316-fold higher than that of the bare titanium alloy electrode (0.05 A/m2) and exceeds that achieved by titanium alloy electrodes modified by other methods (12.70 A/m2). The pyrolytic gas-modified titanium alloy electrode can be used as a high-performance and scalable bio-anode for bio-electrochemical systems because of its high electron-transfer rates, hydrophilic nature, and ability to achieve high current density.

  3. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    Science.gov (United States)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon based manufacturer of I/O computers. APTEC's work in the context of high density storage media is on programs requiring real-time data capture with low latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope-Data Archival and Distribution System. This is an existing Loral AeroSys designed system, which utilizes an APTEC I/O computer. The key attributes of a system architecture that is suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software designed for high bandwidth and low latency). APTEC's approach is outlined in this document.

  4. A Globally Distributed Grid Monitoring System to Facilitate High-Performance Computing at D0/SAM-Grid

    Energy Technology Data Exchange (ETDEWEB)

    Rana, Abhishek S. [Texas U., Arlington

    2002-01-01

    A grid environment involves large scale sharing of resources that are distributed from a geographical or an administrative perspective. There is a need for systems that enable continuous discovery and monitoring of the components of a grid. In this work, we discuss the development and deployment of a monitoring system that has been designed as a prototype for the D0/SAM-Grid. We have developed a system that uses a layered architecture for information generation and processing, utilizes the various grid middleware tools, and implements Integration and Enquiry Protocols using existing Discovery Protocols to provide a user with a coherent view of all current activity in this grid - in the form of a web portal interface. The prototype system has been deployed for monitoring of 11 sites geographically distributed in 5 countries across 3 continents. This work focuses on the D0/SAM-Grid, and is based on the SAM system developed at Fermilab.

  5. High Performance Nano-Constituent Buffer Layer Thin Films to Enable Low Cost Integrated On-the-Move Communications Systems

    National Research Council Canada - National Science Library

    Cole, M. W; Nothwang, W. D; Hubbard, C; Ngo, E; Hirsch, S

    2004-01-01

    .... Utilizing a coplanar device design we successfully designed, fabricated, characterized, and optimized a high performance Ta2O5 thin film passive buffer layer on Si substrates, which will allow...

  6. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    Science.gov (United States)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream processing system which utilizes server-based NVIDIA Graphical Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. Client-side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new
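
    The idea behind wavelet-based compression in such a pipeline can be illustrated with a one-level Haar transform, where small detail coefficients are dropped before transmission. This is a minimal sketch under that assumption, not NEIS's actual codec:

```python
def haar_forward(x):
    """One level of the Haar wavelet transform: (averages, details)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Reconstruct the signal from averages and (possibly thresholded) details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

field = [10.0, 10.1, 10.2, 10.1, 20.0, 20.2, 20.1, 20.0]  # toy model field
avg, det = haar_forward(field)
det_c = [d if abs(d) > 0.08 else 0.0 for d in det]  # zero small details
recon = haar_inverse(avg, det_c)
err = max(abs(a - b) for a, b in zip(field, recon))
assert err <= 0.08  # reconstruction loss is bounded by the threshold
```

Dropping the near-zero detail coefficients halves the payload for smooth fields while keeping the error below the chosen threshold.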

  7. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied, with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method of eliminating infeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity), and possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy- and cost-efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box-represented performance measure.
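
    The grid-discretization-plus-dynamic-programming scheme can be sketched for a layered gate graph. In this toy instance, Euclidean distance stands in for the simulated traversal time; the paper's algorithm evaluates candidate ski lines by physical simulation:

```python
import math

def min_time_line(layers, cost):
    """layers: list of lists of gate positions per layer; cost(p, q): edge time.
    Returns (minimum total time, gate index chosen in each layer)."""
    best = {(0, i): 0.0 for i in range(len(layers[0]))}
    parent = {}
    for j in range(1, len(layers)):
        for i, v in enumerate(layers[j]):
            # Relax over all gates in the previous layer (DP recurrence).
            cands = [(best[(j - 1, k)] + cost(u, v), k)
                     for k, u in enumerate(layers[j - 1])]
            t, k = min(cands)
            best[(j, i)] = t
            parent[(j, i)] = k
    # Backtrack from the cheapest gate in the last layer.
    j = len(layers) - 1
    i = min(range(len(layers[j])), key=lambda i: best[(j, i)])
    total = best[(j, i)]
    path = [i]
    while j > 0:
        i = parent[(j, i)]
        path.append(i)
        j -= 1
    return total, path[::-1]

# Toy 3-layer course: start gate, two middle gates, finish gate.
layers = [[(0, 0)], [(1, -1), (1, 1)], [(2, 0)]]
t, path = min_time_line(layers, lambda p, q: math.dist(p, q))
assert abs(t - 2 * math.sqrt(2)) < 1e-9  # both middle gates cost the same
```

The piecewise-linear line is recovered by connecting the chosen gate positions layer by layer.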

  8. High Performance Proactive Digital Forensics

    Science.gov (United States)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and to do so continuously (capturing the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  9. TINA, a new fully automated high-performance droplet freezing assay coupled to a customized infrared detection system

    Science.gov (United States)

    Kunert, Anna Theresa; Lamneck, Mark; Gurk, Christian; Helleis, Frank; Klimach, Thomas; Scheel, Jan Frederik; Pöschl, Ulrich; Fröhlich-Nowoisky, Janine

    2017-04-01

    Heterogeneous ice nucleation is frequently investigated by simultaneously cooling a defined number of droplets of equal volume in droplet freezing assays. In 1971, Gabor Vali established the quantitative assessment of ice nuclei active at specific temperatures for many droplet freezing assays. Since then, several instruments have been developed, and various modifications and improvements have been made. However, for quantitative analysis of ice nuclei, currently known droplet freezing assays are still limited by either small droplet numbers, large droplet volumes, inadequate separation of the single droplets (which can result in mutual interference), or imprecise temperature control within the system. Here, we present the Twin Ice Nucleation Assay (TINA), which improves on existing droplet freezing assays in terms of temperature range and statistics. Above all, we developed a distinct detection system for freezing events in droplet freezing assays, in which the temperature gradient of each single droplet is tracked individually by infrared cameras coupled to custom software. In the fully automated setup, ice nucleation can be studied in two independently cooled, customized aluminum blocks run by a high-performance thermostat. We developed a cooling setup that allows both large and tiny temperature changes within a very short period of time, combined with optimal insulation. Hence, measurements can be performed at temperatures down to -55 °C (218 K) and at cooling rates up to 3 K min-1. In addition, TINA provides the analysis of nearly 1000 droplets per run with various droplet volumes between 1 µL and 50 µL. This enables a fast and more precise analysis of biological samples with complex IN composition, as well as better statistics for every sample at the same time.

  10. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Science.gov (United States)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high performance flat DCN employing bufferless and distributed fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer. Numerical investigation of system performance when the port count of the optical switch is scaled to 32x32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port optical switches will also be presented.

  11. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy-to-deploy-and-use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory-to-memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.

  12. [An improvement of the calibration results for grey analytical system in high performance liquid chromatography applying constrained background bilinearization method based on genetic algorithm optimization strategy].

    Science.gov (United States)

    Zhang, Yaxiong; Nie, Xianling

    2017-06-08

    Constrained background bilinearization (CBBL) was applied for multivariate calibration analysis of grey analytical systems in high performance liquid chromatography (HPLC). By including both the concentrations and the retention times of the analytes as variables, the standard CBBL was modified for multivariate calibration of HPLC systems with poor retention precision. The CBBL was optimized globally by a genetic algorithm (GA); that is, both the concentrations and the retention times of the analytes were optimized globally and simultaneously by the GA. The modified CBBL was applied to calibration analysis of both simulated and experimental HPLC systems with poor retention precision. The experimental data were collected from an HPLC separation system for phenolic compounds. The modified CBBL was verified to be useful in avoiding an inherent limitation of the standard CBBL, which may give poor calibration results when the retention precision of the chromatography system is poor. Moreover, the modified CBBL yields not only the concentrations but also the retention times of the analytes, i.e., more useful information about the analytes. Consequently, nearly ideal calibration results were obtained. Compared with the calibration results of the classical rank annihilation factor analysis (RAFA) and residual bilinearization (RBL) methods, the results given by the modified CBBL were also improved significantly for the HPLC systems studied in this work.
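
    The GA-based global optimization can be illustrated with a minimal real-coded genetic algorithm minimizing a toy fit criterion over a single retention-time variable. The objective, bounds, and operator choices here are hypothetical and stand in for the paper's joint encoding of concentrations and retention times:

```python
import random

def ga_minimize(f, bounds, pop_size=30, gens=60, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation (illustrative only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=f)   # tournament selection
            p2 = min(rng.sample(pop, 2), key=f)
            w = rng.random()
            child = w * p1 + (1 - w) * p2         # blend crossover
            if rng.random() < 0.2:                # Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            nxt.append(min(max(child, lo), hi))   # clip to bounds
        pop = nxt
        gen_best = min(pop, key=f)
        if f(gen_best) < f(best):
            best = gen_best
    return best

# Toy fit criterion minimized at a retention time of 2.5 min (hypothetical).
best_t = ga_minimize(lambda t: (t - 2.5) ** 2, (0.0, 10.0))
assert abs(best_t - 2.5) < 0.2
```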

  13. High-performance two-axis gimbal system for free space laser communications onboard unmanned aircraft systems

    Science.gov (United States)

    Locke, Michael; Czarnomski, Mariusz; Qadir, Ashraf; Setness, Brock; Baer, Nicolai; Meyer, Jennifer; Semke, William H.

    2011-03-01

    A custom-designed and manufactured gimbal with a wide field-of-view and fast response time is developed. This enhanced custom design is a 24-volt system with integrated motor controllers and drivers which offers a full 180° field-of-view in both azimuth and elevation; this provides a more continuous tracking capability as well as increased velocities of up to 479° per second. The addition of active high-frequency vibration control, to complement the passive vibration isolation system, is also in development. The ultimate goal of this research is to achieve affordable, reliable, and secure air-to-air laser communications between two separate remotely piloted aircraft. As a proof of concept, the practical implementation of an air-to-ground laser-based video communications payload system flown by a small Unmanned Aerial Vehicle (UAV) will be demonstrated. A numerical tracking algorithm has been written, tested, and used to aim the airborne laser transmitter at a stationary ground-based receiver with known GPS coordinates; however, further refinement of the tracking capabilities depends on an improved gimbal design for precision pointing of the airborne laser transmitter. The current gimbal pointing system is a two-axis, commercial-off-the-shelf component, which is limited in both range and velocity; it is capable of 360° of pan and 78° of tilt at a velocity of 60° per second. The control algorithm used for aiming the gimbal is executed on a PC-104 format embedded computer onboard the payload to accurately track a stationary ground-based receiver. This algorithm autonomously calculates a line-of-sight vector in real time by using the UAV autopilot's Differential Global Positioning System (DGPS), which provides latitude, longitude, and altitude, and Inertial Measurement Unit (IMU), which provides roll, pitch, and yaw data, along with the known Global Positioning System (GPS) location of the ground-based photodiode array receiver.
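
    The line-of-sight calculation from GPS coordinates can be sketched with a flat-earth ENU (east-north-up) approximation, which is reasonable for short UAV-to-ground ranges. This is an illustrative simplification; the payload's actual algorithm and its use of IMU attitude for gimbal commands are not reproduced here:

```python
import math

def line_of_sight(lat1, lon1, alt1, lat2, lon2, alt2):
    """Azimuth (deg, clockwise from north) and elevation (deg) from
    point 1 (aircraft) to point 2 (ground receiver), flat-earth ENU."""
    R = 6371000.0  # mean Earth radius, m
    north = math.radians(lat2 - lat1) * R
    east = math.radians(lon2 - lon1) * R * math.cos(math.radians(lat1))
    up = alt2 - alt1
    azimuth = math.degrees(math.atan2(east, north)) % 360
    elevation = math.degrees(math.atan2(up, math.hypot(north, east)))
    return azimuth, elevation

# Hypothetical fix: UAV at 500 m AGL, receiver ~1 km due east on the ground.
az, el = line_of_sight(47.92, -97.09, 500.0, 47.92, -97.0766, 0.0)
assert abs(az - 90.0) < 1.0  # receiver bears due east
assert el < 0                # and lies below the aircraft
```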

  14. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  15. A High Performance Digital Time Interval Spectrometer: An Embedded, FPGA-Based System With Reduced Dead Time Behaviour

    Directory of Open Access Journals (Sweden)

    Arkani Mohammad

    2015-12-01

    Full Text Available In this work, a fast 32-bit one-million-channel time interval spectrometer is proposed based on field programmable gate arrays (FPGAs). The time resolution is adjustable down to 3.33 ns (= T, the digitization/discretization period), based on the prototype system hardware. The system is capable of collecting billions of time interval data arranged in one million timing channels. This huge number of channels makes it an ideal measuring tool for very short to very long time intervals of nuclear particle detection systems. The data are stored and updated in a built-in SRAM memory during the measuring process, and then transferred to the computer. Two time-to-digital converters (TDCs) working in parallel are implemented in the design to protect the system against loss of the first short time interval events (namely below 10 ns), considering the tests performed on the prototype hardware platform of the system. Additionally, the theory of the multiple count loss effect is investigated analytically. Using the Monte Carlo method, losses of counts up to 100 million events per second (Meps) are calculated and the effective system dead time is estimated by curve fitting of a non-extendable dead time model to the results (τNE = 2.26 ns). An important dead time effect on a measured random process is the distortion of the time spectrum; using the Monte Carlo method this effect is also studied. The uncertainty of the system is analysed experimentally. The standard deviation of the system is estimated as ± 36.6 × T (T = 3.33 ns) for a one-second time interval test signal (300 million T in the time interval).
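
    The non-extendable dead-time behaviour described above can be checked against the classic non-paralyzable model m = n/(1 + nτ) with a short Monte Carlo sketch (a toy illustration, not the authors' code; the simulated interval is shortened for speed):

```python
import random

def simulate_nonextendable(rate, tau, t_total, seed=1):
    """Monte Carlo of a non-extendable (non-paralyzable) dead time:
    an event is recorded only if at least tau has elapsed since the
    last *recorded* event."""
    rng = random.Random(seed)
    t, last_recorded, recorded = 0.0, float("-inf"), 0
    while True:
        t += rng.expovariate(rate)      # Poisson arrival process
        if t > t_total:
            break
        if t - last_recorded >= tau:
            recorded += 1
            last_recorded = t
    return recorded / t_total           # measured (dead-time-affected) rate

true_rate, tau = 1e8, 2.26e-9           # 100 Meps; tau as fitted in the paper
measured = simulate_nonextendable(true_rate, tau, 1e-4)
predicted = true_rate / (1 + true_rate * tau)  # non-paralyzable model
```

    At 100 Meps the model predicts roughly an 18% count loss, which the simulation reproduces to within statistical error.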

  16. Integrated work management system.

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Edward J., Jr.; Henry, Karen Lynne

    2010-06-01

    Sandia National Laboratories develops technologies to: (1) sustain, modernize, and protect our nuclear arsenal; (2) prevent the spread of weapons of mass destruction; (3) provide new capabilities to our armed forces; (4) protect our national infrastructure; (5) ensure the stability of our nation's energy and water supplies; and (6) defend our nation against terrorist threats. We identified the need for a single overarching Integrated Workplace Management System (IWMS) that would enable us to focus on customer missions and improve FMOC processes. Our team selected highly configurable commercial-off-the-shelf (COTS) software with out-of-the-box workflow processes that integrate strategic planning, project management, facility assessments, and space management, and that can interface with existing systems such as Oracle, PeopleSoft, Maximo, Bentley, and FileNet. We selected the Integrated Workplace Management System (IWMS) from Tririga, Inc. The Facility Management System (FMS) benefits are: (1) create a single reliable source for facility data; (2) improve transparency with oversight organizations; (3) streamline FMOC business processes with a single, integrated facility-management tool; (4) give customers simple tools and real-time information; (5) reduce indirect costs; (6) replace approximately 30 FMOC systems and 60 homegrown tools (such as Microsoft Access databases); and (7) integrate with FIMS.

  17. Development of a high-performance, coal-fired power generating system with a pyrolysis gas and char-fired high-temperature furnace

    Energy Technology Data Exchange (ETDEWEB)

    Shenker, J.

    1995-11-01

    A high-performance power system (HIPPS) is being developed. This system is a coal-fired, combined-cycle plant that will have an efficiency of at least 47 percent, based on the higher heating value of the fuel. The original emissions goal of the project was for NOx and SOx each to be below 0.15 lb/MMBtu. In the Phase 2 RFP this emissions goal was reduced to 0.06 lb/MMBtu. The ultimate goal of HIPPS is to have an all-coal-fueled system, but initial versions of the system are allowed up to 35 percent heat input from natural gas. Foster Wheeler Development Corporation is currently leading a team effort with AlliedSignal, Bechtel, Foster Wheeler Energy Corporation, Research-Cottrell, TRW and Westinghouse. Previous work on the project was also done by General Electric. The HIPPS plant will use a High-Temperature Advanced Furnace (HITAF) to achieve combined-cycle operation with coal as the primary fuel. The HITAF is an atmospheric-pressure, pulverized-fuel-fired boiler/air heater, used both to heat air for the gas turbine and to transfer heat to the steam cycle. Its design and functions are very similar to those of conventional PC boilers. Some important differences, however, arise from the requirements of combined-cycle operation.

  18. Systems biology at work

    NARCIS (Netherlands)

    Martins Dos Santos, V.A.P.; Damborsky, J.

    2010-01-01

    In his editorial overview for the 2008 Special Issue on this topic, the late Jaroslav Stark pointedly noted that systems biology is no longer a niche pursuit, but a recognized discipline in its own right “noisily” coming of age [1]. Whilst general underlying principles and basic techniques are now

  19. High performance image acquisition and processing architecture for fast plant system controllers based on FPGA and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Nieto, J., E-mail: jnieto@sec.upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Sanz, D.; Guillén, P.; Esquembri, S.; Arcas, G. de; Ruiz, M. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid, Crta. Valencia Km-7, Madrid 28031 (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain)

    2016-11-15

    Highlights: • To test an image acquisition and processing system for Camera Link devices based on an FPGA, compliant with ITER fast controllers. • To move data acquired from the set NI1483-NIPXIe7966R directly to an NVIDIA GPU using NVIDIA GPUDirect RDMA technology. • To obtain a methodology to include GPU processing in ITER Fast Plant Controllers, using EPICS integration through Nominal Device Support (NDS). - Abstract: The two dominant technologies used in real-time image processing are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), owing to their algorithm parallelization capabilities. However, not much work has been done to standardize how these technologies can be integrated in data acquisition systems where control and supervisory requirements are in place, such as ITER (International Thermonuclear Experimental Reactor). This work proposes an architecture, and a development methodology, for developing image acquisition and processing systems based on FPGAs and GPUs that are compliant with ITER fast controller solutions. A use case based on a Camera Link device connected to an FPGA DAQ device (National Instruments FlexRIO technology) and an NVIDIA Tesla series GPU card has been developed and tested. The proposed architecture has been designed to optimize system performance by minimizing data transfer operations and CPU intervention thanks to the use of NVIDIA GPUDirect RDMA and DMA technologies. This allows moving data directly between the different hardware elements (FPGA DAQ-GPU-CPU), avoiding CPU intervention and therefore the use of intermediate CPU memory buffers. A special effort has been made to provide a development methodology that, maintaining the highest possible abstraction from the low-level implementation details, allows obtaining solutions that conform to CODAC Core System standards by providing EPICS and Nominal Device Support.

  20. CLUPI, a high-performance imaging system on the rover of the 2018 mission to discover biofabrics on Mars

    Science.gov (United States)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; Coradini, A.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.

    2011-10-01

    The scientific objectives of the 2018 ExoMars rover mission are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, textures, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ExoMars rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized 'smart' assembly in titanium that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, Mars Express and SMART-1 missions. In a typical field scenario, the geologist will use his/her eyes to make an overview of an area and the outcrops within it to determine sites of particular interest for more detailed study. In the ExoMars scenario, the PanCam wide angle cameras (WACs) will be used for this task. After having made a preliminary general evaluation, the geologist will approach a particular outcrop for closer observation of structures at the decimetre to subdecimetre scale (ExoMars' High Resolution Camera) before finally getting very close up to the surface with a hand lens (ExoMars' CLUPI), and/or taking a hand specimen, for detailed observation of textures and minerals. Using structural, textural and preliminary compositional analysis, the geologist identifies the materials and makes a decision as to whether they are of sufficient interest to be subsampled for laboratory analysis (using the ExoMars drill and laboratory instruments). Given the time and energy expense necessary for drilling and analysing samples in the rover laboratory, preliminary screening of the materials to choose those most likely to be of interest is

  1. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    Science.gov (United States)

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and is widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and then simulation is conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS.
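
    The benefit of frame aggregation can be sketched with a toy efficiency calculation (illustrative overhead sizes, not the exact IEEE 802.15.4 header or A-MPDU delimiter lengths):

```python
def efficiency(payload, n_frames, per_frame_overhead, aggregated=False):
    """Payload efficiency = useful bytes / total bytes on air.
    Aggregation pays the frame overhead once (plus a small assumed
    per-subframe delimiter) instead of once per packet."""
    if aggregated:
        delimiter = 2                                  # assumed subframe delimiter
        total = per_frame_overhead + n_frames * (payload + delimiter)
    else:
        total = n_frames * (per_frame_overhead + payload)
    return n_frames * payload / total

# Ten 20-byte sensor readings with an assumed 15-byte per-frame overhead:
plain = efficiency(payload=20, n_frames=10, per_frame_overhead=15)
aggr  = efficiency(payload=20, n_frames=10, per_frame_overhead=15, aggregated=True)
```

    With these assumed numbers, aggregation lifts the efficiency from about 57% to about 85%, which is the mechanism behind the throughput gain the paper targets.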

  2. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  3. Redesigning Community Colleges for Completion: Lessons from Research on High-Performance Organizations. CCRC Working Paper No. 24. Assessment of Evidence Series

    Science.gov (United States)

    Jenkins, Davis

    2011-01-01

    This paper examines the research from within and outside of higher education on the practices of high-performance organizations. It assesses the extent to which community colleges generally are following these practices and evaluates current reform efforts in light of models of organizational effectiveness that emerge from the research literature.…

  4. The choice of the principle of functioning of the system of magnetic levitation for the device of high-performance testing of powder permanent magnets

    Science.gov (United States)

    Shaykhutdinov, D. V.; Gorbatenko, N. I.; Narakidze, N. D.; Vlasov, A. S.; Stetsenko, I. A.

    2017-02-01

    The present article focuses on quality control problems for permanent magnets. High-performance direct-flow systems for mechanical engineering production processes are considered. The main shortcoming of the existing high-performance direct-flow systems is the final phase of movement of a tested product, when the motion is oscillatory and abrupt braking may be harmful for highly fragile samples. A special system for permanent magnet testing is offered, which realizes magnetic levitation of the test sample. Active correction of the electric current in the magnetizing coils is offered as the basic operating principle of this system. The system provides the required parameters of movement of the test sample by using an opposing connection of the magnetizing coils. This new technique provides an aperiodic nature of the movement and limited acceleration while preserving high accuracy and the required timeframe for settling into the measuring position.
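
    The aperiodic (non-oscillatory) approach to the measuring position that active current correction aims for can be illustrated with a toy overdamped control loop (normalized units; the gains and the unit-mass model are assumptions, not the authors' design):

```python
def settle(kp, kd, steps=2000, dt=1e-3):
    """Overdamped PD correction of the coil force drives the sample
    to the measuring position (x = 0) without overshoot, i.e. with
    aperiodic motion. Unit mass, normalized units, assumed gains."""
    x, v = 1.0, 0.0             # initial offset and velocity of the test sample
    overshoot = False
    for _ in range(steps):
        a = -kp * x - kd * v    # corrected coil force (semi-implicit Euler)
        v += a * dt
        x += v * dt
        if x < -1e-6:
            overshoot = True    # sample crossed the target: oscillatory motion
    return x, overshoot

# kd > 2*sqrt(kp) makes the closed loop overdamped, hence aperiodic.
x_end, overshot = settle(kp=100.0, kd=25.0)
```

    With a critically damped or overdamped gain choice the sample creeps into position monotonically, which is exactly the behaviour needed for fragile samples.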

  5. The Methods of Implementation of the Three-dimensional Pseudorandom Number Generator DOZEN for Heterogeneous CPU/GPU/FPGA High-performance Systems

    Directory of Open Access Journals (Sweden)

    Nikolay Petrovich Vasilyev

    2015-03-01

    Full Text Available The paper describes the scope of information security protocols based on PRNGs in industrial systems. A method for implementing the three-dimensional pseudorandom number generator DOZEN in hybrid systems is provided. The description and results of studies of a parallel CUDA version of the algorithm for use in hybrid data centers, and of a high-performance FPGA version for use in hardware solutions at controlled facilities of SCADA systems, are given.

  6. High performance flexible heat pipes

    Science.gov (United States)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  7. Final Assessment of Preindustrial Solid-State Route for High-Performance Mg-System Alloys Production: Concluding the EU Green Metallurgy Project

    Science.gov (United States)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Giger, Franz; Kim, Shae K.

    2013-10-01

    The Green Metallurgy Project, a LIFE+ project co-financed by the European Union Commission, has now been completed. Its purpose was to establish and assess a preindustrial process capable of producing nanostructure-based high-performance Mg-Zn(Y) magnesium alloys and fully recycled eco-magnesium alloys. In this work, the Consortium presents the final outcome and verification of the completed prototype construction. To compare upstream cradle-to-grave footprints when ternary nanostructured Mg-Y-Zn alloys or recycled eco-magnesium chips are produced during the process cycle using the same equipment, a life cycle analysis was completed following the ISO 14040 methodology. During tests to fine-tune the prototype machinery and compare the quality of semifinished bars produced using the scaled-up system, the Buhler team produced interesting and significant results. Their tests showed the ternary Mg-Y-Zn magnesium alloys to have a higher specific strength than the 6000-series wrought aluminum alloys usually employed in automotive components.

  8. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and, as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy efficient to operate and valuable for building communities. Herein discussed are two successful examples of low energy prefabricated housing projects built in Copenhagen, Denmark, which embraced both the constraints and possibilities offered by prefabrication.

  9. High Performance Computing at NASA

    Science.gov (United States)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  10. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...

  11. The High Performance Computing Initiative

    Science.gov (United States)

    Holcomb, Lee B.; Smith, Paul H.; Macdonald, Michael J.

    1991-01-01

    The paper discusses NASA High Performance Computing Initiative (HPCI), an essential component of the Federal High Performance Computing Program. The HPCI program is designed to provide a thousandfold increase in computing performance, and apply the technologies to NASA 'Grand Challenges'. The Grand Challenges chosen include integrated multidisciplinary simulations and design optimizations of aerospace vehicles throughout the mission profiles; the multidisciplinary modeling and data analysis of the earth and space science physical phenomena; and the spaceborne control of automated systems, handling, and analysis of sensor data and real-time response to sensor stimuli.

  12. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to examine the advantages and drawbacks of using each particular concrete type. Two concrete recipes are presented, one for the concrete used in rigid road pavements and one for self-compacting concrete.

  13. A Parallel Neuromorphic Text Recognition System and Its Implementation on a Heterogeneous High-Performance Computing Cluster

    Science.gov (United States)

    2013-01-01

    …segmentation, feature extraction, and character classification [22]. Some typically used feature extraction techniques for OCR include template matching [23], zoning [24], moments extraction [25], [26], contour information, etc. A detailed survey of feature extraction… The mentioned work addresses the performance of OCR. Our review shows that existing OCR techniques usually require complicated feature extraction.

  14. High-Performance Control of Paralleled Three-Phase Inverters for Residential Microgrid Architectures Based on Online Uninterruptable Power Systems

    DEFF Research Database (Denmark)

    Zhang, Chi; Guerrero, Josep M.; Vasquez, Juan Carlos

    2015-01-01

    In this paper, a control strategy for the parallel operation of three-phase inverters forming an online uninterruptible power system (UPS) is presented. The UPS system consists of a cluster of paralleled inverters with LC filters directly connected to an AC critical bus and an AC/DC forming a DC ...

  15. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    Science.gov (United States)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.

  16. Implementation of High Performance Microstepping Driver Using FPGA with the Aim of Realizing Accurate Control on a Linear Motion System

    Directory of Open Access Journals (Sweden)

    Farid Alidoust Aghdam

    2013-01-01

    Full Text Available This paper presents an FPGA-based microstepping driver which drives a linear motion system in a smooth and precise way. The proposed driver is built on a Spartan-3 FPGA (XC3S400) core development board from Xilinx. The driver is implemented on the FPGA using the Verilog hardware description language in the Xilinx ISE environment, and its control behavior can be adapted just by altering the Verilog scripts. In addition, a linear motion system was developed (with 4 mm of movement per motor revolution) and coupled to the stepper motor. The performance of the driver was tested by measuring the distance traveled on the linear motion system, and the experimental results were verified using a hardware-in-the-loop MATLAB and Xilinx co-simulation method. The driver accomplishes firm, accurate and responsive control.
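
    The sine/cosine coil-current pairs at the heart of a microstepping driver can be sketched as a lookup table (a generic illustration in Python; the driver in the paper is implemented in Verilog on the FPGA):

```python
import math

def microstep_table(microsteps_per_fullstep):
    """Normalized (coil_A, coil_B) current pairs over one electrical
    quarter-cycle. Stepping through successive entries moves the rotor
    in equal fractions of one full step, giving smooth motion."""
    table = []
    for i in range(microsteps_per_fullstep + 1):
        theta = (math.pi / 2) * i / microsteps_per_fullstep
        table.append((math.cos(theta), math.sin(theta)))
    return table

tbl = microstep_table(4)   # quarter-stepping: 4 microsteps per full step
```

    With the 4 mm-per-revolution stage described above, a 200-full-step motor driven at, say, 16 microsteps per step would move 4/(200 x 16) = 1.25 µm per microstep, which is where the precision of such a system comes from.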

  17. High Performance Liquid Chromatography

    Science.gov (United States)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  18. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  19. High Performers in Complex Spatial Systems: A Self-Organizing Mapping Approach with Reference to The Netherlands

    NARCIS (Netherlands)

    Kourtit, K.; Arribas-Bel, D.; Nijkamp, P.

    2012-01-01

    This paper addresses the performance of creative firms from the perspective of complex spatial systems. Based on an extensive high-dimensional database on both the attributes of individual creative firms in the Netherlands and a series of detailed regional facilitating and driving factors related,

  20. Age-differentiated work systems

    CERN Document Server

    Frieling, Ekkehart; Wegge, Jürgen

    2013-01-01

    The disproportionate aging of the population of working age in many nations around the world is a unique occurrence in the history of humankind. In the light of demographic change, it is becoming increasingly important to develop and use the potential of older employees. This edited volume Age-differentiated Work Systems provides a final report on a six-year priority program funded by the German Research Foundation (DFG) and presents selected research findings of 17 interdisciplinary project teams. The idea is that it will serve both as a reference book and overview of the current state of research in ergonomics, occupational psychology and related disciplines. It provides new models, methods, and procedures for analyzing and designing age-differentiated work systems with the aim of supporting subject matter experts from different areas in their decisions on labor and employment policies. Therefore over 40 laboratory experiments involving 2,000 participants and 50 field studies involving over 25,000 employees...

  1. Development of a high performance surface slope measuring system for two-dimensional mapping of x-ray optics

    Science.gov (United States)

    Lacey, Ian; Adam, Jérôme; Centers, Gary P.; Gevorkyan, Gevork S.; Nikitin, Sergey M.; Smith, Brian V.; Yashchuk, Valeriy V.

    2017-09-01

    The research and development work on the Advanced Light Source (ALS) upgrade to a diffraction limited storage ring light source, ALS-U, has brought to focus the need for near-perfect x-ray optics, capable of delivering light to experiments without significant degradation of brightness and coherence. The desired surface quality is characterized with residual (after subtraction of an ideal shape) surface slope and height errors of original scanning mode for 2D mapping. We demonstrate the efficiency of the developed 2D mapping via comparison with 1D slope measurements performed with the same hyperbolic test mirror using the ALS developmental long trace profiler. The details of the OSMS design and the developed measuring techniques are also provided.

  2. Communication, Work Systems and HRD

    Science.gov (United States)

    Pace, R. Wayne

    2013-01-01

    Purpose: The purpose of this article is to show the foundational place that communication theory and its practice occupies in functioning work systems. Design/methodology/approach: This paper defines the word communication in terms of the creation and interpretation of displays, describes what it means to have a theoretical foundation for a…

  3. Compensation of Wave-Induced Motion and Force Phenomena for Ship-Based High Performance Robotic and Human Amplifying Systems

    Energy Technology Data Exchange (ETDEWEB)

    Love, LJL

    2003-09-24

    The decrease in manpower and increase in material handling needs on many Naval vessels provide the motivation to explore the modeling and control of Naval robotic and robotic assistive devices. This report addresses the design, modeling, control and analysis of position- and force-controlled robotic systems operating on the deck of a moving ship. First, we provide background information that quantifies the motion of the ship, both in terms of frequency and amplitude. We then formulate the motion of the ship in terms of homogeneous transforms. This transformation provides a link between the motion of the ship and the base of a manipulator. We model the kinematics of a manipulator as a serial extension of the ship motion. We then show how to use these transforms to formulate the kinetic and potential energy of a general, multi-degree-of-freedom manipulator moving on a ship. As a demonstration, we consider two examples: a one-degree-of-freedom system experiencing three sea states operating in a plane, to verify the methodology, and a three-degree-of-freedom system experiencing all six degrees of ship motion, to illustrate the ease of computation and complexity of the solution. The first series of simulations explores the impact wave motion has on the tracking performance of a position-controlled robot. We provide a preliminary comparison between conventional linear control and Repetitive Learning Control (RLC) and show how fixed-time-delay RLC breaks down due to the varying nature of the wave disturbance frequency. Next, we explore the impact wave motion disturbances have on Human Amplification Technology (HAT). We begin with a description of the traditional HAT control methodology. Simulations show that the motion of the base of the robot, due to ship motion, generates disturbance forces reflected to the operator that significantly degrade the positioning accuracy and resolution at higher sea states. As with position-controlled manipulators, augmenting the control with a Repetitive
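
    The chaining of ship motion and manipulator base through homogeneous transforms can be sketched as follows (a toy yaw-plus-heave example with assumed offsets, not the report's full six-degree-of-freedom model):

```python
import math

def rot_z(a):
    """Yaw rotation as a 4x4 homogeneous transform."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans(x, y, z):
    """Pure translation as a 4x4 homogeneous transform."""
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# World -> ship deck (heave + yaw drawn from the sea state), then
# deck -> manipulator base (fixed mounting offset). The manipulator's
# link frames would be chained on in exactly the same way.
ship = matmul(trans(0, 0, 1.2), rot_z(math.radians(10)))  # heave 1.2 m, yaw 10 deg
mount = trans(2.0, 0, 0.5)                                # base 2 m forward, 0.5 m up
base_in_world = matmul(ship, mount)
```

    Because the manipulator kinematics are just further transforms multiplied onto `base_in_world`, any ship motion enters the robot's equations of motion through this single composed matrix.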

  4. Implementation of Molecular Systems for Identification of Genetic Polymorphism in Winter Wheat to Obtain High-Performance Specialized Varieties

    Directory of Open Access Journals (Sweden)

    Morgun, B.V.

    2016-03-01

    Full Text Available Molecular genetic polymorphism detection systems were developed to screen for the presence of alleles in 100 winter wheat varieties. Polymerase chain reactions were deployed to identify the relevant genes. The prevalence of alleles for low and medium polyphenol oxidase enzyme activity was determined and validated. Wheat varieties carrying the rye 1AL.1RS and 1BL.1RS translocations were characterized, as were those containing the recessive allele of the Tamyb10 gene and those with the Stb4 gene for resistance to Septoria linked to the polymorphic locus Xgwm111. A waxy wheat variety was discovered, along with other varieties carrying the atypical functional Wx-B1e allele. Characteristics of 100 elite and perspective wheat varieties were compiled for the presence of alleles of genes determining grain quality (PPO, Tamyb10-A1, Wx) and resistance to biotic and abiotic stress (rye translocation material, Tamyb10-A1, Stb4).

  5. Ultra-high performance mirror systems for the imaging and coherence beamline I13 at the Diamond Light Source

    Science.gov (United States)

    Wagner, U. H.; Alcock, S.; Ludbrook, G.; Wiatryzk, J.; Rau, C.

    2012-05-01

    I13L is a 250 m long hard x-ray beamline (6 keV to 35 keV) currently under construction at the Diamond Light Source. The beamline comprises two independent experimental endstations: one for imaging in direct space using x-ray microscopy and one for imaging in reciprocal space using coherent diffraction based imaging techniques. To minimise the impact of thermal fluctuations and vibrations on beamline performance, we are developing a new generation of ultra-stable beamline instrumentation with highly repeatable adjustment mechanisms using low thermal expansion materials like granite and large piezo-driven flexure stages. To minimise beam distortion, we use very high quality optical components such as large ion-beam-polished mirrors. In this paper we present the first metrology results on a newly designed mirror system following this design philosophy.

  6. TWRS Systems Engineering Working Plan

    Energy Technology Data Exchange (ETDEWEB)

    Eiholzer, C.R.

    1994-09-16

    The purpose of this Systems Engineering (SE) Working Plan (SEWP) is to describe how the Westinghouse Hanford Company (WHC) Tank Waste Remediation System (TWRS) will implement the SE policy and guidance provided in the Tank Waste Remediation System (TWRS) Systems Engineering Management Plan (SEMP). Sections 2.0 through 4.0 cover how the SE process and management will be performed to develop a technical baseline within TWRS. Section 5.0 covers the plans and schedules to implement the SE process and management within TWRS. Detailed information contained in the TWRS Program SEMP is not repeated in this document. This SEWP and the SE discipline defined within apply to the TWRS Program and new and ongoing TWRS projects or activities, including new facilities and safety. The SE process will be applied to the existing Tank Farm operations where the Richland TWRS Program Office management determines the process appropriate and where value will be added to existing Tank Farm systems and operations.

  7. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent limitations of these materials make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  8. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    KAUST Repository

    Abdelfattah, Ahmad

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate the tomographic problem in detail. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multiple GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard multicore numerical libraries such as Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be, to our knowledge, the largest-scale tomographic AO matrix solver computed to date, and it opens new research directions for extreme scale AO simulations. © 2014 Springer International Publishing Switzerland.
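    The eigendecomposition route to the pseudo-inverse of a dense symmetric matrix mentioned above can be sketched in NumPy. This is a small CPU illustration of the mathematical idea only, not the paper's multi-GPU MAGMA implementation; the test matrix and tolerance are hypothetical.

    ```python
    import numpy as np

    def sym_pinv(A, rtol=1e-10):
        """Pseudo-inverse of a symmetric matrix via eigendecomposition:
        A = V diag(w) V^T  =>  A^+ = V diag(1/w_i, for |w_i| above tol) V^T."""
        w, V = np.linalg.eigh(A)          # symmetric eigendecomposition
        tol = rtol * np.abs(w).max()
        w_inv = np.zeros_like(w)
        mask = np.abs(w) > tol            # drop near-zero (noise) modes
        w_inv[mask] = 1.0 / w[mask]
        return (V * w_inv) @ V.T          # V diag(w_inv) V^T

    rng = np.random.default_rng(0)
    B = rng.standard_normal((6, 4))
    A = B @ B.T                           # symmetric and rank-deficient (rank <= 4)
    A_pinv = sym_pinv(A)
    print(np.allclose(A_pinv, np.linalg.pinv(A)))  # agrees with the SVD-based pinv
    ```

    For symmetric matrices the eigendecomposition coincides with the SVD up to signs, which is why this route is attractive: `eigh`-style divide-and-conquer solvers parallelize well across many cores and GPUs.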

  9. Final Project Report: Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase [New Jersey Inst. of Technology, Newark, NJ (United States)

    2017-09-06

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to achieve source-to-sink high-performance flows, and (2) develop tools that provide these capabilities through simple interfaces to users and applications. In terms of the former, we propose to develop (1) optimization methods that align and transition multiple storage flows to multiple network flows on multicore, multibus hosts; and (2) edge and long-haul network path realization and maintenance using advanced provisioning methods including OSCARS and OpenFlow. We also propose synthesis methods that combine these individual technologies to compose high-performance flows using a collection of constituent storage-network flows, and realize them across the storage and local network connections as well as long-haul connections. We propose to develop automated user tools that profile the hosts, storage systems, and network connections; compose the source-to-sink complex flows; and set up and maintain the needed network connections.

  10. Responsive design high performance

    CERN Document Server

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  11. A High Performance Remote Sensing Product Generation System Based on a Service Oriented Architecture for the Next Generation of Geostationary Operational Environmental Satellites

    Directory of Open Access Journals (Sweden)

    Satya Kalluri

    2015-08-01

    Full Text Available The Geostationary Operational Environmental Satellite (GOES) series R, S, T, U (GOES-R) will collect remote sensing data volumes several orders of magnitude greater than legacy missions, 24 × 7, over its 20-year operational lifecycle. A suite of 34 Earth and space weather products must be produced at low latency for timely delivery to forecasters. A ground system (GS) has been developed to meet these challenging requirements, using High Performance Computing (HPC) within a Service Oriented Architecture (SOA). This approach provides a robust, flexible architecture to support the operational GS as it generates remote sensing products by ingesting and combining data from multiple sources. Test results show that the system meets the key latency and availability requirements for all products.

  12. High performance SPWM frequency converter three-phase cage induction motor's synchronous modulation variable frequency speed regulation system

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xiaomei; Chen Yaozhong [Taiyuan University of Technology, College of Information Engineering, Shanxi (China)

    2000-08-01

    The paper discusses the synchronous modulation model of an SPWM frequency converter at carrier ratio N = 27, and presents the interval values of a small-period pulse at modulation depth M = 0.1–0.7 and the line-voltage u_AB(t) expression for double-pole modulation at M = 0.1. Based on the parameters of a practical three-phase cage induction motor, the fundamental frequency f_1 and mechanical characteristic parameters are calculated. The system's control section is simple, its mechanical characteristic is hard, and it runs stably at low speed. It can therefore constitute a high performance variable frequency speed regulation system. (orig.)

  13. NGINX high performance

    CERN Document Server

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  14. High Performance Nanolauncher Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed Low Cost Nanolauncher (LCN) is an upper stage using a new, inexpensive propulsion system. The Phase I program will combine several technologies with a...

  15. Designing High-Performance Schools: A Practical Guide to Organizational Reengineering.

    Science.gov (United States)

    Duffy, Francis M.

    This book offers a step-by-step, systematic process for designing high-performance learning organizations. The process helps administrators develop proposals for redesigning school districts that are tailored to the district's environment, work system, and social system. Chapter 1 describes the characteristics of high-performing organizations, and…

  16. High-performing physician executives.

    Science.gov (United States)

    Brown, M; Larson, S R; McCool, B P

    1988-01-01

    Physician leadership extends beyond traditional clinical disciplines to hospital administration, group practice management, health policy making, management of managed care programs, and many business positions. What kind of person makes a good physician executive? What stands out as the most important motivations, attributes, and interests of high-performing physician executives? How does this compare with non-physician health care executives? Such questions have long been high on the agenda of executives in other industries. This article builds on existing formal assessments of leadership attributes of high-performing business, government, and educational executives and on closer examination of health care executives. Previous studies looked at the need for innovative, entrepreneurial, energetic, community-oriented leaders for positions throughout health care. Traits that distinguish excellence and leadership were described by Brown and McCool.* That study characterized successful leaders in terms of physical strengths (high energy, good health, and propensity for hard work), mental strengths (creativity, intuition, and innovation), and organizational strengths (mission orientation, vision, and entrepreneurial spirit). In this investigation, a subset of health care executives, including physician executives, was examined more closely. It was initially assumed that successful physician executives exhibit many of the same positive traits as do nonphysician executives. This assumption was tested with physician leaders in a range of administrative and managerial positions. We also set out to identify key differences between physician and nonphysician executives. Even with our limited exploration, it seems to us that physician executives probably do differ from nonphysician executives.

  17. Determination of sunset yellow and tartrazine in food samples by combining ionic liquid-based aqueous two-phase system with high performance liquid chromatography.

    Science.gov (United States)

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We proposed a simple and effective method, by coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine and sunset yellow in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01-50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples and leads to consistent results as obtained from the Chinese national standard method.
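    The linear calibration and detection-limit figures quoted above follow the standard least-squares treatment of HPLC peak areas, sketched below. The concentration and peak-area values are hypothetical placeholders, not the paper's measurements, and the 3.3·SD/slope rule is one common LOD convention rather than necessarily the one the authors used.

    ```python
    import numpy as np

    # Hypothetical calibration: standard concentrations (ug/mL) vs. HPLC peak areas.
    conc = np.array([0.01, 0.1, 1.0, 5.0, 10.0, 25.0, 50.0])
    area = np.array([0.8, 7.9, 80.5, 401.0, 799.0, 2003.0, 4010.0])

    slope, intercept = np.polyfit(conc, area, 1)      # linear calibration fit
    pred = slope * conc + intercept
    resid_sd = np.std(area - pred, ddof=2)            # residual standard deviation

    # A common LOD estimate: 3.3 * (residual SD) / slope.
    lod = 3.3 * resid_sd / slope

    # Quantify an unknown sample by inverting the calibration line.
    unknown_area = 150.0
    unknown_conc = (unknown_area - intercept) / slope
    print(f"slope={slope:.2f}, LOD={lod:.3f} ug/mL, unknown={unknown_conc:.2f} ug/mL")
    ```

    With real data, the linearity of the plot over 0.01-50.0 μg/mL would be checked via the correlation coefficient before inverting the line for unknowns.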

  18. Determination of Sunset Yellow and Tartrazine in Food Samples by Combining Ionic Liquid-Based Aqueous Two-Phase System with High Performance Liquid Chromatography

    Directory of Open Access Journals (Sweden)

    Ou Sha

    2014-01-01

    Full Text Available We proposed a simple and effective method, by coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine and sunset yellow in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01–50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples and leads to consistent results as obtained from the Chinese national standard method.

  19. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology.

    Science.gov (United States)

    Foran, David J; Yang, Lin; Chen, Wenjin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples.

  20. High performance polymer concrete

    Directory of Open Access Journals (Sweden)

    Frías, M.

    2007-06-01

    Full Text Available This paper studies the performance of concrete whose chief components are natural aggregate and an organic binder (a thermosetting polyester resin), denominated polymer concrete or PC. The material was examined macro- and microscopically and its basic physical and mechanical properties were determined using mercury porosimetry, scanning electron microscopy (SEM-EDAX), X-ray diffraction (XRD) and strength tests (modulus of elasticity, stress-strain curves and ultimate strengths). According to the results of these experimental studies, the PC exhibited a low-porosity (4.8%), closed pore system and a concomitantly continuous internal microstructure. This would at least partially explain its mechanical out-performance of traditional concrete, with average compressive and flexural strength values of 100 MPa and over 20 MPa, respectively. In the absence of standard criteria, the bending test was found to be a useful supplement to compressive strength tests for establishing PC strength classes.

  1. High Performance Methane Thrust Chamber (HPMTC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC proposes to develop a High-Performance Methane Thrust Chamber (HPMTC) to meet the demands of advanced chemical propulsion systems for deep-space mission...

  2. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    Science.gov (United States)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called `Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  3. High Performance Flexible Thermal Link

    Science.gov (United States)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high performance and flexible carbon fibre thermal link. Project goal was to design a space qualified thermal link combining low mass, flexibility and high thermal conductivity with new approaches regarding selected materials and processes. The idea was to combine the advantages of existing metallic links regarding flexibility and the thermal performance of high conductive carbon pitch fibres. Special focus is laid on the thermal performance improvement of matrix systems by means of nano-scaled carbon materials in order to improve the thermal performance also perpendicular to the direction of the unidirectional fibres. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix and performing the implementation into an interface component using unconventional process steps like thermal bonding of fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  4. High-performance intraoperative cone-beam CT on a mobile C-arm: an integrated system for guidance of head and neck surgery

    Science.gov (United States)

    Siewerdsen, J. H.; Daly, M. J.; Chan, H.; Nithiananthan, S.; Hamming, N.; Brock, K. K.; Irish, J. C.

    2009-02-01

    A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video. Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such systems in complex OR environments.

  5. High-performance sports medicine

    National Research Council Canada - National Science Library

    Speed, Cathy

    2013-01-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition...

  6. Effectiveness of work zone intelligent transportation systems.

    Science.gov (United States)

    2013-12-01

    In the last decade, Intelligent Transportation Systems (ITS) have increasingly been deployed in work zones by state departments of transportation. Also known as smart work zone systems, they improve traffic operations and safety by providing real-time...

  7. High-performance 1024x1024 MWIR/LWIR Dual-band InAs/GaSb Type-II Superlattice-based Camera System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High performance LWIR detectors are highly needed. In order to image from long distance, it is important that imagers have high sensitivity, high resolution, and...

  8. Modified vaporization-resection for photoselective vaporization of the prostate using a GreenLight high-performance system 120-W Laser: the Seoul technique.

    Science.gov (United States)

    Son, Hwancheol; Ro, Yun Kwan; Min, Sun Ho; Choo, Min Soo; Kim, Jung Kwon; Lee, Chang Ju

    2011-02-01

    The most popular technique of photoselective vaporization of the prostate (PVP) for benign prostatic hyperplasia (BPH) involves vaporization only. We developed a modified vaporization-resection technique that consists of vaporizing a prostate along outlined margins and retrieving the wedge-shaped prostate tissue. We report the operative procedure and clinical outcomes of our technique with the GreenLight high performance system (HPS). A total of 104 patients with a prostate volume greater than 40 mL who underwent PVP were included in this retrospective study. Forty patients were treated with the vaporization-only technique (group non-S) and 64 patients with the Seoul technique (group S). The clinical outcomes were assessed at 1, 3, 6, and 12 months postoperatively using the International Prostate Symptom Score (IPSS), quality of life (QoL) score, maximum flow rate (Q(max.)), and postvoid residual urine volume (PVR). The Q(max.), PVR, IPSS, and QoL scores improved significantly from 1 to 12 months after the PVP compared with the baseline in both groups (P < 0.05). The Seoul technique for PVP showed good short-term efficacy and safety for the treatment of BPH. With this technique, we can save operative time, lasing time, and energy, and obtain prostatic tissue for pathologic evaluation. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. 24 CFR 902.71 - Incentives for high performers.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Incentives for high performers. 902... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Incentives and Remedies § 902.71 Incentives for high performers. (a) Incentives for high performer PHAs. A PHA that is designated a high performer will be...

  10. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relative low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  11. Literacy: Exploring working memory systems.

    Science.gov (United States)

    Silva, Catarina; Faísca, Luís; Ingvar, Martin; Petersson, Karl Magnus; Reis, Alexandra

    2012-01-01

    Previous research showed an important association between reading and writing skills (literacy) and the phonological loop. However, the effects of literacy on other working memory components remain unclear. In this study, we investigated performance of illiterate subjects and their matched literate controls on verbal and nonverbal working memory tasks. Results revealed that the phonological loop is significantly influenced by literacy, while the visuospatial sketchpad appears to be less affected or not at all. Results also suggest that the central executive might be influenced by literacy, possibly as an expression of cognitive reserve.

  12. THE ANALYSIS OF WORKING SYSTEM ON DOFFER

    Directory of Open Access Journals (Sweden)

    Resul FETTAHOV

    2003-01-01

    Full Text Available In this paper, the working system of the doffer used on OE spinning and packaging machines is analyzed. In addition, a mathematical model relating the working parameters of the doffer to the technological parameters of the machines was obtained. A "useful time coefficient" for evaluating doffer performance is determined. Two working systems, "waiting-working" and "working by signals", are suggested; they save electrical energy while improving the performance of the doffer.

  13. Risk Factors for Reoperation After Photoselective Vaporization of the Prostate Using a 120 W GreenLight High Performance System Laser for the Treatment of Benign Prostatic Hyperplasia.

    Science.gov (United States)

    Kim, Kang Sup; Choi, Jin Bong; Bae, Woong Jin; Kim, Su Jin; Cho, Hyuk Jin; Hong, Sung-Hoo; Lee, Ji Youl; Kim, Sae Woong; Han, Dong-Seok

    2016-03-01

    We investigated risk factors in a large cohort of patients who underwent reoperation after photoselective vaporization of the prostate using the 120 W GreenLight High Performance System laser for treatment of benign prostatic hyperplasia. Complications such as recurrent/residual adenoma, urethral stricture, or bladder neck contracture might occur after photoselective vaporization of the prostate for treatment of benign prostatic hyperplasia. We reviewed the data of 1040 patients who underwent photoselective vaporization of the prostate between April 2009 and December 2014, and analyzed the clinical data of 630 patients who completed >12 months of follow-up. Patients were evaluated for perioperative and late complications. Reoperation was defined as the necessity for any surgical intervention to resolve recurrent/residual adenoma, urethral stricture, or bladder neck contracture. Patients with recurrent/residual adenoma, urethral stricture, or bladder neck contracture were compared with those without complications to identify the risk factors for reoperation. Logistic regression analysis was conducted to estimate the risk of reoperation. Reoperation was performed in 25 of 630 patients (3.9%) at a mean follow-up of 35.5 months: 12 had recurrent/residual adenoma, 5 had urethral stricture, and 8 had bladder neck contracture. Multivariate analysis revealed that a higher prostate-specific antigen (PSA) (OR, 1.129; p = 0.023) and longer lasing time (OR, 0.883; p = 0.024) were predictors of recurrent/residual adenoma. Urethral stricture was associated with a history of transurethral surgery (OR, 1.321; p = 0.042). Preoperative small prostate volume was a risk factor for bladder neck contracture (OR, 0.901; p = 0.011). In our study, the significant factors related to recurrent/residual adenoma were a high preoperative PSA and longer lasing time. A history of transurethral surgery was significantly associated with urethral stricture, whereas a small preoperative prostate volume was associated with bladder neck contracture.
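
    The multivariate analysis above reports odds ratios from a logistic model (OR = e^β). As an illustrative sketch only, fit on synthetic data with a single hypothetical predictor standing in for preoperative PSA (none of the study's data or exact covariates are used), logistic regression can be fit by Newton-Raphson and the odds ratio read off the slope:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson (IRLS).
    X must include an intercept column of ones."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))    # predicted probabilities
        W = p * (1.0 - p)                      # IRLS weights
        H = X.T @ (X * W[:, None])             # Hessian: X^T W X
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Synthetic cohort: a hypothetical continuous predictor (think "PSA-like"),
# with a known true slope of 0.3 (OR ~= 1.35 per unit increase).
rng = np.random.default_rng(42)
n = 5000
predictor = rng.normal(5.0, 2.0, n)
X = np.column_stack([np.ones(n), predictor])
true_beta = np.array([-3.0, 0.3])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p_true).astype(float)

beta = fit_logistic(X, y)
odds_ratio = float(np.exp(beta[1]))  # OR per unit increase in the predictor
```

    On this synthetic cohort the recovered odds ratio lands close to the true exp(0.3); reported p-values would additionally require standard errors from the inverse Hessian, omitted here for brevity.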

  14. Magnetic ionic liquid aqueous two-phase system coupled with high performance liquid chromatography: A rapid approach for determination of chloramphenicol in water environment.

    Science.gov (United States)

    Yao, Tian; Yao, Shun

    2017-01-20

    A novel organic magnetic ionic liquid based on a guanidinium cation was synthesized and characterized. A new method combining a magnetic ionic liquid aqueous two-phase system (MILATPs) with high-performance liquid chromatography (HPLC) was established, for the first time, to preconcentrate and determine trace amounts of chloramphenicol (CAP) in environmental water. In the absence of volatile organic solvents, MILATPs not only enables rapid extraction, but also responds to an external magnetic field, which can be applied to assist phase separation. The phase behavior of MILATPs was investigated and the phase equilibrium data were correlated by the Merchuk equation. Various factors influencing CAP recovery were systematically investigated and optimized. Under the optimal conditions, the preconcentration factor was 147.2, with precision values (RSD) of 2.42% and 4.45% for intra-day (n=6) and inter-day (n=6) measurements, respectively. The limit of detection (LOD) and limit of quantitation (LOQ) were 0.14 ng/mL and 0.42 ng/mL, respectively, and a linear range of 12.25-2200 ng/mL was obtained. Finally, the validated method was successfully applied to the analysis of CAP in several environmental waters, with recoveries for spiked samples in the acceptable range of 94.6%-99.72%. MILATPs shows great potential for the extraction, separation and pretreatment of various biochemical samples.
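
    The figures of merit quoted above (RSD for precision, LOD/LOQ, spike recovery) follow standard definitions. A minimal sketch, assuming the common ICH-style formulas (LOD = 3.3·σ_blank/slope, LOQ = 10·σ_blank/slope) rather than the authors' exact procedure; all numbers below are invented for illustration:

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation (precision), in percent."""
    return 100.0 * stdev(values) / mean(values)

def lod_loq(sd_blank, slope):
    """ICH-style detection/quantitation limits from blank noise and
    calibration slope: LOD = 3.3*sd/S, LOQ = 10*sd/S."""
    return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope

def spike_recovery(found, native, spiked):
    """Percent recovery of a spiked amount from a real sample."""
    return 100.0 * (found - native) / spiked

# Hypothetical replicate measurements and calibration values:
intra_day = [98.1, 99.0, 97.5, 98.8, 99.4, 98.2]   # six same-day replicates
precision = rsd_percent(intra_day)                  # intra-day RSD, %
lod, loq = lod_loq(sd_blank=0.03, slope=0.7)        # in concentration units
recovery = spike_recovery(found=110.0, native=10.0, spiked=100.0)
```

    By construction LOQ/LOD is always 10/3.3, and a spiked sample whose measured increment equals the spiked amount gives exactly 100% recovery.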

  15. Participatory simulation in hospital work system design

    DEFF Research Database (Denmark)

    Andersen, Simone Nyholm

    When ergonomic considerations are integrated into the design of work systems, both overall system performance and employee well-being improve. A central part of integrating ergonomics in work system design is to benefit from employees' knowledge of existing work systems. Participatory simulation (PS) is a method to access employee knowledge; namely, employees are involved in the simulation and design of their own future work systems through the exploration of models representing work system designs. However, only a few studies have investigated PS and the elements of the method. Yet understanding the elements is essential when analyzing and planning PS in research and practice. This PhD study investigates PS and the method elements in the context of the Danish hospital sector, where PS is applied in the renewal and design of public hospitals and the work systems within the hospitals...

  16. Significant vertical phase separation in solvent-vapor-annealed poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) composite films leading to better conductivity and work function for high-performance indium tin oxide-free optoelectronics.

    Science.gov (United States)

    Yeo, Jun-Seok; Yun, Jin-Mun; Kim, Dong-Yu; Park, Sungjun; Kim, Seok-Soon; Yoon, Myung-Han; Kim, Tae-Wook; Na, Seok-In

    2012-05-01

    In the present study, a novel polar-solvent vapor annealing (PSVA) treatment was used to induce a significant structural rearrangement in poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) films in order to improve their electrical conductivity and work function. The effects of polar-solvent vapor annealing on PEDOT:PSS were systematically compared with those of a conventional solvent additive method (SAM) and investigated in detail by analyzing the changes in conductivity, morphology, top and bottom surface composition, PEDOT chain conformation, and work function. The results confirmed that PSVA induces significant phase separation between excess PSS and PEDOT chains and a spontaneous formation of a highly enriched PSS layer on the top surface of the PEDOT:PSS polymer blend, which in turn leads to better 3-dimensional connections between the conducting PEDOT chains and higher work function. The resultant PSVA-treated PEDOT:PSS anode films exhibited a significantly enhanced conductivity of up to 1057 S/cm and a tunable high work function of up to 5.35 eV. The PSVA-treated PEDOT:PSS films were employed as transparent anodes in polymer light-emitting diodes (PLEDs) and polymer solar cells (PSCs). The cell performances of organic optoelectronic devices with the PSVA-treated PEDOT:PSS anodes were further improved due to the significant vertical phase separation and the self-organized PSS top surface in PSVA-treated PEDOT:PSS films, which can increase the anode conductivity and work function and allow the direct formation of a functional buffer layer between the active layer and the polymeric electrode. The results of the present study will allow better use and understanding of polymeric-blend materials and will further advance the realization of high-performance indium tin oxide (ITO)-free organic electronics.

  17. High Performance Space Pump Project

    Data.gov (United States)

    National Aeronautics and Space Administration — PDT is proposing a High Performance Space Pump based upon an innovative design using several technologies. The design will use a two-stage impeller, high temperature...

  18. CLUPI, a high-performance imaging system on the ESA-NASA rover of the 2018 ExoMars mission to discover biofabrics on Mars

    Science.gov (United States)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; De Sanctis, M. C.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.; Barnes, D.

    2012-04-01

    The scientific objectives of the ESA-NASA rover of the 2018 mission of the ExoMars Programme are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, textures, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ESA-NASA Rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized smart assembly in titanium that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, MarsExpress and Smart-1 missions… Because the main science objective of ExoMars concerns the search for life, whose traces on Mars are likely to be cryptic, close-up observation of the rocks and granular regolith will be critical to the decision as to whether to drill and sample the nearby underlying materials. Thus, CLUPI is the essential final step in the choice of drill site. Not only are CLUPI's observations of rock outcrops important in themselves; they also serve other purposes. CLUPI could observe the placement of the drill head. It will also be able to observe the fines that come out of the drill hole, including any colour stratification linked to lithological changes with depth. Finally, CLUPI will provide detailed observation of the surface of the core drilled materials when they are in the sample drawer, at a spatial resolution of 15 micrometers/pixel in color. The close-up imager CLUPI on the ESA-NASA rover of the 2018 mission will be described together with its capabilities to provide important information significantly contributing to the understanding of the geological environment and could

  19. Teacher Accountability at High Performing Charter Schools

    Science.gov (United States)

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  20. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

    Energy Technology Data Exchange (ETDEWEB)

    Sickinger, D.; Van Geet, O.; Ravenscroft, C.

    2014-11-01

    In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. This system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling was included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control is included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no necessary maintenance and no leaks.
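
    The heat-capture percentages above come from comparing the heat carried off by the coolant loop with the server power draw. A back-of-the-envelope sketch, assuming a water-like coolant (cp ≈ 4186 J/(kg·K), density ≈ 1 kg/L); the flow rate, temperatures, and server power below are invented for illustration, not NREL's measurements:

```python
def liquid_heat_capture_fraction(flow_lpm, t_in_c, t_out_c, server_power_w):
    """Fraction of server power carried away by the coolant,
    from Q = m_dot * cp * dT for a water-like coolant."""
    m_dot = flow_lpm / 60.0                      # kg/s, assuming ~1 kg/L
    q_liquid = m_dot * 4186.0 * (t_out_c - t_in_c)  # W removed by the loop
    return q_liquid / server_power_w

# Illustrative numbers: 1.5 L/min loop, 27 °C in / 32 °C out, 800 W server.
frac = liquid_heat_capture_fraction(1.5, 27.0, 32.0, 800.0)
```

    With these invented numbers the loop carries roughly two-thirds of the server power, the same order as the 64% capture reported above; the remainder leaves as air heat, which is why including memory cooling or slowing fans raises the captured share.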

  1. HiPTI - High Performance Thermal Insulation, Annex 39 to IEA/ECBCS-Implementing Agreement. Vacuum insulation in the building sector. Systems and applications

    Energy Technology Data Exchange (ETDEWEB)

    Binz, A.; Moosmann, A.; Steinke, G.; Schonhardt, U.; Fregnan, F. [Fachhochschule Nordwestschweiz (FHNW), Muttenz (Switzerland); Simmler, H.; Brunner, S.; Ghazi, K.; Bundi, R. [Swiss Federal Laboratories for Materials Testing and Research (EMPA), Duebendorf (Switzerland); Heinemann, U.; Schwab, H. [ZAE Bayern, Wuerzburg (Germany); Cauberg, H.; Tenpierik, M. [Delft University of Technology, Delft (Netherlands); Johannesson, G.; Thorsell, T. [Royal Institute of Technology (KTH), Stockholm (Sweden); Erb, M.; Nussbaumer, B. [Dr. Eicher und Pauli AG, Basel and Bern (Switzerland)

    2005-07-01

    This final report on vacuum insulation panels (VIP) presents and discusses the work done under IEA/Energy Conservation in Buildings and Community Systems (ECBCS) Annex 39, subtask B on the basis of a wide selection of reports from practice. The report shows how the building trade deals with this new material today, the experience gained and the conclusions drawn from this work. As well as presenting recommendations for the practical use of VIP, the report also addresses questions regarding the effective insulation values to be expected with current VIP, whose insulation performance is stated as being a factor of five to eight times better than conventional insulation. The introduction of this novel material in the building trade is discussed. Open questions and risks are examined. The fundamentals of vacuum insulation panels are discussed and the prerequisites, risks and optimal application of these materials in the building trade are examined.

  2. High performance computing at Sandia National Labs

    Energy Technology Data Exchange (ETDEWEB)

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  3. A New Automated Method to Analyze Urinary 8-Hydroxydeoxyguanosine by a High-performance Liquid Chromatography-Electrochemical Detector System

    OpenAIRE

    Hiroshi, Kasai; Department of Environmental Oncology, Institute of Industrial Ecological Sciences, University of Occupational and Environmental Health

    2003-01-01

    A new method was developed to analyze urinary 8-hydroxydeoxyguanosine (8-OH-dG) by high-performance liquid chromatography (HPLC) coupled to an electrochemical detector (ECD). This method is unique because (i) urine is first fractionated by anion exchange chromatography (polystyrene-type resin with quaternary ammonium group, sulfate form) before analysis by reverse phase chromatography; and (ii) the 8-OH-dG fraction in the first HPLC is precisely and automatically collected based on the added r...

  4. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors, on the growth and field emission properties of carbon nanotubes and semiconducting nanowires, on high performance thermoelectric materials, and on other interesting materials. As a result of this research, we have published 104 papers and have trained six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  5. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  6. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware

  7. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties involved in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  8. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
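
    The phase-based DVFS idea can be caricatured as picking, for each communication-dominated phase, the lowest available frequency whose predicted slowdown stays within a tolerance. This is a toy model only (it assumes the compute part scales uniformly as 1/f and ignores switching overheads and throttling, which the work above does account for); the frequency values are invented:

```python
def pick_frequency(freqs_ghz, comm_fraction, max_slowdown=0.05):
    """Return the lowest CPU frequency whose predicted phase slowdown
    stays within max_slowdown. Simple model: only the compute part
    (1 - comm_fraction) of the phase scales with 1/f; communication
    stalls are insensitive to CPU frequency."""
    f_max = max(freqs_ghz)
    for f in sorted(freqs_ghz):                 # try lowest (cheapest) first
        slowdown = (1.0 - comm_fraction) * (f_max / f - 1.0)
        if slowdown <= max_slowdown:
            return f
    return f_max

# A phase that is 90% communication tolerates a deep frequency drop;
# a 50/50 phase must stay near the top frequency.
f_comm_heavy = pick_frequency([1.2, 1.6, 2.0, 2.4], comm_fraction=0.9)
f_balanced = pick_frequency([1.2, 1.6, 2.0, 2.4], comm_fraction=0.5)
```

    The design choice this illustrates is the one in the abstract: energy is saved precisely where the CPU would otherwise idle in communication stalls, so the performance loss can be bounded in advance.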

  9. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  10. High-performance sports medicine.

    Science.gov (United States)

    Speed, Cathy

    2013-02-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition. The physician has a broad remit and acts as a 'medical guardian' to optimise health while minimising risks. This review describes this interesting field of medicine, its unique challenges and priorities for the physician in delivering best healthcare.

  11. High Performance Tools And Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. This first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which these tools and technologies can be used to aid software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environment.

  12. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  13. SISYPHUS: A high performance seismic inversion factory

    Science.gov (United States)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated into the solution; work on interfacing with SPECFEM3D is in progress. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  14. Design and implementation of an automated liquid-phase microextraction-chip system coupled on-line with high performance liquid chromatography

    DEFF Research Database (Denmark)

    Li, Bin; Petersen, Nickolaj J.; Payán, María D Ramos

    2014-01-01

    An automated liquid-phase microextraction (LPME) device in a chip format has been developed and coupled directly to high performance liquid chromatography (HPLC). A 10-port 2-position switching valve was used to hyphenate the LPME-chip with the HPLC autosampler, and to collect the extracted… The composition of the supported liquid membrane (SLM) and carrier was optimized in order to achieve reasonable extraction performance for all five alkaloids. With 1-octanol as SLM solvent and with 25 mM sodium octanoate as anionic carrier, extraction recoveries for the different opium alkaloids ranged between… The repeatability was within 5.0-10.8% (RSD). The membrane liquid in the LPME-chip was regenerated automatically after every third injection. With this procedure the liquid membrane in the LPME-chip was stable for 3-7 days of continuous operation, depending on the complexity of the sample solutions. With this LPME…

  15. Bidirectional Frontoparietal Oscillatory Systems Support Working Memory.

    Science.gov (United States)

    Johnson, Elizabeth L; Dewar, Callum D; Solbakk, Anne-Kristin; Endestad, Tor; Meling, Torstein R; Knight, Robert T

    2017-06-19

    The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC) [1-8]. However, more recent work has implicated posterior cortical regions [9-12], suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions [13], independently subserve communications both to and from PFC, uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory.

  16. DOE research in utilization of high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  17. Work Function Calculation For Hafnium- Barium System

    Directory of Open Access Journals (Sweden)

    K.A. Tursunmetov

    2015-08-01

    Full Text Available The adsorption process of barium atoms on hafnium is considered. A structural model of the system is presented, and the dependence of the work function on the coating is derived from a calculation of the ion-dipole interactions in the system.

  18. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  19. High Performance Design of 100Gb/s DPSK Optical Transmitter

    DEFF Research Database (Denmark)

    Das, Bhagwan; Abdullah, M.F.L; Shah, Nor Shahihda Mohd

    2016-01-01

    High performance communication systems require high performance devices for exchanging information at a faster rate. These devices face several challenges, e.g. bandwidth limitations, power limitations, and design limitations. Existing techniques fall short of providing high-performance output while maintaining the actual parameters of the device. In this work, a high-performance 100Gb/s optical DPSK transmitter design is realized in a Field Programmable Gate Array (FPGA) using a time constraint technique. Before applying the proposed technique, the FPGA's actual frequency was 0.2 GHz. This high performance design of the optical transmitter has zero timing error, a low timing score and high slack time due to synchronization between the input data and the clock frequency. It is also determined that the timing score is reduced by 99% in comparison with the 1 GHz frequency, which has high jitter, high timing error...

  20. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  1. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  2. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  3. Working Memory Systems in the Rat.

    Science.gov (United States)

    Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D

    2016-02-08

    A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. High-performance workplace practices in nursing homes: an economic perspective.

    Science.gov (United States)

    Bishop, Christine E

    2014-02-01

    To develop implications for research, practice and policy, selected economics and human resources management research literature was reviewed to compare and contrast nursing home culture change work practices with high-performance human resource management systems in other industries. The organization of nursing home work under culture change has much in common with high-performance work systems, which are characterized by increased autonomy for front-line workers, self-managed teams, flattened supervisory hierarchy, and the aspiration that workers use specific knowledge gained on the job to enhance quality and customization. However, successful high-performance work systems also entail intensive recruitment, screening, and ongoing training of workers, and compensation that supports selective hiring and worker commitment; these features are not usual in the nursing home sector. Thus, despite many parallels with high-performance work systems, culture change work systems are missing essential elements: those that require higher compensation. If purchasers, including public payers, were willing to pay for customized, resident-centered care, productivity gains could be shared with workers, and the nursing home sector could move from a low-road to a high-road employment system.

  5. EDITORIAL: High performance under pressure High performance under pressure

    Science.gov (United States)

    Demming, Anna

    2011-11-01

    nanoelectromechanical systems. Researchers in China exploit the coupling between piezoelectric and semiconducting properties of ZnO in an optimised diode device design [6]. They used a Schottky rather than an ohmic contact to depress the off current. In addition they used ZnO nanobelts that have dominantly polar surfaces instead of [0001] ZnO nanowires to enhance the on current under the small applied forces obtained by using an atomic force microscopy tip. The nanobelts have potential for use in random access memory devices. Much of the success in applying piezoresistivity in device applications stems from a deepening understanding of the mechanisms behind the process. A collaboration of researchers in the USA and China have proposed a new criterion for identifying the carrier type of individual ZnO nanowires based on the piezoelectric output of a nanowire when it is mechanically deformed by a conductive atomic force microscopy tip in contact mode [7]. The p-type/n-type shell/core nanowires give positive piezoelectric outputs, while the n-type nanowires produce negative piezoelectric outputs. In this issue Zhong Lin Wang and colleagues in Italy and the US report theoretical investigations into the piezoresistive behaviour of ZnO nanowires for energy harvesting. The work develops previous research on the ability of vertically aligned ZnO nanowires under uniaxial compression to power a nanodevice, in particular a pH sensor [8]. Now the authors have used finite element simulations to study the system. Among their conclusions they find that, for typical geometries and donor concentrations, the length of the nanowire does not significantly influence the maximum output piezopotential because the potential mainly drops across the tip. This has important implications for low-cost, CMOS- and microelectromechanical-systems-compatible fabrication of nanogenerators. 
The simulations also reveal the influence of the dielectric surrounding the nanowire on the output piezopotential, especially for

  6. High Performance Perovskite Solar Cells

    Science.gov (United States)

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction. PMID:27774402

  7. High Performance Perovskite Solar Cells.

    Science.gov (United States)

    Tong, Xin; Lin, Feng; Wu, Jiang; Wang, Zhiming M

    2016-05-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole-transporting materials (HTMs) and electron-transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  8. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
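
    The energy-dissipation property emphasized in this abstract can be illustrated with a far simpler scheme than the authors' Taylor-series method: a minimal explicit-Euler sketch of the 1-D Allen-Cahn gradient flow with periodic boundaries. All parameters and the explicit time stepping here are illustrative choices, not the discretization the work actually proposes (which is second-order and energy-stable by construction).

```python
import math

def ac_step(phi, dt, eps, h):
    """One explicit Euler step of the 1-D Allen-Cahn equation
    phi_t = eps^2 * phi_xx - (phi^3 - phi), periodic boundaries."""
    n = len(phi)
    new = [0.0] * n
    for i in range(n):
        lap = (phi[(i + 1) % n] - 2.0 * phi[i] + phi[i - 1]) / h**2
        new[i] = phi[i] + dt * (eps**2 * lap - (phi[i]**3 - phi[i]))
    return new

def energy(phi, eps, h):
    """Discrete Ginzburg-Landau free energy; the gradient flow dissipates it."""
    n = len(phi)
    e = 0.0
    for i in range(n):
        grad = (phi[(i + 1) % n] - phi[i]) / h
        e += (0.5 * eps**2 * grad**2 + 0.25 * (phi[i]**2 - 1.0)**2) * h
    return e

# Demo: with a small enough step, the free energy decreases monotonically.
n, h, eps, dt = 32, 1.0 / 32, 0.1, 1e-4
phi = [math.cos(2.0 * math.pi * i * h) for i in range(n)]
e0 = energy(phi, eps, h)
for _ in range(200):
    phi = ac_step(phi, dt, eps, h)
assert energy(phi, eps, h) < e0  # numerical energy dissipation
```

    The point of energy-stable schemes like the one in the abstract is that this dissipation holds for any step size, whereas the explicit sketch above only dissipates for sufficiently small `dt`.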

  9. Fundamentals of Modeling, Data Assimilation, and High-performance Computing

    Science.gov (United States)

    Rood, Richard B.

    2005-01-01

    This lecture will introduce the concepts of modeling, data assimilation and high-performance computing as they relate to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.

  10. Charter for Systems Engineer Working Group

    Science.gov (United States)

    Suffredini, Michael T.; Grissom, Larry

    2015-01-01

    This charter establishes the International Space Station Program (ISSP) Mobile Servicing System (MSS) Systems Engineering Working Group (SEWG). The MSS SEWG is established to provide a mechanism for Systems Engineering for the end-to-end MSS function. The MSS end-to-end function includes the Space Station Remote Manipulator System (SSRMS), the Mobile Remote Servicer (MRS) Base System (MBS), Robotic Work Station (RWS), Special Purpose Dexterous Manipulator (SPDM), Video Signal Converters (VSC), and Operations Control Software (OCS), the Mobile Transporter (MT), and by interfaces between and among these elements, and United States On-Orbit Segment (USOS) distributed systems, and other International Space Station Elements and Payloads, (including the Power Data Grapple Fixtures (PDGFs), MSS Capture Attach System (MCAS) and the Mobile Transporter Capture Latch (MTCL)). This end-to-end function will be supported by the ISS and MSS ground segment facilities. This charter defines the scope and limits of the program authority and document control that is delegated to the SEWG and it also identifies the panel core membership and specific operating policies.

  11. A high speed, portable, multi-function, weigh-in-motion (WIM) sensing system and a high performance optical fiber Bragg grating (FBG) demodulator

    Science.gov (United States)

    Zhang, Hongtao; Wei, Zhanxiong; Fan, Lingling; Yang, Shangming; Wang, Pengfei; Cui, Hong-Liang

    2010-04-01

    A high speed, portable, multi-function WIM sensing system based on Fiber Bragg Grating (FBG) technology is reported in this paper. This system is developed to measure the total weight, the distribution of weight of vehicle in motion, the distance of wheel axles and the distance between left and right wheels. In this system, a temperature control system and a real-time compensation system are employed to eliminate the drifts of optical fiber Fabry-Pérot tunable filter. Carbon Fiber Laminated Composites are used in the sensor heads to obtain high reliability and sensitivity. The speed of tested vehicles is up to 20 mph, the full scope of measurement is 4000 lbs, and the static resolution of sensor head is 20 lbs. The demodulator has high speed (500 Hz) data collection, and high stability. The demodulator and the light source are packed into a 17'' rack style enclosure. The prototype has been tested respectively at Stevens' campus and Army base. Some experiences of avoiding the pitfalls in developing this system are also presented in this paper.

  12. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF)

    Energy Technology Data Exchange (ETDEWEB)

    1992-11-01

    A concept for an advanced coal-fired combined-cycle power generating system is currently being developed. The first phase of this three-phase program consists of conducting the necessary research and development to define the system, evaluate the economic and technical feasibility of the concept, and prepare an R&D plan to develop the concept further. Foster Wheeler Development Corporation is leading a team of companies involved in this effort. The system proposed to meet these goals is a combined-cycle system where air for a gas turbine is indirectly heated to approximately 1800°F in furnaces fired with coal-derived fuels and then directly heated in a natural-gas-fired combustor up to about 2400°F. The system is based on a pyrolyzing process that converts the coal into a low-Btu fuel gas and char. The fuel gas is a relatively clean fuel, and it is fired to heat tube surfaces that are susceptible to corrosion and problems from ash deposition. In particular, the high-temperature air heater tubes, which will need to be a ceramic material, will be located in a separate furnace or region of a furnace that is exposed to combustion products from the low-Btu fuel gas only. A simplified process flow diagram is shown.

  13. Cognitive System Engineering Approach to Design of Work Support Systems

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1995-01-01

    The problem of designing work support systems for flexible, dynamic work environments is discussed, and a framework for analysis of work in terms of behavior-shaping constraints is described. The application of 'ecological interfaces' presenting to the user a map of the relational structure of the work space is advocated, from the thesis that a map is a better guide for discretionary tasks than a route instruction. For the same reason, support of system design is proposed in terms of maps of the design territory, rather than in terms of guidelines...

  14. Working with boundaries in systems psychodynamic consulting

    Directory of Open Access Journals (Sweden)

    Henk Struwig

    2012-01-01

    Full Text Available Orientation: The researcher described the systems psychodynamics of boundary management in organisations. The data showed how effective boundary management leads to good holding environments that, in turn, lead to containing difficult emotions. Research purpose: The purpose of the research was to produce a set of theoretical assumptions about organisational boundaries and boundary management in organisations and, from these, to develop a set of hypotheses as a thinking framework for practising consulting psychologists when they work with boundaries from a systems psychodynamic stance. Motivation for the study: The researcher used the belief that organisational boundaries reflect the essence of organisations. Consulting to boundary managers could facilitate a deep understanding of organisational dynamics. Research design, approach and method: The researcher followed a case study design. He used systems psychodynamic discourse analysis. It led to six working hypotheses. Main findings: The primary task of boundary management is to hold the polarities of integration and differentiation and not allow the system to become fragmented or overly integrated. Boundary management is a primary task and an ongoing activity of entire organisations. Practical/managerial implications: Organisations should work actively at effective boundary management and at balancing integration and differentiation. Leaders should become aware of how effective boundary management leads to good holding environments that, in turn, lead to containing difficult emotions in organisations. Contribution/value-add: The researcher provided a boundary-consulting framework in order to assist consultants to balance the conceptual with the practical when they consult.

  15. Ocean thermal energy conversion (OTEC) power system development utilizing advanced, high-performance heat transfer techniques. Volume 1. Conceptual design report

    Energy Technology Data Exchange (ETDEWEB)

    1978-05-12

    The objective of this project is the development of a preliminary design for a full-sized, closed cycle, ammonia power system module for the 100 MWe OTEC Demonstration Plant. In turn, this Demonstration Plant is to demonstrate, by 1984, the operation and performance of an ocean thermal power plant having sufficiently advanced heat exchanger design to project economic viability for commercial utilization in the late 1980s and beyond. Included in this power system development are the preliminary designs for a proof-of-concept pilot plant and test article heat exchangers which are scaled in such a manner as to support a logically sequential, relatively low-cost development of the full-scale power system module. The conceptual designs are presented for the Demonstration Plant power module, the proof-of-concept pilot plant, and for a pair of test article heat exchangers. Costs associated with the design, development, fabrication, checkout, delivery, installation, and operation are included. The accompanying design and producibility studies on the full-scale power system module project the performance/economics for the commercial plant. This section of the report describes the full-size power system module, and summarizes the design parameters and associated costs for the Demonstration Plant module (prototype) and projects costs for commercial plants in production. The material presented is directed primarily toward the surface platform/ship basic reference hull designated for use during conceptual design; however, other containment vessels were considered during the design effort so that the optimum power system would not be unduly influenced or restricted. (WHK)

  16. Development of a fast high performance liquid chromatographic screening system for eight antidiabetic drugs by an improved methodology of in-silico robustness simulation.

    Science.gov (United States)

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Haddad, Ghada M

    2015-06-19

    Robustness of RP-HPLC methods is a crucial method quality attribute which has gained increasing interest throughout the efforts to apply quality-by-design concepts in analytical methodology. Improving design-space modeling approaches to represent method robustness was the goal of many previous works. Modeling of design spaces with regard to method robustness fulfils the quality-by-design essence of ensuring method validity throughout the design space. The current work aimed to describe an improvement to robustness modeling of design spaces in the context of RP-HPLC method development for screening of eight antidiabetic drugs. The described improvement consisted of in-silico simulation of practical robustness testing procedures, and thus had the advantage of modeling design spaces with higher confidence in the estimates of method robustness. The proposed in-silico robustness test was performed as a full factorial design of deliberate shifts of the simulated method conditions for each predicted point in the knowledge space, with modeling error propagation. The design space was then calculated as the zones exceeding a threshold probability of passing the simulated robustness testing. Potential design spaces were mapped for three different stationary phases as a function of gradient elution parameters, pH and ternary solvent ratio. A robust and fast separation of the eight compounds within less than 6 min was selected and confirmed through experimental robustness testing. The effectiveness of this approach for defining design spaces with ensured robustness and desired objectives was demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
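
    The in-silico robustness test described in this abstract can be sketched generically: for each candidate operating point, enumerate a full factorial of deliberate shifts around it and accept the point only if a sufficient fraction of the shifted conditions still meets the resolution criterion. The surrogate `predicted_resolution` model, the factor choices (pH, gradient time) and the shift sizes below are hypothetical placeholders for illustration, not the authors' retention models.

```python
import itertools

def predicted_resolution(pH, grad_time):
    # Hypothetical surrogate model of the critical-pair resolution,
    # standing in for the chromatographic model fitted in practice.
    return 2.0 - 0.4 * abs(pH - 3.0) - 0.1 * abs(grad_time - 20.0)

def robust(pH, grad_time,
           shifts=((-0.1, 0.0, 0.1), (-1.0, 0.0, 1.0)),
           rs_min=1.5, pass_fraction=1.0):
    """Simulated robustness test: full factorial of deliberate shifts
    around the nominal point; the point is 'robust' when at least
    pass_fraction of the shifted conditions still meet rs_min."""
    trials = list(itertools.product(shifts[0], shifts[1]))
    ok = sum(predicted_resolution(pH + dp, grad_time + dt) >= rs_min
             for dp, dt in trials)
    return ok / len(trials) >= pass_fraction

# A point well inside the design space survives all shifts;
# a marginal point fails once its shifted neighbours drop below rs_min.
assert robust(3.0, 20.0)
assert not robust(4.2, 20.0)
```

    Mapping `robust` over a grid of candidate conditions yields the kind of robustness-filtered design-space map the abstract describes.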

  17. Development and application of a specially designed heating system for temperature-programmed high-performance liquid chromatography using subcritical water as the mobile phase.

    Science.gov (United States)

    Teutenberg, T; Goetze, H-J; Tuerk, J; Ploeger, J; Kiffmeyer, T K; Schmidt, K G; Kohorst, W gr; Rohe, T; Jansen, H-D; Weber, H

    2006-05-05

    A specially designed heating system for temperature-programmed HPLC was developed based on experimental measurements of eluent temperature inside a stainless steel capillary using a very thin thermocouple. The heating system can be operated at temperatures up to 225 degrees C and consists of a preheating, a column heating and a cooling unit. Fast cycle times after a temperature gradient can be realized by an internal silicone oil bath which cools down the preheating and column heating unit. Long-term thermal stability of a polybutadiene-coated zirconium dioxide column has been evaluated using a tubular oven in which the column was placed. The packing material was stable after 50h of operation at 185 degrees C. A mixture containing four steroids was separated at ambient conditions using a mobile phase of 25% acetonitrile:75% deionized water and a mobile phase of pure deionized water at 185 degrees C using the specially designed heating system and the PBD column. Analysis time could be drastically reduced from 17 min at ambient conditions and a flow rate of 1 mL/min to only 1.2 min at 185 degrees C and a flow rate of 5 mL/min. At these extreme conditions, no thermal mismatch was observed and peaks were not distorted, thus underlining the performance of the developed heating system. Temperature programming was performed by separating cytostatic and antibiotic drugs with a temperature gradient using only water as the mobile phase. In contrast to an isocratic elution of this mixture at room temperature, overall analysis time could be reduced two-fold from 20 to 10 min.

  18. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    Science.gov (United States)

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as a reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For catalysis of the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The obtained fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly increased the efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min−1, while the TOF in the conventional batch reaction was 0.643 min−1. The catalytic fibers also showed good recyclability and can be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical and scalable, can also be applied to coating different microfibers and loading other noble metal nanoparticles, and is amenable to automated industrial processes. PMID:26902657
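
    Turnover frequency, the figure of merit compared in this abstract (1.587 min−1 fixed-bed vs. 0.643 min−1 batch), is simply moles of substrate converted per mole of catalyst per unit time. A one-line helper makes the definition explicit; the catalyst loading and conversion in the example are hypothetical figures for illustration, not values from the paper.

```python
def turnover_frequency(mol_converted, mol_catalyst, minutes):
    """TOF in min^-1: substrate turnovers per mole of catalyst per minute."""
    return mol_converted / (mol_catalyst * minutes)

# Hypothetical example: 6.0e-5 mol substrate converted over 1.0e-6 mol Pd in 30 min.
tof = turnover_frequency(6.0e-5, 1.0e-6, 30.0)
assert abs(tof - 2.0) < 1e-12  # 2.0 turnovers per Pd site per minute
```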

  19. A radio-high-performance liquid chromatography dual-flow cell gamma-detection system for on-line radiochemical purity and labeling efficiency determination

    DEFF Research Database (Denmark)

    Lindegren, S; Jensen, H; Jacobsson, L

    2014-01-01

    into the well of a NaI(Tl) detector. The radio-HPLC flow was directed from the injector to the reference cell allowing on-line detection of the total injected sample activity prior to entering the HPLC column. The radioactivity eluted from the column was then detected in the analytical cell. In this way, the sample will act as its own standard, a feature enabling on-line quantification of the processed radioactivity passing through the system. All data were acquired on-line via an analog signal from a rate meter using chromatographic software. The radiochemical yield and recovery could be simply...

  20. Facilitating NASA's Use of GEIA-STD-0005-1, Performance Standard for Aerospace and High Performance Electronic Systems Containing Lead-Free Solder

    Science.gov (United States)

    Plante, Jeannete

    2010-01-01

    GEIA-STD-0005-1 defines the objectives of, and requirements for, documenting processes that assure customers and regulatory agencies that AHP electronic systems containing lead-free solder, piece parts, and boards will satisfy the applicable requirements for performance, reliability, airworthiness, safety, and certifiability throughout the specified life of performance. It communicates requirements for a Lead-Free Control Plan (LFCP) to assist suppliers in the development of their own Plans. The Plan documents the Plan Owner's (supplier's) processes that assure their customer and all other stakeholders that the Plan Owner's products will continue to meet their requirements. The presentation reviews quality assurance requirements traceability and LFCP template instructions.

  1. NASA's Advanced Solar Sail Propulsion System for Low-Cost Deep Space Exploration and Science Missions that Use High Performance Rollable Composite Booms

    Science.gov (United States)

    Fernandez, Juan M.; Rose, Geoffrey K.; Younger, Casey J.; Dean, Gregory D.; Warren, Jerry E.; Stohlman, Olive R.; Wilkie, W. Keats

    2017-01-01

    Several low-cost solar sail technology demonstrator missions are under development in the United States. However, the mass-saving benefits that composites can offer to such a mass-critical spacecraft architecture have not yet been realized. This is due to the lack of suitable composite booms that can fit inside CubeSat platforms and ultimately be readily scalable to much larger sizes, where their benefits are fully realized. With this aim, a new effort focused on developing scalable rollable composite booms for solar sails and other deployable structures has begun. Seven-meter booms used to deploy a 90 m2 class solar sail that can fit inside a 6U CubeSat have already been developed. The envisioned NASA road map to a low-cost solar sail capability demonstration consists of increasing the size of these composite booms to enable sailcraft with a reflective area of up to 2000 m2 housed aboard small satellite platforms. This paper presents a solar sail system initially conceived to serve as a risk-reduction alternative to the Near Earth Asteroid (NEA) Scout baseline design but recently slightly redesigned and proposed for follow-on missions. The features of the booms and various deployment mechanisms for the booms and sail, as well as ground support equipment used during testing, are introduced. The results of structural analyses predict the performance of the system under microgravity conditions. Finally, the results of the functional and environmental testing campaign carried out are shown.

  2. The monogroove high performance heat pipe

    Science.gov (United States)

    Alario, J.; Haslett, R.; Kosson, R.

    1981-06-01

    The development of the monogroove heat pipe, a fundamentally new high-performance device suitable for multi-kilowatt space radiator heat-rejection systems, is reported. The design separates heat transport and transfer functions, so that each can be separately optimized to yield heat transport capacities on the order of 25 kW/m. Test versions of the device have proven the concept of heat transport capacity control by pore dimensions and the permeability of the circumferential wall wick structure, which together render it insensitive to tilt. All cases tested were for localized, top-side heat input and cooling and produced results close to theoretical predictions.

  3. High-performance solar collector

    Science.gov (United States)

    Beekley, D. C.; Mather, G. R., Jr.

    1979-01-01

    Evacuated all-glass concentric tube collector using air or liquid transfer mediums is very efficient at high temperatures. Collector can directly drive existing heating systems that are presently driven by fossil fuel with relative ease of conversion and less expense than installation of complete solar heating systems.

  4. High Performance Database Management for Earth Sciences

    Science.gov (United States)

    Rishe, Naphtali; Barton, David; Urban, Frank; Chekmasov, Maxim; Martinez, Maria; Alvarez, Elms; Gutierrez, Martha; Pardo, Philippe

    1998-01-01

    The High Performance Database Research Center at Florida International University is completing the development of a highly parallel database system based on the semantic/object-oriented approach. This system provides exceptional usability and flexibility. It allows shorter application design and programming cycles and gives the user control via an intuitive information structure. It empowers the end user to pose complex ad hoc decision-support queries. Superior efficiency is provided through a high level of optimization, which is transparent to the user. For many applications, it achieves a manifold reduction in storage size. The system is operable via Internet browsers. It will be used in the NASA Applications Center program to store remote sensing data, as well as for Earth science applications.

  5. Strategy Guideline: Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial rather than cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  6. High performance HRM: NHS employee perspectives.

    Science.gov (United States)

    Hyde, Paula; Sparrow, Paul; Boaden, Ruth; Harris, Claire

    2013-01-01

    The purpose of this paper is to examine National Health Service (NHS) employee perspectives of how high performance human resource (HR) practices contribute to their performance. The paper draws on an extensive qualitative study of the NHS. A novel two-part method was used; the first part used focus group data from managers to identify high-performance HR practices specific to the NHS. Employees then conducted a card-sort exercise where they were asked how or whether the practices related to each other and how each practice affected their work. In total, 11 high performance HR practices relevant to the NHS were identified. Also identified were four reactions to a range of HR practices, which the authors developed into a typology according to anticipated beneficiaries (personal gain, organisation gain, both gain, and no-one gains). Employees were able to form their own patterns (mental models) of performance contribution for a range of HR practices (60 interviewees produced 91 groupings). These groupings indicated three bundles particular to the NHS (professional development, employee contribution, and NHS deal). These mental models indicate employee perceptions about how health services are organised and delivered in the NHS and illustrate the extant mental models of health care workers. As health services are rearranged and financial pressures begin to bite, these mental models will affect employee reactions to changes both positively and negatively. The novel method allows for identification of mental models that explain how NHS workers understand service delivery. It also delineates the complex and varied relationships between HR practices and individual performance.

  7. High Performance Programmable Transceiver Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-to-ground communications have long been stuck in a prehistoric era of telemetry systems from the throughput and hardware availability perspective. From the...

  8. Turning High-Poverty Schools into High-Performing Schools

    Science.gov (United States)

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  9. Work Disability in Early Systemic Sclerosis

    DEFF Research Database (Denmark)

    Sandqvist, Gunnel; Hesselstrand, Roger; Petersson, Ingemar F

    2015-01-01

    OBJECTIVE: To study work disability (WD) with reference to levels of sick leave and disability pension in early systemic sclerosis (SSc). METHODS: Patients with SSc living in the southern part of Sweden with onset of their first non-Raynaud symptom between 2003 and 2009 and with a followup of 36...... months were included in a longitudinal study. Thirty-two patients (26 women, 24 with limited SSc) with a median age of 47.5 years (interquartile range 43-53) were identified. WD was calculated in 30-day intervals from 12 months prior to disease onset until 36 months after, presented as the prevalence...

  10. High performance computing applications in neurobiological research

    Science.gov (United States)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  11. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  12. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  13. High performance soft magnetic materials

    CERN Document Server

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  14. High-performance motor drives

    OpenAIRE

    Kazmierkowski, Marian P.; García Franquelo, Leopoldo; Rodríguez, José; Pérez, Marcelo; León Galván, José Ignacio

    2011-01-01

    This article reviews the present state and trends in the development of key parts of controlled induction motor drive systems: converter topologies, modulation methods, as well as control and estimation techniques. Two- and multilevel voltage-source converters, current-source converters, and direct converters are described. The main part of all the produced electric energy is used to feed electric motors, and the conversion of electrical power into mechanical power involves motors ranges from...

  15. Functional High Performance Financial IT

    DEFF Research Database (Denmark)

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report on HIPERFIT, a recently established strategic research center...... at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware. HIPERFIT...

  16. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  17. Incentive System in Hungarian High Performance Sport

    Directory of Open Access Journals (Sweden)

    Sterbenz Tamás

    2014-12-01

    This study will attempt to describe the role of existing incentives which have a significant effect on Hungarian sport's performance. The aim of the paper is to understand why a large gap has emerged between successful elite sports and the popular but underperforming spectacular sport. According to the concept of dual competition, in addition to sport results, the analyzed fields also concern competition for resources, particularly for the attention of supporters and sponsors. The methodology of the analysis is fundamentally economic in nature; however, qualitative methods are also given emphasis, as the analyzed topic has specific characteristics. Based on new institutional economics, the study presumes that the behavior of organizations is determined by the decisions of boundedly rational individuals, and highlights the significance of the created mechanisms and institutions.

  18. High Performance Photocatalytic Oxidation Reactor System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Pioneer Astronautics proposes a technology program for the development of an innovative photocatalytic oxidation reactor for the removal and mineralization of...

  19. Dual support ensuring high-energy supercapacitors via high-performance NiCo2S4@Fe2O3 anode and working potential enlarged MnO2 cathode

    Science.gov (United States)

    Jia, Ruyue; Zhu, Feng; Sun, Shuo; Zhai, Teng; Xia, Hui

    2017-02-01

    Development of high-energy and high-power asymmetric supercapacitors (ASCs) is still a great challenge due to the low specific capacitance of anode materials (carbon materials of about 100-200 F g-1) and the limited voltage window of conventional devices. Here, a hybrid NiCo2S4@Fe2O3 anode is paired with a MnO2 cathode whose working potential window is enlarged to 0-1.3 V vs. SCE for high-energy and high-power ASCs. The unique core-shell hierarchical nanoarchitecture of the hybrid NiCo2S4@Fe2O3 nanoneedle arrays not only provides a large surface area for charge storage but also facilitates fast charge transport in the electrode. Moreover, the extended potential window of the MnO2 cathode can effectively increase the device voltage of the as-assembled ASC up to 2.3 V, resulting in significantly increased energy density. The obtained ASC device can deliver a high volumetric energy density of 2.29 mWh cm-3 at 196 mW cm-3 and retain 1.08 mWh cm-3 at 2063 mW cm-3, providing new opportunities for developing high-performance ASCs.
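    The benefit of the enlarged 2.3 V window follows from the standard capacitor energy relation E = ½CV², under which stored energy grows quadratically with voltage. A minimal sketch; the device-level capacitance below is an assumption chosen only to land near the reported 2.29 mWh cm−3:

```python
# Supercapacitor energy relation E = 1/2 * C * V^2: energy density grows
# quadratically with cell voltage, which is why extending the cathode's
# working window matters. The capacitance value is an assumed placeholder.

def energy_density_mwh_per_cm3(capacitance_f_per_cm3, voltage_v):
    """Volumetric energy density in mWh/cm^3 from C (F/cm^3) and V (volts)."""
    joules_per_cm3 = 0.5 * capacitance_f_per_cm3 * voltage_v ** 2
    return joules_per_cm3 * 1000.0 / 3600.0  # 1 Wh = 3600 J

c = 3.1  # F/cm^3, assumed device-level capacitance
print(round(energy_density_mwh_per_cm3(c, 2.3), 2))  # ~2.28, near the reported 2.29
print(round(energy_density_mwh_per_cm3(c, 1.0), 2))  # the same C at 1.0 V stores far less
```

    Going from a 1.0 V to a 2.3 V window multiplies the stored energy by (2.3)² ≈ 5.3 at fixed capacitance, which is the leverage the extended MnO2 potential window provides.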

  20. How to create high-performing teams.

    Science.gov (United States)

    Lam, Samuel M

    2010-02-01

    This article discusses inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, the article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading from Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). Finally, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be.

  1. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  3. Highlighting High Performance: Clearview Elementary School, Hanover, Pennsylvania

    Energy Technology Data Exchange (ETDEWEB)

    2002-08-01

    Case study on high performance building features of Clearview Elementary School in Hanover, Pennsylvania. Clearview Elementary School in Hanover, Pennsylvania, is filled with natural light, not only in classrooms but also in unexpected, and traditionally dark, places like stairwells and hallways. The result is enhanced learning. Recent scientific studies conducted by the California Board for Energy Efficiency, involving 21,000 students, show test scores were 15% to 26% higher in classrooms with daylighting. Clearview's ventilation system also helps students and teachers stay healthy, alert, and focused on learning. The school's superior learning environment comes with annual average energy savings of about 40% over a conventional school. For example, with so much daylight, the school requires about a third less energy for electric lighting than a typical school. The school's innovative geothermal heating and cooling system uses the constant temperature of the Earth to cool and heat the building. The building and landscape designs work together to enhance solar heating in the winter, summer cooling, and daylighting all year long. Students and teachers have the opportunity to learn about high-performance design by studying their own school. At Clearview, the Hanover Public School District has shown that designing a school to save energy is affordable. Even with its many innovative features, the school's $6.35 million price tag is just $150,000 higher than average for elementary schools in Pennsylvania. Projected annual energy cost savings of approximately $18,000 mean a payback in 9 years. Reasonable construction costs demonstrate that other school districts can build schools that conserve energy, protect natural resources, and provide the educational and health benefits that come with high-performance buildings.
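    The payback figure quoted above is simple-payback arithmetic: the incremental construction cost divided by the annual energy-cost savings. A quick check of the case study's numbers:

```python
# Simple (undiscounted) payback: up-front cost premium divided by annual
# energy-cost savings, using the figures quoted in the case study.

def simple_payback_years(incremental_cost, annual_savings):
    """Years to recover an up-front premium from constant annual savings."""
    return incremental_cost / annual_savings

years = simple_payback_years(150_000, 18_000)
print(round(years, 1))  # 8.3
```

    The undiscounted result of about 8.3 years is consistent with the roughly 9-year payback cited once modest discounting or year-to-year variability in savings is allowed for.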

  4. Energy savings estimates and cost benefit calculations for high performance relocatable classrooms

    Energy Technology Data Exchange (ETDEWEB)

    Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.; Shendell, Derek G.; Fisk, Wlliam J.

    2003-12-01

    This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.
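    A statewide savings projection of this kind is, at its core, a roll-up of per-classroom savings across the building stock. A hypothetical sketch; the classroom count, baseline consumption, and savings fraction are illustrative assumptions, not figures from the HPCBS report:

```python
# Roll-up of per-classroom savings into a statewide total. Every number
# here (classroom count, baseline use, savings fraction) is a placeholder
# assumption for the sketch, not a result from the report.

def statewide_savings_kwh(n_classrooms, baseline_kwh_each, savings_fraction):
    """Annual statewide savings if each classroom saves the given fraction."""
    return n_classrooms * baseline_kwh_each * savings_fraction

print(statewide_savings_kwh(80_000, 3_000, 0.30))  # kWh/yr under these assumptions
```

    In practice the report's projections refine this by running the validated DOE2 model separately for each of the 16 California climate zones and weighting by the relocatable-classroom population in each zone, rather than applying one statewide savings fraction.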

  5. High-Performance Ducts in Hot-Dry Climates

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, Marc [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chitwood, Rick [National Renewable Energy Laboratory (NREL), Golden, CO (United States); German, Alea [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Weitzel, Elizabeth [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2015-07-30

    Duct thermal losses and air leakage have long been recognized as prime culprits in the degradation of heating, ventilating, and air-conditioning (HVAC) system efficiency. Both the U.S. Department of Energy’s Zero Energy Ready Home program and California’s proposed 2016 Title 24 Residential Energy Efficiency Standards require that ducts be installed within conditioned space or that other measures be taken to provide similar improvements in delivery effectiveness (DE). Pacific Gas & Electric Company commissioned a study to evaluate ducts in conditioned space and high-performance attics (HPAs) in support of the proposed codes and standards enhancements included in California’s 2016 Title 24 Residential Energy Efficiency Standards. The goal was to work with a select group of builders to design and install high-performance duct (HPD) systems, such as ducts in conditioned space (DCS), in one or more of their homes and to obtain test data to verify the improvement in DE compared to standard practice. Davis Energy Group (DEG) helped select the builders and led a team that provided information about HPD strategies to them. DEG also observed the construction process, completed testing, and collected cost data.

  6. High performance MEAs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-15

    The aim of the present project is, through modeling and material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. The project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials for use in PEMFC, as well as computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence a reduced catalyst loading for the same performance. The consortium has obtained significant research results and progress on new catalyst materials and substrates with promising enhanced performance, fabricating the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC, and HT-PEM applications. The novel approach taken in the modelling activities and their progress have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  7. Innovative Deep Throttling, High Performance Injector Concept Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Science and Technology Applications, LLC's (STA) vision for a versatile space propulsion system is a highly throttleable, high performance, and cost effective Liquid...

  8. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    AFRL-AFOSR-VA-TR-2016-0230. Final report: Architecture and Programming Models for High Performance Intensive Computation, XiaoMing Li, University of Delaware (grant FA9550-13-1-0213). The project focused on developing an efficient system architecture and software tools for building and running Dynamic Data Driven Application Systems (DDDAS).

  9. High-Performance, Low Environmental Impact Refrigerants

    Science.gov (United States)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  10. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF boundary that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large-volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  11. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  12. Understanding and Improving High-Performance I/O Subsystems

    Science.gov (United States)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

This research program was conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to the many important research findings for NASA and the prestigious publications, the program helped orient the doctoral research of two students toward parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful and helpful to MasPar, with whose technical management the P.I. has had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in 3 segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This third study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned 3 studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.

  13. High performance computing and communications program

    Science.gov (United States)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  14. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  15. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  16. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  17. Wearable Accelerometers in High Performance Jet Aircraft.

    Science.gov (United States)

    Rice, G Merrill; VanBrunt, Thomas B; Snider, Dallas H; Hoyt, Robert E

    2016-02-01

Wearable accelerometers have become ubiquitous in the fields of exercise physiology and ambulatory hospital settings. However, these devices have yet to be validated in extreme operational environments. The objective of this study was to correlate the gravitational forces (G forces) detected by wearable accelerometers with the G forces detected by high performance aircraft. We compared the in-flight G forces detected by two commercially available portable accelerometers to the F/A-18 Carrier Aircraft Inertial Navigation System (CAINS-2) during 20 flights performed by the Navy's Flight Demonstration Squadron (Blue Angels). Postflight questionnaires were also used to assess the perception of distractibility during flight. Of the 20 flights analyzed, 10 complete in-flight comparisons were made, accounting for 25,700 s of correlation between the CAINS-2 and the two tested accelerometers. Both accelerometers correlated strongly with the F/A-18 Gz axis, averaging r = 0.92 and r = 0.93, respectively, over 10 flights. Comparing the two accelerometers' average vector magnitudes with each other yielded an average correlation of r = 0.93. Both accelerometers were found to be minimally distracting. These results suggest that the use of wearable accelerometers is a valid means of detecting G forces during high performance aircraft flight. Future studies using this surrogate method of detecting accelerative forces, combined with physiological information, may yield valuable in-flight normative data that heretofore has been technically difficult to obtain, and hence hold the promise of opening the door to a new golden age of aeromedical research.
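The agreement reported between the wearable units and the F/A-18 inertial system is a Pearson correlation over paired Gz samples. A minimal sketch of that computation, using made-up sample values rather than actual flight data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired Gz readings: reference INS vs. wearable unit
ins      = [0.0, 1.0, 2.0, 3.0]
wearable = [0.1, 0.9, 2.2, 2.8]
r = pearson_r(ins, wearable)
```

In practice the study's r values would be computed per flight over the full time-synchronized series and then averaged.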

  18. An automatic versatile system integrating solid-phase extraction with ultra-high performance liquid chromatography-tandem mass spectrometry using a dual-dilution strategy for direct analysis of auxins in plant extracts.

    Science.gov (United States)

    Zhong, Qisheng; Qiu, Xiongxiong; Lin, Caiyong; Shen, Lingling; Huo, Yin; Zhan, Song; Yao, Jinting; Huang, Taohong; Kawano, Shin-ichi; Hashi, Yuki; Xiao, Langtao; Zhou, Ting

    2014-09-12

An automatic versatile system which integrated solid phase extraction (SPE) with ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) was developed. Diverse commercial SPE columns can be used under an ambient pressure in this online system realized by a dual-dilution strategy. The first dilution enabled the direct injection of complex samples with minimal pretreatment, and the second dilution realized direct introduction of large volume of strong eluent into the UHPLC column without causing peak broadening or distortion. In addition, a post-column compensation mode was also designed for the matrix-effects evaluation. The features of the online system were systematically investigated, including the dilution effect, the capture of desorption solution, the column-head stacking effect and the system recovery. Compared with the offline UHPLC system, this online system showed significant advantages such as larger injection volume, higher sensitivity, shorter analysis time and better repeatability. The feasibility of the system was demonstrated by the direct analysis of three auxins from different plant tissues, including leaves of Dracaena sanderiana, buds and petals of Bauhinia. Under the optimized conditions, the whole analysis procedure took only 7 min. All the correlation coefficients were greater than 0.9987, the limits of detection and the limits of quantitation were in the range of 0.560-0.800 ng/g and 1.80-2.60 ng/g, respectively. The recoveries of the real samples ranged from 61.0 to 117%. Finally, the post-column compensation mode was applied and no matrix-effects were observed under the analysis conditions. The automatic versatile system was rapid, sensitive and reliable. We expect this system could be extended to other target analytes in complex samples utilizing diverse SPE columns. Copyright © 2014 Elsevier B.V. All rights reserved.
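The abstract reports limits of detection and quantitation from calibration data. A common way to derive such figures (the record does not state which method the authors used) is the ICH convention LOD = 3.3 σ/S and LOQ = 10 σ/S, where S is the calibration slope and σ the residual standard deviation. A sketch with invented calibration points, not the paper's data:

```python
from math import sqrt

def lod_loq(conc, signal):
    """LOD/LOQ via the ICH formulas 3.3*sigma/S and 10*sigma/S,
    from an ordinary least-squares linear calibration."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
             / sum((x - mx) ** 2 for x in conc))
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(conc, signal))
    sigma = sqrt(ss_res / (n - 2))  # residual standard deviation
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration: concentration (ng/g) vs. peak area
lod, loq = lod_loq([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```

Note that LOQ/LOD is fixed at 10/3.3 under this convention, roughly the 1.80/0.560 to 2.60/0.800 ratios the abstract reports.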

  19. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

Full Text Available This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is supposed to take advantage of both biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT, and image retrieving.
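For contrast with the hybrid membrane/quantum model, the SAT problem it targets can be stated as a classical brute-force decision over clauses in DIMACS-style integer form. This is purely illustrative of the problem, not of the paper's approach, which is not enumeration:

```python
from itertools import product

def sat(clauses, n_vars):
    """Return a satisfying assignment (tuple of bools) or None.
    Clauses are lists of non-zero ints: k means variable k, -k its negation."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or not x2) and (x2 or x3) is satisfiable;
# (x1 or x2) and (not x1) and (not x2) is not.
```

The exhaustive search costs O(2^n), which is exactly the exponential blow-up that massively parallel models like P systems aim to sidestep.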

  20. An Associate Degree in High Performance Manufacturing.

    Science.gov (United States)

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  1. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    Energy Technology Data Exchange (ETDEWEB)

    2004-11-01

    The Energy Design Guidelines for High Performance Schools--Tropical Island Climates provides school boards, administrators, and design staff with guidance to help them make informed decisions about energy and environmental issues important to school systems and communities. These design guidelines outline high performance principles for the new or retrofit design of your K-12 school in tropical island climates. By incorporating energy improvements into their construction or renovation plans, schools can significantly reduce energy consumption and costs.

  2. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  3. Developing Flexible, High Performance Polymers with Self-Healing Capabilities

    Science.gov (United States)

    Jolley, Scott T.; Williams, Martha K.; Gibson, Tracy L.; Caraccio, Anne J.

    2011-01-01

Flexible, high performance polymers such as polyimides are often employed in aerospace applications. They typically find uses in areas where improved physical characteristics such as fire resistance, long term thermal stability, and solvent resistance are required. It is anticipated that such polymers could find uses in future long duration exploration missions as well. Their use would be even more advantageous if self-healing capability or mechanisms could be incorporated into these polymers. Such innovative approaches are currently being studied at the NASA Kennedy Space Center for use in high performance wiring systems or inflatable and habitation structures. Self-healing or self-sealing capability would significantly reduce maintenance requirements, and increase the safety and reliability performance of the systems into which these polymers would be incorporated. Many unique challenges need to be overcome in order to incorporate a self-healing mechanism into flexible, high performance polymers. Significant research into the incorporation of a self-healing mechanism into structural composites has been carried out over the past decade by a number of groups, notable among them being the University of Illinois [1]. Various mechanisms for the introduction of self-healing have been investigated. Examples of these are: 1) Microcapsule-based healant delivery. 2) Vascular network delivery. 3) Damage induced triggering of latent substrate properties. Successful self-healing has been demonstrated in structural epoxy systems with almost complete reestablishment of composite strength being achieved through the use of microencapsulation technology. However, the incorporation of a self-healing mechanism into a system in which the material is flexible, or a thin film, is much more challenging.
In the case of using microencapsulation, healant core content must be small enough to reside in films less than 0.1 millimeters thick, and must overcome significant capillary and surface

  4. High-performance simulations for atmospheric pressure plasma reactor

    Science.gov (United States)

    Chugunov, Svyatoslav

Plasma-assisted processing and deposition of materials is an important component of modern industrial applications, with plasma reactors sharing 30% to 40% of manufacturing steps in microelectronics production. Development of new flexible electronics increases demands for efficient high-throughput deposition methods and roll-to-roll processing of materials. The current work represents an attempt at practical design and numerical modeling of a plasma enhanced chemical vapor deposition system. The system utilizes plasma at standard pressure and temperature to activate a chemical precursor for protective coatings. A specially designed linear plasma head, which consists of two parallel plates with electrodes placed in a parallel arrangement, is used to resolve clogging issues of currently available commercial plasma heads, as well as to increase the flow-rate of the processed chemicals and to enhance the uniformity of the deposition. A test system is built and discussed in this work. In order to improve operating conditions of the setup and quality of the deposited material, we perform numerical modeling of the plasma system. The theoretical and numerical models presented in this work comprehensively describe plasma generation, recombination, and advection in a channel of arbitrary geometry. Number density of plasma species, their energy content, electric field, and rate parameters are accurately calculated and analyzed in this work. Some interesting engineering outcomes are discussed in connection with the proposed setup. The numerical model is implemented with a high-performance parallel technique and evaluated on a cluster for parallel calculations. The typical performance increase, calculation speed-up, parallel fraction of the code and overall efficiency of the parallel implementation are discussed in detail.

  5. High Performance Graphene Oxide Based Rubber Composites

    Science.gov (United States)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-08-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications.

  6. High Performance Graphene Oxide Based Rubber Composites

    Science.gov (United States)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  7. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  8. A High Performance Backend for Array-Oriented Programming on Next-Generation Processing Units

    DEFF Research Database (Denmark)

    Lund, Simon Andreas Frimann

The financial crisis, which started in 2008, spawned the HIPERFIT research center as a preventive measure against future financial crises. The goal of prevention is to be met by improving mathematical models for finance, the verifiable description of them in domain-specific languages... and the efficient execution of them on high performance systems. This work investigates the requirements for, and the implementation of, a high performance backend supporting these goals. This involves an outline of the hardware available today, in the near future and how to program it for high performance... A number of benchmark applications, implemented in these languages, demonstrate the high-level declarative form of the programming model. Performance studies show that the high-level declarative programming model can be used to not only match but also exceed the performance of hand-coded implementations in low...

  9. High Performance Regenerated Cellulose Membranes from Trimethylsilyl Cellulose

    KAUST Repository

    Ali, Ola

    2013-05-01

Regenerated cellulose (RC) membranes are extensively used in medical and pharmaceutical separation processes due to their biocompatibility, low fouling tendency and solvent resistant properties. They typically possess ultrafiltration and microfiltration separation characteristics, but recently, there have been attempts to widen their pool of applications in nanofiltration processes. In this work, a novel method for preparing high performance composite RC membranes was developed. These membranes reveal molecular weight cut-offs (MWCO) of less than 250 daltons, which possibly puts them ahead of all commercial RC membranes and in competition with high performance nanofiltration membranes. The membranes were prepared by acidic hydrolysis of dip-coated trimethylsilyl cellulose (TMSC) films. TMSC, with a degree of silylation (DS) of 2.8, was prepared from microcrystalline cellulose by reaction with hexamethyldisilazane under the homogeneous conditions of the LiCl/DMAC solvent system. Effects of parameters such as coating solution concentration and drying rates were investigated. It was concluded that higher TMSC concentrations as well as higher solvent evaporation rates favor better MWCOs, mainly due to an increase in the selective layer thickness. Successful cross-linking of the prepared membranes with glyoxal solutions, in the presence of boric acid as a catalyst, resulted in MWCOs of less than 250 daltons. The suitability of this crosslinking reaction for large-scale production has already been proven in the manufacturing of durable-press fabrics. For us, the inexpensive raw materials as well as the low reaction times and temperatures were of interest. Moreover, the non-toxic nature of glyoxal is a key advantage in medical and pharmaceutical applications. The membranes prepared in this work are strong candidates for separation of small organic solutes from organic solvent streams in pharmaceutical industries.
Their hydrophilicity, compared to typical nanofiltration membranes, offer

  10. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  11. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  12. Radiation Hard High Performance Optoelectronic Devices Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High-performance, radiation-hard, widely-tunable integrated laser/modulator chip and large-area avalanche photodetectors (APDs) are key components of optical...

  13. High Performance Liquid Chromatography Method for the ...

    African Journals Online (AJOL)

    High Performance Liquid Chromatography Method for the Determination of Anethole in Rat Plasma. ... Journal Home > Vol 13, No 5 (2014) > ... Results: GC determination showed that anethole in the essential oil of star anise exhibited a ...

  14. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  15. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
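The near-constant launch time claimed for STORM rests on tree-structured hardware collectives: the set of nodes holding the binary can double each round, giving roughly ceil(log2(P)) communication steps. A toy model of that scaling, not STORM's actual protocol:

```python
def broadcast_rounds(p_nodes):
    """Rounds needed for an idealized binomial-tree broadcast: every node
    that already holds the binary forwards it to one new node per round."""
    have, rounds = 1, 0
    while have < p_nodes:
        have *= 2
        rounds += 1
    return rounds

# 32 nodes need 5 rounds; 1024 nodes need only 10 -- launch time grows
# logarithmically in machine size, which reads as near-constant in practice.
```

Under this model, scaling from the paper's 64-processor cluster to thousands of nodes adds only a handful of rounds, consistent with the claim that the results extend to much larger machines.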

  16. High performance hand-held gas chromatograph

    Energy Technology Data Exchange (ETDEWEB)

    Yu, C.M.

    1998-04-28

The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five pounds, with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power, with a response time on the order of one to two minutes. The HHGC has an average effective theoretical plate count of about 40k. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated completely on silicon wafers by means of MEMS technology. This separation column has a circular cross section with a diameter of 100 μm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabView software.
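The thermal conductivity detector reads out through a Wheatstone bridge: the bridge stays balanced until the sensing arm's resistance shifts with the thermal conductivity of the gas stream. A sketch of the bridge arithmetic, with illustrative component values rather than the instrument's actual ones:

```python
def bridge_output(v_in, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge formed by two voltage
    dividers, (r1, r2) and (r3, r4). Zero when r1/r2 == r3/r4 (balanced)."""
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Balanced bridge: four 100-ohm arms, 5 V excitation -> 0 V output.
balanced = bridge_output(5.0, 100, 100, 100, 100)
# A 10% resistance rise in the sensing arm (r4) unbalances the bridge,
# producing a measurable output proportional to the change.
sensing = bridge_output(5.0, 100, 100, 100, 110)
```

The small differential voltage is what gets amplified and fed to the PC for display.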

  17. Employee Perception on Commitment Oriented Work Systems

    NARCIS (Netherlands)

    J.P.P.E.F. Boselie (Paul); M. Hesselink; J. Paauwe (Jaap); A. van der Wiele (Ton)

    2001-01-01

    textabstractHuman resource management (HRM) does matter! Prior empirical research, summarized and classified in the work of Delery and Doty (1996), Guest (1997) and Boselie et al. (2000), suggests significant impact of HRM on the competitive advantage of organizations. The mainstream research on

  18. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  19. Hospital Quality Systems - working mechanisms unraveled.

    NARCIS (Netherlands)

    Schoten, S. van; Groenewegen, P.; Wagner, C.

    2015-01-01

    Context: Quality systems were implemented in healthcare institutions to assure and improve the quality of care. Despite the fact that all Dutch hospitals have implemented a quality system, incidents persist to surface. How could this be explained? The current research was set out to gain thorough

  20. Systems security management in forestry work

    Directory of Open Access Journals (Sweden)

    Carbone F

    2011-11-01

Full Text Available Safety and health at work is an important ethical good. National governments and other international and national institutions have adopted measures against this social ill, in the forestry sector as well. In Italy, over the period 2003-2005 the domestic forest sector registered just under 1 fatal accident per million cubic meters; nevertheless, more consistent data would be needed to compare this figure at the international level. After explaining the wide range of work carried out in forests, the contribution analyzes the discipline introduced by Legislative Decree no. 81/2008. This has introduced new professional roles, new procedures, new tools and new types of cost in the budgets of forestry activities. In conclusion, the author suggests that the inclusion of these types of expenditure in forest management accounting is significant from many points of view. Safety and health costs must be included systematically, not merely occasionally on the voluntary initiative of the forestry consultant.

  1. Integrating advanced facades into high performance buildings

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the needed light through daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes that employ the facade as an active air control element; reduced operating costs from minimizing lighting, cooling and heating energy use through optimized daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  2. Development of High Performance Thin Layer Chromatography for ...

    African Journals Online (AJOL)

    and validation, a high performance thin layer chromatography (HPTLC) system with WinCATS software was used. Freshly prepared ... recommended in routine analysis of pharmaceutical products containing lamivudine and tenofovir disoproxil fumarate. Introduction ... A strong system of quality control and quality assurance ...

  3. Determination of sulfonamides in swine muscle after salting-out assisted liquid extraction with acetonitrile coupled with back-extraction by a water/acetonitrile/dichloromethane ternary component system prior to high-performance liquid chromatography.

    Science.gov (United States)

    Tsai, Wen-Hsien; Huang, Tzou-Chi; Chen, Ho-Hsien; Wu, Yuh-Wern; Huang, Joh-Jong; Chuang, Hung-Yi

    2010-01-15

    A salting-out assisted liquid extraction coupled with back-extraction by a water/acetonitrile/dichloromethane ternary component system combined with high-performance liquid chromatography with diode-array detection (HPLC-DAD) was developed for the extraction and determination of sulfonamides in solid tissue samples. After the homogenization of the swine muscle with acetonitrile and salt-promoted partitioning, an aliquot of 1 mL of the acetonitrile extract containing a small amount of dichloromethane (250-400 microL) was alkalinized with diethylamine. The clear organic extract obtained by centrifugation was used as a donor phase and then a small amount of water (40-55 microL) could be used as an acceptor phase to back-extract the analytes in the water/acetonitrile/dichloromethane ternary component system. In the back-extraction procedure, after mixing and centrifuging, the sedimented phase would be water and could be withdrawn easily into a microsyringe and directly injected into the HPLC system. Under the optimal conditions, recoveries were determined for swine muscle fortified at 10 ng/g and quantification was achieved by matrix-matched calibration. The calibration curves of five sulfonamides showed linearity with the coefficient of estimation above 0.998. Relative recoveries for the analytes were all from 96.5 to 109.2% with relative standard deviation of 2.7-4.0%. Preconcentration factors ranged from 16.8 to 30.6 for 1 mL of the acetonitrile extract. Limits of detection ranged from 0.2 to 1.0 ng/g. 2009 Elsevier B.V. All rights reserved.
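
    The enrichment arithmetic behind the preconcentration factors quoted in this record can be sketched as follows. It is a minimal illustration of the standard definition F = C_acceptor/C_donor = recovery × V_donor/V_acceptor; the function name and the specific volumes are assumptions chosen for illustration, not data from the paper beyond the ranges it quotes.

    ```python
    def enrichment_factor(v_donor_ul, v_acceptor_ul, recovery):
        """Preconcentration factor of a back-extraction step.

        F = C_acceptor / C_donor = recovery * V_donor / V_acceptor:
        analyte recovered from a large donor-phase volume ends up in
        a much smaller acceptor-phase volume. Volumes in microlitres;
        the numbers below are illustrative, not the paper's data.
        """
        return recovery * v_donor_ul / v_acceptor_ul

    # 1 mL acetonitrile donor back-extracted into ~50 uL of water:
    print(round(enrichment_factor(1000.0, 50.0, recovery=0.97), 1))  # 19.4
    ```

    With a 1 mL donor and a 40-55 µL acceptor, factors of roughly 17-25 follow directly, consistent with the 16.8-30.6 range reported above.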

  4. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present the results they achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) in the year 2002. The reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It thereby becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  5. THE WORK SYSTEM AND ITS IMPLICATIONS FOR THE PREVENTION OF PSYCHOSOCIAL RISKS AT WORK

    Directory of Open Access Journals (Sweden)

    JOSÉ M. PEIRÓ

    2004-07-01

    Full Text Available This paper offers an analysis of the implications of the work system for the analysis and prevention of risks at work and for health promotion in companies. The work system is perhaps the central aspect of productive organizations and the one that most directly determines the characteristics of work activity and its potential psychosocial risks. It is an organization-related, and thus psychosocial, component, a product of design, and it can be improved and adapted to workers' basic needs. Nevertheless, on many occasions it is taken for granted in organizations, which instead requires the adaptation of workers. An adequate understanding of the work system and its different components is basic for psychosocial intervention aimed at the prevention of work risks. Special attention is given to the specific characteristics of the work system in service organizations, because in this context emerging psychosocial risks are identified in which the interventions of psychologists can be appropriate and accurate.

  6. Semiotic systems of works of visual art: Signs, connotations, signals

    OpenAIRE

    Somov, Georgij Y.-U.

    2005-01-01

    The analysis of works of visual art illustrates typical groups of elements and interrelations, which form semiotic systems of these works. Specific systems of connotations and their relations with semantic structures, paradigmatics, and typical signal structures are described. Works of Andrey Rublev, Vasiliy I. Surikov and Kuzma Petrov-Vodkin are analyzed as examples.

  7. Public Works Department Maintenance Management Information System

    Science.gov (United States)

    1976-06-01

    Construction (MILCON) d. Operation and Maintenance (O&MN) e. Procurement (APN, SCN, OPN, WPN) These categories can be viewed in a matrix with the ten... hierarchy of information systems. Ansoff describes management decision-making in three categories as strategic, administrative and operating decisions [Re... involve the firm's goals, objectives, diversification, product-mix, markets, and growth. Ansoff also notes these other differences: (1) operating

  8. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    Science.gov (United States)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software and tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as demonstrated by our prior works, while the potential of such advanced

  9. The additional work of breathing imposed by Mapleson A systems.

    Science.gov (United States)

    Ooi, R; Pattison, J; Soni, N

    1993-07-01

    The additional work attributable to breathing through five Mapleson A anaesthetic breathing systems (Magill, Lack, Parallel Lack, Humphrey ADE and Enclosed Magill) was studied using a lung model. With all five systems, the additional work was found to be a function of fresh gas flow, respiratory flow and system geometry. Within the range of fresh gas flows and respiratory flows studied, the additional work ranged between 80 mJ.l-1 and 182 mJ.l-1. Expiratory work was always greater than the inspiratory workload. Increasing fresh gas inflow into the system increases expiratory work, in both its resistive and elastic components. The Magill system imposed the least work expenditure. The values for the additional work obtained with the lung model were of the same order of magnitude as measurements taken in volunteers.

  10. High Performance Building Mockup in FLEXLAB

    Energy Technology Data Exchange (ETDEWEB)

    McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Selkowitz, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-08-30

    Genentech has ambitious energy and indoor environmental quality performance goals for Building 35 (B35) being constructed by Webcor at the South San Francisco campus. Genentech and Webcor contracted with the Lawrence Berkeley National Laboratory (LBNL) to test building systems including lighting, lighting controls, shade fabric, and automated shading controls in LBNL’s new FLEXLAB facility. The goal of the testing is to ensure that the systems installed in the new office building will function in a way that reduces energy consumption and provides a comfortable work environment for employees.

  11. On-line two-dimensional countercurrent chromatography×high performance liquid chromatography system with a novel fragmentary dilution and turbulent mixing interface for preparation of coumarins from Cnidium monnieri.

    Science.gov (United States)

    Wang, Dong; Chen, Long-Jiang; Liu, Jing-Lan; Wang, Xin-Yuan; Wu, Yun-Long; Fang, Mei-Juan; Wu, Zhen; Qiu, Ying-Kun

    2015-08-07

    This study describes a novel on-line two-dimensional countercurrent chromatography×high performance liquid chromatography (2D CCC×HPLC) system for one-step preparative isolation of coumarins from the fruits of Cnidium monnieri. An optimal biphasic solvent system composed of n-heptane/acetone/water (31:50:19, v/v) with suitable Kd values and a higher retention of the stationary phase was chosen to separate target compounds. In order to address the solvent incompatibility problem between CCC and RP-HPLC, a novel fragmentary dilution and turbulent mixing (FD-TM) interface was successfully developed. In detail, the eluent from the first dimensional CCC column was divided into fractions to form 'sample-dilution' stripes in the two switching sample loops, by the dilution water from the makeup pump. Following this, a long, thin tube was applied to mix the CCC eluent with water by in-tube turbulence, to reduce the solvent effect. Each CCC fraction was alternately trapped on the two holding columns for further preparative HPLC separation. This rationally designed FD-TM strategy effectively reduced post-column pressure and allowed a higher water dilution ratio at the post end of CCC, leading to improved sample recovery and a robust 2D CCC×HPLC isolation system. As a result, in a single 2D separation run (6.5 h), eight target compounds (1-8) were isolated from 0.5 g crude extract of C. monnieri, in overall yields of 1.3, 2.0, 0.5, 0.5, 0.8, 1.5, 8.2, and 15.0%, with HPLC purity of 90.1, 91.1, 94.7, 99.1, 99.2, 98.2, 97.9, and 91.9%, respectively. We anticipate that this improved 2D CCC×HPLC system, based on the novel FD-TM interface, has broad application for simultaneous isolation and purification of multiple components from other complex plant-derived natural products. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Realizing High-Performance Buildings; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-03-02

    High-performance buildings (HPBs) are exceptional examples of both design and practice. Their energy footprints are small, and these are buildings that people want to work in because of their intelligent structure, operations, and coincident comfort. However, the operation of most buildings, even ones that are properly constructed and commissioned at the start, can deviate significantly from the original design intent over time, particularly due to control system overrides and growing plug and data center loads. With early planning for systems such as submetering and occupant engagement tools, operators can identify and remedy the problems. This guide is a primer for owners and owners’ representatives who are pursuing HPBs. It describes processes that have been successful in the planning, procurement, and operation of HPBs with exceptional energy efficiency. Much of the guidance offered results from a series of semi-structured conference calls with a technical advisory group of 15 owners and operators of prominent HPBs in the United States. The guide provides a prescription for planning, achieving, and maintaining an HPB. Although the guide focuses on the operations stage of buildings, many of the operations practices are specified during the planning stage.

  13. Working with Systems and Thinking Systemically--Disentangling the Crossed Wires

    Science.gov (United States)

    Fox, Mark

    2009-01-01

    This article explores two separate traditions that educational psychologists (EPs) in the UK have for working with systems. One of these is "systems work" with organisations such as schools. The other is "systemic thinking" for working with families. Over the years these two traditions, systems work and systemic thinking, have…

  14. Synchronized separation, concentration and determination of trace sulfadiazine and sulfamethazine in food and environment by using polyoxyethylene lauryl ether-salt aqueous two-phase system coupled to high-performance liquid chromatography.

    Science.gov (United States)

    Lu, Yang; Cong, Biao; Tan, Zhenjiang; Yan, Yongsheng

    2016-11-01

    The polyoxyethylene lauryl ether (POELE10)-Na2C4H4O6 aqueous two-phase extraction system (ATPES) is a novel and green pretreatment technique for trace samples. ATPES coupled with high-performance liquid chromatography (HPLC) was used to analyze sulfadiazine (SDZ) and sulfamethazine (SMT) synchronously in animal by-products (i.e., egg and milk) and an environmental water sample. It was found that the extraction efficiency (E%) and the enrichment factor (F) of SDZ and SMT were influenced by the type of salt, the concentration of salt, the concentration of POELE10 and the temperature. An orthogonal experimental design (OED) was adopted in the multi-factor experiment to determine the optimized conditions. The final optimal conditions were as follows: a POELE10 concentration of 0.027 g/mL, a Na2C4H4O6 concentration of 0.180 g/mL and a temperature of 35°C. This POELE10-Na2C4H4O6 ATPS was applied to separate and enrich SDZ and SMT in real samples (i.e., water, egg and milk) under the optimal conditions, and it was found that the recovery of SDZ and SMT was 96.20-99.52% with RSD of 0.35-3.41%. The limit of detection (LOD) of this method for SDZ and SMT in spiked samples was 2.52-3.64 pg/mL, and the limit of quantitation (LOQ) was 8.41-12.15 pg/mL. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  16. An Introduction to High Performance Fortran

    Directory of Open Access Journals (Sweden)

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  17. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
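
    The grouping idea described in this record can be sketched in a few lines. This is a hypothetical illustration, not the disclosed implementation: it assumes a snapshot mapping each thread to the address of its current calling instruction (data a debugger would gather), and surfaces the smallest groups first, where stuck or defective threads tend to stand out.

    ```python
    from collections import defaultdict

    def group_threads_by_call_site(thread_call_addresses):
        """Group thread IDs by the address of their calling instruction.

        `thread_call_addresses` maps thread id -> instruction address.
        Returns (address, [thread ids]) pairs, smallest groups first:
        in a large SPMD job, a handful of threads parked at an unusual
        address is a strong hint of a defect.
        """
        groups = defaultdict(list)
        for tid, addr in thread_call_addresses.items():
            groups[addr].append(tid)
        return sorted(groups.items(), key=lambda kv: len(kv[1]))

    # Toy snapshot: threads 0-2 share a call site, thread 3 is elsewhere.
    snapshot = {0: 0x4005d0, 1: 0x4005d0, 2: 0x4005d0, 3: 0x400a10}
    for addr, tids in group_threads_by_call_site(snapshot):
        print(hex(addr), tids)
    ```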

  18. Optimization and validation of high performance liquid ...

    African Journals Online (AJOL)

    Optimization and validation of high performance liquid chromatography-ultra violet method for quantitation of metoprolol in rabbit plasma: application to ... Methods: Mobile phase of methanol and 50 mM ammonium dihydrogen phosphate solution (50:50) at pH 3.05 was used for separation of metoprolol on BDS hypersil ...

  19. Project materials [Commercial High Performance Buildings Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  20. Comparing Dutch and British high performing managers

    NARCIS (Netherlands)

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  1. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid dosage form. Methods: HPLC determination was carried out on an Agilent XDB C-18 column (4.6 x 150mm, 5 μ particle size) with a gradient ...

  2. Technology Leadership in Malaysia's High Performance School

    Science.gov (United States)

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to high performance school (HPS) headmasters as well. HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  3. High Performance Computing and Communications Panel Report.

    Science.gov (United States)

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  4. High Performance Liquid Chromatographic Determination of ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, precise and rapid high-performance liquid chromatographic method coupled with photodiode array detection (DAD) for the simultaneous determination of rutin, quercetin, luteolin, genistein, galangin and curcumin in propolis. Methods: Ultrasound-assisted extraction was applied to ...

  5. Rapid high performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    Rapid high performance liquid chromatographic determination of chlorpropamide in human plasma. MTB Odunola, IS Enemali, M Garba, OO Obodozie. Abstract. Samples were extracted with dichloromethane and the organic layer evaporated to dryness. The residue was dissolved in methanol, and a 25 µl aliquot injected ...

  6. High Performance Liquid Chromatography Method for the ...

    African Journals Online (AJOL)

    chromatography (HPLC) technique with UV-VIS detection method was developed for the determination of the compound in rat ... Keywords: Anethole, High performance liquid chromatography, Star anise, Essential oil, Rat plasma,. Illicium verum Hook. .... solution of anethole. Plasma proteins were precipitated by adding 0.3.

  7. High-performance computing reveals missing genes

    OpenAIRE

    Whyte, Barry James

    2010-01-01

    Scientists at the Virginia Bioinformatics Institute and the Department of Computer Science at Virginia Tech have used high-performance computing to locate small genes that have been missed by scientists in their quest to define the microbial DNA sequences of life.

  8. Employment of High-Performance Thin-Layer Chromatography for the Quantification of Oleuropein in Olive Leaves and the Selection of a Suitable Solvent System for Its Isolation with Centrifugal Partition Chromatography.

    Science.gov (United States)

    Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros

    2015-11-01

    A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed in the purification of oleuropein with fast centrifugal partition chromatography, replacing high-performance liquid chromatography in the stage of determining the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficients, Kc = Cs/Cm. The total concentrations of the target compound in the stationary phase (Cs) and in the mobile phase (Cm) were calculated from the areas measured in the high-performance thin-layer chromatogram. The estimated Kc was also used for the calculation of the retention volume, VR, with a chromatographic retention equation. The obtained data were successfully applied to the purification of oleuropein, and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography could be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used for the estimation of its content in each extract with a simple, sensitive and accurate high-performance thin-layer chromatography method. The best-fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial, and quantification was achieved by UV detection at λ 240 nm. The method was validated, giving rise to an efficient and high-throughput procedure, with the relative standard deviation (%) of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection. The amount of oleuropein for
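
    The densitometric calculation this record describes can be sketched as follows, assuming the classic countercurrent-chromatography retention equation VR = Vm + Kc·Vs (with Vm and Vs the mobile- and stationary-phase volumes in the column). The function names and all numeric values are illustrative assumptions, not the paper's data.

    ```python
    def distribution_coefficient(area_stationary, area_mobile):
        # Kc = Cs / Cm, with concentrations taken as proportional to the
        # densitometric peak areas measured on the HPTLC plate.
        return area_stationary / area_mobile

    def retention_volume(v_mobile, v_stationary, kc):
        # Classic CCC/CPC retention equation: VR = Vm + Kc * Vs.
        return v_mobile + kc * v_stationary

    # Illustrative plate areas and column phase volumes (mL):
    kc = distribution_coefficient(area_stationary=1500.0, area_mobile=2000.0)
    vr = retention_volume(v_mobile=60.0, v_stationary=140.0, kc=kc)
    print(kc, vr)  # 0.75 165.0
    ```

    The same two formulas let a cheap HPTLC scan predict where a compound elutes in the preparative run, which is exactly the HPLC-replacing role described above.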

  9. Systems of pillarless working of adjacent, sloped and inclined seams

    Energy Technology Data Exchange (ETDEWEB)

    Batmanov, Yu.K.; Bakhtin, A.F.; Bulavka, E.I.

    1979-01-01

    An analysis is made (advantages and disadvantages) of existing and recommended (pillarless) systems of working adjacent, sloped, and inclined seams. The economic benefits, area and extent of those systems are indicated. 8 references, 4 figures.

  10. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  11. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  12. Intelligent Facades for High Performance Green Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net-zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building-integrated combined-heat-and-power concentrating photovoltaic system with high-temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high-quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possibly further augmentation of electrical generation through organic Rankine cycles.
With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  13. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
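
    A minimal sketch of such a resource estimator, assuming a simple linear relationship between input size and memory usage fitted from historical runs; the data, function names, and safety margin below are invented for illustration and are not the authors' implementation.

    ```python
    import statistics

    def fit_linear(xs, ys):
        # Least-squares slope/intercept from historical
        # (input size, observed usage) pairs.
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
            (x - mx) ** 2 for x in xs)
        return slope, my - slope * mx

    def estimate(size, slope, intercept, margin=1.25):
        # Pad the prediction so an underestimate does not get the job
        # killed in the queue; the margin is a tunable safety factor.
        return margin * (slope * size + intercept)

    # Invented history: input size (MB) vs peak memory (GB).
    sizes_mb = [100, 200, 400, 800]
    mem_gb = [1.1, 2.0, 3.9, 7.8]
    slope, intercept = fit_linear(sizes_mb, mem_gb)
    print(round(estimate(1600, slope, intercept), 1))  # memory request in GB
    ```

    In practice the padded estimate would feed the scheduler's memory and walltime requests; tightening the margin trades queue wait time against the risk of incomplete executions, the exact tension the abstract describes.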

  14. Resource Estimation in High Performance Medical Image Computing

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D.M.

    2015-01-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of ‘jobs’ requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466

  15. Toward high performance in Powder Metallurgy

    Directory of Open Access Journals (Sweden)

    Torralba, José M.

    2014-06-01

Full Text Available Powder Metallurgy (PM) is a technology well known for mass production of parts at low cost, but usually with worse mechanical properties than the same parts obtained by alternative routes. Using this technology, however, high performance materials can be obtained, depending on the processing route and the type and amount of porosity. In this paper, a brief review of the capabilities of powder technology is made with the objective of attaining the highest level of mechanical and physical properties. For this purpose, different strategies over the processing can be chosen: acting on the density/porosity level and the properties of the pores, acting on strengthening mechanisms other than the density of the material (the alloying system, the microstructure, the grain size, ...), improving the sintering activity by different routes, and using techniques that avoid grain growth during sintering.

  16. Work exchange between quantum systems: the spin-oscillator model.

    Science.gov (United States)

    Schröder, Heiko; Mahler, Günter

    2010-02-01

    With the development of quantum thermodynamics it has been shown that relaxation to thermal equilibrium and with it the concept of heat flux may emerge directly from quantum mechanics. This happens for a large class of quantum systems if embedded into another quantum environment. In this paper, we discuss the complementary question of the emergence of work flux from quantum mechanics. We introduce and discuss two different methods to assess the work source quality of a system, one based on the generalized factorization approximation, the other based on generalized definitions of work and heat. By means of those methods, we show that small quantum systems can, indeed, act as work reservoirs. We illustrate this behavior for a simple system consisting of a spin coupled to an oscillator and investigate the effects of two different interactions on the work source quality. One case will be shown to allow for a work source functionality of arbitrarily high quality.
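The energy bookkeeping behind the abstract's "generalized definitions of work and heat" can be sketched numerically: for a density matrix ρ and Hamiltonian H, split dE = Tr(ρ dH) + Tr(dρ H) into a work term and a heat term. The toy model below is a driven two-level system of my own choosing (not the paper's spin-oscillator model); it checks that a closed, unitarily evolving system exchanges essentially pure work (heat ≈ 0).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # Driven two-level Hamiltonian (hbar = 1); the drive profile is an
    # illustrative assumption, not taken from the paper.
    return 0.5 * (1.0 + 0.5 * np.sin(t)) * sz + 0.3 * sx

def step(rho, h, dt):
    # Exact unitary step for Hermitian h: U = exp(-i h dt) via eigh.
    w, v = np.linalg.eigh(h)
    U = v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T
    return U @ rho @ U.conj().T

dt, T = 1e-3, 5.0
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the sz-up state
rho = rho0.copy()
work = heat = 0.0
for k in range(int(T / dt)):
    t = k * dt
    h0, h1 = H(t), H(t + dt)
    rho_next = step(rho, 0.5 * (h0 + h1), dt)
    work += np.trace(rho @ (h1 - h0)).real        # dW = Tr(rho dH)
    heat += np.trace((rho_next - rho) @ h0).real  # dQ = Tr(drho H)
    rho = rho_next

dE = (np.trace(rho @ H(T)) - np.trace(rho0 @ H(0))).real
print(work, heat, dE)  # first law: dE = W + Q, with Q ~ 0 for closed dynamics
```

The vanishing heat term follows from cyclicity of the trace, Tr([H, ρ]H) = 0; coupling the spin to an oscillator, as in the paper, is what lets one system act as a work source for the other.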

  17. Development of an Information System for Diploma Works Management

    Science.gov (United States)

    Georgieva-Trifonova, Tsvetanka

    2011-01-01

    In this paper, a client/server information system for the management of data and its extraction from a database containing information for diploma works of students is proposed. The developed system provides users the possibility of accessing information about different characteristics of the diploma works, according to their specific interests.…

  18. Work Ethics and Productivity in Local Government System in Nigeria ...

    African Journals Online (AJOL)

The main thrust of this paper is motivated by the desire to examine the implications of the negative work attitudes that are prevalent among the employees of the local government system in Nigeria. The paper argues that the Nigerian local government system is engulfed in negative work tendencies characterized by such ...

  19. 77 FR 24494 - Office of Federal High-Performance Green Buildings; Green Building Advisory Committee...

    Science.gov (United States)

    2012-04-24

... certification system review report; High Performance Green Building Demonstration project at Fort Carson, Colorado; updates on other current priority projects of GSA's Office of Federal High-Performance Green... ADMINISTRATION Office of Federal High-Performance Green Buildings; Green Building Advisory Committee...

  20. Control switching in high performance and fault tolerant control

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

The problem of reliability in high performance control and in fault tolerant control is considered in this paper. A feedback controller architecture for high performance and fault tolerance is considered. The architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. By using the nominal controller in the architecture as a simple and robust controller, it is possible to use the YJBK transfer function for optimization of the closed-loop performance. This can be done both in connection with normal operation of the system as well as in connection with faults in the system. The architecture also allows changing the applied sensors and/or actuators when switching between different controllers. This switching gets particularly simple for open-loop stable systems.
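For context, the YJBK parameterization referred to in the abstract has a compact standard form (the textbook version, not taken from this paper): given right and left coprime factorizations of the plant G and a nominal stabilizing controller K_0 satisfying the double Bezout identity,

```latex
% Coprime factorizations and double Bezout identity
G = N M^{-1} = \tilde{M}^{-1}\tilde{N}, \qquad
K_0 = U V^{-1} = \tilde{V}^{-1}\tilde{U}, \qquad
\begin{bmatrix} \tilde{V} & -\tilde{U} \\ -\tilde{N} & \tilde{M} \end{bmatrix}
\begin{bmatrix} M & U \\ N & V \end{bmatrix} = I

% All stabilizing controllers, parameterized by a stable Q
K(Q) = (U + M Q)(V + N Q)^{-1}, \qquad Q \in \mathcal{RH}_\infty
```

Since Q = 0 recovers K_0, switching between controllers amounts to switching the stable parameter Q, which preserves closed-loop stability by construction.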

  1. High-Performance Management Practices and Employee Outcomes in Denmark

    DEFF Research Database (Denmark)

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices are associated with higher wages, changes in wage inequality and workforce composition, using data from a survey directed at Danish private sector firms matched with linked employer-employee data. We also examine whether the relationship between high-involvement work practices and employee outcomes...

  2. High-Performance Management Practices and Employee Outcomes in Denmark

    DEFF Research Database (Denmark)

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    2013-01-01

High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After assessing the correlation between organizational innovation and firm performance, this article investigates whether high-involvement work practices affect workers in terms of wages, wage inequality and workforce composition. The analysis is based on a survey directed at Danish firms matched with linked employer–employee data and also examines whether the relationship between high-involvement work practices and employee outcomes is affected...

  3. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  4. Brain inspired high performance electronics on flexible silicon

    KAUST Repository

    Sevilla, Galo T.

    2014-06-01

Brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state-of-the-art silicon industry components [1], the brain can perform 10^16 operations per second while consuming less power than an electric light bulb. To perform the same amount of computation with today's most advanced computers, the output of an entire power station would be needed. In that sense, to obtain brain-like computation, ultra-fast devices with ultra-low power consumption will have to be integrated in extremely reduced areas, achievable only if the brain's folded structure is mimicked. Therefore, to allow brain-inspired computation, a flexible and transparent platform will be needed to achieve foldable structures and their integration on asymmetric surfaces. In this work, we show a new method to fabricate 3D and planar FET architectures in a flexible and semitransparent silicon fabric without compromising performance, while maintaining the cost/yield advantage offered by silicon-based electronics.

  5. High Performance Building Facade Solutions - PIER Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eleanor; Selkowitz, Stephen

    2009-12-31

Building facades directly influence heating and cooling loads and indirectly influence lighting loads when daylighting is considered, and are therefore a major determinant of annual energy use and peak electric demand. Facades also significantly influence occupant comfort and satisfaction, making the design optimization challenge more complex than for many other building systems. This work focused on addressing significant near-term opportunities to reduce energy use in the California commercial building stock by a) targeting voluntary, design-based opportunities derived from the use of better design guidelines and tools, and b) developing and deploying more efficient glazings, shading systems, daylighting systems, facade systems and integrated controls. This two-year project, supported by the California Energy Commission PIER program and the US Department of Energy, initiated a collaborative effort between the Lawrence Berkeley National Laboratory (LBNL) and major stakeholders in the facades industry to develop, evaluate, and accelerate market deployment of emerging, high-performance, integrated facade solutions. The LBNL Windows Testbed Facility acted as the primary catalyst and mediator on both sides of the building industry supply-user business transaction by a) aiding component suppliers to create and optimize cost-effective, integrated systems that work, and b) demonstrating and verifying to the owner, designer, and specifier community that these integrated systems reliably deliver the required energy performance. An industry consortium was initiated amongst approximately seventy disparate stakeholders who, unlike the HVAC or lighting industry, have no single representative, multi-disciplinary body or organized means of communicating and collaborating. The consortium provided guidance on the project and, more importantly, began to mutually work out and agree on the goals, criteria, and pathways needed to attain the ambitious net-zero-energy goals defined by California and

  6. Failure analysis of high performance ballistic fibers

    OpenAIRE

    Spatola, Jennifer S

    2015-01-01

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mo...

  7. Nanoparticles for high performance concrete (HPC)

    OpenAIRE

    Torgal, Fernando Pacheco; Miraldo, Sérgio; Ding, Yining; J.A. Labrincha

    2013-01-01

    According to the 2011 ERMCO statistics, only 11% of the production of ready-mixed concrete relates to the high performance concrete (HPC) target. This percentage has remained unchanged since at least 2001 and appears a strange choice on the part of the construction industry, as HPC offers several advantages over normal-strength concrete, specifically those of high strength and durability. It allows for concrete structures requiring less steel reinforcement and offers a longer serviceable life...

  8. Robust High Performance Aquaporin based Biomimetic Membranes

    DEFF Research Database (Denmark)

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free membranes. ...% rejection for urea and a water permeability around 10 L/(m2h) with 2 M NaCl as draw solution. Our results demonstrate the feasibility of using aquaporin proteins in biomimetic membranes for technological applications.

  9. Supervising the highly performing general practice registrar.

    Science.gov (United States)

    Morgan, Simon

    2014-02-01

    There is extensive literature on the poorly performing learner. In contrast, there is very little written on supervising the highly performing registrar. Outstanding trainees with high-level knowledge and skills can be a challenge for supervisors to supervise and teach. Narrative review and discussion. As with all learners, a learning-needs analysis is fundamental to successful supervision. The key to effective teaching of the highly performing registrar is to contextualise clinical knowledge and skills with the wisdom of accumulated experience. Moreover, supervisors must provide a stimulating learning environment, with regular opportunities for intellectual challenge. The provision of specific, constructive feedback is essential. There are potential opportunities to extend the highly performing registrar in all domains of general practice, namely communication skills and patient-centred care, applied knowledge and skills, population health, professionalism, and organisation and legal issues. Specific teaching strategies include role-play, video-consultation review, random case analysis, posing hypothetical clinical scenarios, role modelling and teaching other learners. © 2014 John Wiley & Sons Ltd.

  10. High Performance with Prescriptive Optimization and Debugging

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo

Parallel programming is the dominant approach to achieve high performance in computing today. Correctly writing efficient and fast parallel programs is a big challenge mostly carried out by experts. We investigate optimization and debugging of parallel programs. We argue that automatic parallelization... analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective: using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback... the prescriptive debugging model, which is a user-guided model that allows the programmer to use his intuition to diagnose bugs in parallel programs. The model is scalable, yet capable enough to be general-purpose. In our evaluation we demonstrate low run-time overhead and logarithmic scalability. This enables...

  11. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  12. Scientific data storage solutions: Meeting the high-performance challenge

    Energy Technology Data Exchange (ETDEWEB)

    Krantz, D.; Jones, L.; Kluegel, L.; Ramsey, C.; Collins, W.

    1994-04-01

    The Los Alamos High-Performance Data System (HPDS) has been developed to meet data storage and data access requirements of Grand Challenge and National Security problems running in a high-performance computing environment. HPDS is a fourth-generation data storage system in which storage devices are directly connected to a network, data is transferred directly between client machines and storage devices, and software distributed on workstations provides system management and control capabilities. Essential to the success of HPDS is the ability to effectively use HIPPI networks and HIPPI-attached storage devices for high-speed data transfer. This paper focuses on the performance of the HPDS storage systems in a Cray Supercomputer environment.

  13. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    Science.gov (United States)

    Morgan, Philip E.

    2004-01-01

This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to be able to effectively model antenna problems; utilization of lessons learned in the high-order/spectral solution of swirling 3D jets in the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; development and demonstration of improved radiation-absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  14. Greenlight high-performance system (HPS) 120-W laser vaporization versus transurethral resection of the prostate for the treatment of benign prostatic hyperplasia: a meta-analysis of the published results of randomized controlled trials.

    Science.gov (United States)

    Zhou, Yan; Xue, Boxin; Mohammad, Nadeem Ahmed; Chen, Dong; Sun, Xiaofei; Yang, Jinhui; Dai, Guangcheng

    2016-04-01

To assess the efficacy and safety of Greenlight(TM) high-performance system (HPS) 120-W laser photoselective vaporization of the prostate (PVP) compared with transurethral resection of the prostate (TURP) for the treatment of benign prostatic hyperplasia (BPH). Original studies, restricted to randomized controlled trials, were retrieved from the MEDLINE, EMBASE, Google Scholar, and Cochrane Controlled Trial Register databases, searched through July 2014. The risk ratio, mean difference, and their corresponding 95% confidence intervals were calculated. Risk of bias of the enrolled trials was assessed according to the Cochrane Handbook. A total of four trials involving 559 patients were enrolled. Statistical analysis was performed with Review Manager (V5.3.3). There was no significant difference in International Prostate Symptom Score (IPSS) or maximum flow rate (Qmax) between PVP and TURP at 6-, 12-, and 24-month follow-up. Patients in the PVP group had a significantly lower risk of capsule perforation (risk ratio (RR) = 0.06, 95% confidence interval (95%CI) = 0.01 to 0.46; p = 0.007), significantly lower transfusion requirements (RR = 0.12, 95%CI = 0.03 to 0.43; p = 0.001), a shorter catheterization time (mean difference (MD) = -41.93, 95%CI = -54.87 to -28.99; p < 0.00001), and a shorter duration of hospital stay (MD = -2.09, 95%CI = -2.58 to -1.59; p < 0.00001) than those in the TURP group. Patients in the TURP group had a lower risk of re-operation (RR = 3.68, 95%CI = 1.04 to 13.00; p = 0.04) and a shorter operative time (MD = 9.28, 95%CI = 2.80 to 15.75; p = 0.005) than those in the PVP group. In addition, no statistically significant differences were detected between groups in terms of the rates of transurethral resection syndrome, urethral stricture, bladder neck contracture, incontinence, and infection. Greenlight(TM) 120-W
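The pooled effect measures quoted above (risk ratios with 95% confidence intervals) follow the standard log-RR normal approximation used by meta-analysis software such as Review Manager. A sketch, using hypothetical 2×2 counts rather than the trials' actual data:

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio and 95% CI via the log-RR normal approximation:
    a/n1 = events/total in arm 1, b/n2 = events/total in arm 2."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical pooled counts (events/total per arm), not the trial data.
rr, lo, hi = risk_ratio_ci(2, 280, 17, 279)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note the confidence interval is asymmetric around the point estimate because the normal approximation is applied on the log scale.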

  15. Efficacy of a vaporization-resection of the prostate median lobe enlargement and vaporization of the prostate lateral lobe for benign prostatic hyperplasia using a 120-W GreenLight high-performance system laser: the effect on storage symptoms.

    Science.gov (United States)

    Kim, Kang Sup; Choi, Sae Woong; Bae, Woong Jin; Kim, Su Jin; Cho, Hyuk Jin; Hong, Sung-Hoo; Lee, Ji Youl; Hwang, Tae-Kon; Kim, Sae Woong

    2015-05-01

GreenLight laser photoselective vaporization of the prostate (PVP) was established as a minimally invasive procedure to treat patients with benign prostatic hyperplasia (BPH). However, it may be difficult to achieve adequate tissue removal from a large prostate, particularly one with an enlarged median lobe. The purpose of this study was to investigate the feasibility and clinical effect of 120-W GreenLight high-performance system laser vaporization-resection for an enlarged prostate median lobe compared with vaporization only. A total of 126 patients treated from January 2010 to January 2014 had an enlarged prostate median lobe and were included in this study. Ninety-six patients underwent vaporization only (VP group), and 30 patients underwent vaporization-resection of the enlarged median lobe (VR group). The clinical outcomes were International Prostate Symptoms Score (IPSS), quality of life (QOL), maximum flow rate (Q max), and post-void residual urine volume (PVR), assessed at 1, 3, 6, and 12 months postoperatively in the two groups. The parameters were not significantly different preoperatively between the two groups, except for PVR. Operative time and laser time were shorter in the VR group than in the VP group (74.1 vs. 61.9 min and 46.7 vs. 37.8 min; P = 0.020 and 0.013, respectively), and the VR group used less energy (218.2 vs. 171.8 kJ, P = 0.025). Improved IPSS values, increased Q max, and a reduced PVR were seen in the two groups. In particular, improvements in storage IPSS values were greater at 1 and 3 months in the VR group than in the VP group (P = 0.030 and 0.022, respectively). No significant complications were detected in either group. Median lobe tissue vaporization-resection was complete, and good voiding results were achieved. Although changes in urinary symptoms were similar between patients who received the two techniques, the shorter operating time and lower energy favored the vaporization-resection technique. In

  16. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we developed a novel, mixed-precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much-needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
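The first ingredient described above, stochastic estimation of the diagonal of a matrix inverse, can be sketched in a few lines: probe with random Rademacher vectors, solve a linear system with multiple right-hand sides, and average the elementwise products. This is a generic sketch of the technique (using a dense direct solve for clarity), not the authors' mixed-precision iterative implementation:

```python
import numpy as np

def estimate_diag_inv(A, n_samples=2000, rng=None):
    """Estimate diag(A^{-1}) without forming the inverse: draw Rademacher
    probes v, solve A X = V for the block of probes, and average v * x
    elementwise. The paper replaces the dense solve below with an
    iterative, mixed-precision solver to reach quadratic cost."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    V = rng.choice([-1.0, 1.0], size=(n, n_samples))  # probe block
    X = np.linalg.solve(A, V)                         # A X = V
    return (V * X).sum(axis=1) / (V * V).sum(axis=1)

# Small SPD covariance-like matrix for the sketch.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
est = estimate_diag_inv(A, n_samples=4000, rng=1)
exact = np.diag(np.linalg.inv(A))
print(np.max(np.abs(est - exact) / exact))
```

The estimator's variance is governed by the off-diagonal mass of A^{-1}, which is why well-conditioned, diagonally dominant covariance matrices converge quickly.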

  17. Views from the Outside: How the Nonprofit Community Characterizes High Performance Nonprofit Organizations

    Directory of Open Access Journals (Sweden)

    Claudia PETRESCU

    2006-10-01

Full Text Available What high performance means for nonprofit organizations, and how to define it, is still a debated issue. Building on Paul Light's work and the Standards of Excellence Codes of Conduct developed by the Maryland Association of Nonprofit Organizations, this research presents how the community, and the nonprofit organizations themselves, view high performance and define the characteristics of high-performing organizations. The paper also analyzes how the community's perception of high performance compares with the Standards of Excellence Codes of Conduct.

  18. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both supercomputing and cloud computing, the network enables distributed applications...

  19. How usability work informed development of an insurance sales system

    DEFF Research Database (Denmark)

    Uldall-Espersen, Tobias

    2007-01-01

This paper reports a case study of a software development project in which an insurance sales system was developed. Two key persons in the project enforced usability work into the development process, and usability work became a key success factor. The usability work was comprehensive and became a significant and integrated part of the development project, and it informed both the end-product quality and the organization in which the system was implemented. The case study is based on interviews with six key persons in the project.

  20. High-performance commercial building facades

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today, will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This "emerging technology" of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a "green" image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building "works", it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to