WorldWideScience

Sample records for high model performance

  1. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex, nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
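
    The abstract's Taylor-series time integrator is not reproduced here, but the energy-dissipation property it guarantees can be illustrated with a standard convex-splitting (Eyre-type) semi-implicit step for the 1D Allen-Cahn equation; everything below (grid size, interface width eps, time step) is an assumed toy setup, not the paper's scheme.

    ```python
    import numpy as np

    # 1D Allen-Cahn u_t = eps^2 u_xx - (u^3 - u) on a periodic domain, solved
    # with a convex-splitting step: u^3 implicit (via Picard sweeps), -u explicit.
    # This classic scheme dissipates the free energy at every step.
    N, L, eps, dt = 256, 1.0, 0.02, 1e-3
    u = 0.1 * np.random.default_rng(0).standard_normal(N)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # spectral wavenumbers

    def energy(u):
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        return float(np.sum(0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1) ** 2) * (L / N))

    E0 = energy(u)
    denom = 1 + dt * eps**2 * k**2
    for _ in range(200):
        rhs = np.fft.fft(u + dt * u)                     # explicit expansive part
        v = u.copy()
        for _ in range(20):                              # Picard sweeps for the u^3 term
            v = np.real(np.fft.ifft((rhs - dt * np.fft.fft(v**3)) / denom))
        u = v
    print(f"free energy: {E0:.4f} -> {energy(u):.4f}")   # monotone decrease
    ```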

  2. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  3. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry-lab simulation practice. After a warm-up period of 2 h, tasks were performed on two high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches, with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using a pass rating and a global rating score (GRS). The participants rated a face-validity questionnaire at the end of each session. Results: The GRS improved statistically significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed more improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings, irrespective of VR status.

  4. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
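
    As a toy illustration of the distortion-minimization idea (a per-pixel cost map plus ±1 changes), the sketch below uses a made-up local-variance cost and plain LSB matching; real HUGO computes costs from high-dimensional SPAM-type features and embeds with syndrome-trellis codes, neither of which is reproduced here.

    ```python
    import numpy as np

    # Cost-based +/-1 embedding toy: change cheap (textured) pixels, avoid
    # smooth ones. The cost function and pixel-selection rule are illustrative.
    rng = np.random.default_rng(1)
    cover = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
    message = rng.integers(0, 2, size=500)

    pad = np.pad(cover, 1, mode="edge")                  # local variance as "detectability"
    local_var = np.var([pad[1:-1, 1:-1], pad[:-2, 1:-1], pad[2:, 1:-1],
                        pad[1:-1, :-2], pad[1:-1, 2:]], axis=0)
    cost = 1.0 / (local_var + 1e-3)                      # smooth regions are expensive

    order = np.argsort(cost, axis=None)[: message.size]  # cheapest pixels first
    stego = cover.copy().ravel()
    for bit, idx in zip(message, order):
        if (stego[idx] & 1) != bit:                      # LSB must change parity
            if stego[idx] == 0:    step = 1              # +1 and -1 both flip the LSB
            elif stego[idx] == 255: step = -1
            else:                  step = rng.choice([-1, 1])
            stego[idx] += step
    print("changed pixels:", int(np.sum(stego != cover.ravel())))
    ```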

  5. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    High-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  6. Kinetic Hydration Heat Modeling for High-Performance Concrete Containing Limestone Powder

    Directory of Open Access Journals (Sweden)

    Xiao-Yong Wang

    2017-01-01

    Limestone powder is increasingly used in producing high-performance concrete in the modern concrete industry. Limestone powder blended concrete has many advantages, such as increasing the early-age strength, reducing the setting time, improving the workability, and reducing the heat of hydration. This study presents a kinetic model for modeling the hydration heat of limestone blended concrete. First, an improved hydration model is proposed which considers the dilution effect and nucleation effect due to limestone powder addition. The degree of hydration is calculated using this improved hydration model. Second, the hydration heat is calculated from the degree of hydration. The effects of the water-to-binder ratio and the limestone replacement ratio on hydration heat are clarified. Third, the temperature history and temperature distribution of hardening limestone blended concrete are calculated by combining the hydration model with the finite element method. The analysis results generally agree with experimental results for high-performance concrete with various mixing proportions.
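
    A minimal sketch of the degree-of-hydration-to-heat chain described above, assuming simple first-order Arrhenius kinetics and made-up parameter values; the paper's model adds explicit dilution and nucleation terms for the limestone powder.

    ```python
    import numpy as np

    # Degree of hydration alpha(t) from assumed first-order kinetics, then
    # cumulative heat Q(t) = Q_pot * (1 - limestone) * alpha: the (1 - limestone)
    # factor is a crude stand-in for the dilution effect of replacement.
    Q_pot = 500.0                  # potential heat of cement, J/g (typical order)
    Ea, R = 40e3, 8.314            # activation energy (J/mol), gas constant
    T, T_ref, k_ref = 293.15, 293.15, 0.05   # curing temp (K), reference rate (1/h)
    limestone = 0.15               # limestone replacement ratio

    alpha, dt = 0.0, 0.1
    for _ in np.arange(0.0, 168.0, dt):      # one week, isothermal
        rate = k_ref * np.exp(-Ea / R * (1 / T - 1 / T_ref)) * (1 - alpha)
        alpha = min(1.0, alpha + rate * dt)
    Q = Q_pot * (1 - limestone) * alpha
    print(f"7-day heat ~ {Q:.0f} J/g of binder (alpha = {alpha:.2f})")
    ```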

  7. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    Science.gov (United States)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

    Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve the high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, the open-loop performance can be improved significantly at medium-to-high frequency. Combined with the feedback controller, the composite controller can then provide enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller presents the same performance as previous methods; at 900 Hz, however, the tracking error is reduced to 30.7% of that of the decoupled approach.
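
    The paper's identified model is not available here, but the principle of rate-dependent inversion can be sketched by assuming the residual dynamics act as a first-order lag with time constant tau; the inverse feedforward then adds a rate term that cancels the phase lag (all parameter values below are made up).

    ```python
    import numpy as np

    # Residual dynamics assumed: tau*dy/dt = u - y (a pure phase lag).
    # Exact rate-dependent inverse: u_ff = r + tau*dr/dt.
    tau, f, fs = 2e-4, 900.0, 1e5
    t = np.arange(0.0, 0.01, 1 / fs)
    r = np.sin(2 * np.pi * f * t)                   # desired trajectory
    u_ff = r + tau * np.gradient(r, t)              # feedforward with rate term

    def plant(u):                                   # forward-Euler first-order lag
        y = np.zeros_like(u)
        for i in range(1, len(u)):
            y[i] = y[i - 1] + (1 / fs) * (u[i - 1] - y[i - 1]) / tau
        return y

    half = len(t) // 2                              # compare after transients
    e0 = np.max(np.abs(plant(r)[half:] - r[half:]))
    e1 = np.max(np.abs(plant(u_ff)[half:] - r[half:]))
    print(f"900 Hz tracking error, plain vs rate-dependent FF: {e0:.3f} vs {e1:.3f}")
    ```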

  8. Performance Modeling and Optimization of a High Energy CollidingBeam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.
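
    The flavor of such models can be sketched with a textbook latency-bandwidth estimate for pairwise-exchange AAPC plus a node-adapter contention factor; the coefficients below are invented, whereas the paper fits its models per machine with micro-benchmarks.

    ```python
    # Toy AAPC cost model: p-1 exchange rounds, each paying latency alpha plus
    # msg_bytes/bandwidth, serialized across the cores sharing one adapter.
    def aapc_time(p, msg_bytes, alpha=2e-6, beta=1.0 / 1.0e9, cores_per_node=4):
        per_round = alpha + msg_bytes * beta         # one pairwise exchange
        return (p - 1) * per_round * cores_per_node  # adapter contention factor

    for p in (64, 256, 1024):
        print(f"p={p:5d}: {aapc_time(p, 64 * 1024):.4f} s predicted")
    ```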

  9. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

    Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  10. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time.

  11. Fracture modelling of a high performance armour steel

    Science.gov (United States)

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high-performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulations the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments performed under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, ongoing work will further characterise the fracture behaviour of Armox 500T.

  12. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.

  13. Modeling of high-density U-MO dispersion fuel plate performance

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Rest, J.; Snelgrove, J.L.

    2002-01-01

    Results from postirradiation examinations (PIE) of highly loaded U-Mo/Al dispersion fuel plates over the past several years have shown that the interaction between the metallic fuel particles and the matrix aluminum can be extensive, reducing the volume of the high-conductivity matrix phase and producing a significant volume of low-conductivity reaction-product phase. This phenomenon results in a significant decrease in fuel meat thermal conductivity during irradiation. PIE has further shown that the fuel-matrix interaction rate is a sensitive function of irradiation temperature. The interplay between fuel temperature and fuel-matrix interaction makes the development of a simple empirical correlation between the two difficult. For this reason a comprehensive thermal model has been developed to calculate temperatures throughout the fuel plate over its lifetime, taking into account the changing volume fractions of fuel, matrix and reaction-product phases within the fuel meat owing to fuel-matrix interaction; this thermal model has been incorporated into the dispersion fuel performance code designated PLATE. Other phenomena important to fuel thermal performance that are also treated in PLATE include: gas generation and swelling in the fuel and reaction-product phases, incorporation of matrix aluminum into solid solution with the unreacted metallic fuel particles, matrix extrusion resulting from fuel swelling, and cladding corrosion. The phenomena modeled also make possible a prediction of fuel plate swelling. This paper presents a description of the models and empirical correlations employed within PLATE as well as validation of code predictions against fuel performance data for U-Mo experimental fuel plates from the RERTR-3 irradiation test. (author)

  14. High-Level Performance Modeling of SAR Systems

    Science.gov (United States)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
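
    The kind of metric SAUSAGE reports can be illustrated with the textbook SAR relations for resolution and single-pulse signal-to-noise ratio; the system parameters below are invented examples, and SAUSAGE itself models far more (beam patterns, burst modes, squint, etc.).

    ```python
    import math

    # Textbook SAR metrics: slant-range resolution c/(2B), stripmap azimuth
    # resolution L/2, and the radar equation for single-pulse SNR (unit RCS).
    c = 3e8
    B, L = 80e6, 10.0                            # bandwidth (Hz), antenna length (m)
    Pt, G, lam, R = 2e3, 10**3.5, 0.24, 800e3    # power, gain (35 dB), wavelength, range
    k, T, F = 1.38e-23, 290.0, 10**0.4           # Boltzmann, ref. temp, noise figure (4 dB)

    range_res = c / (2 * B)
    azimuth_res = L / 2
    snr = Pt * G**2 * lam**2 / ((4 * math.pi) ** 3 * R**4 * k * T * B * F)
    print(f"range res {range_res:.1f} m, azimuth res {azimuth_res:.1f} m, "
          f"single-pulse SNR {10 * math.log10(snr):.1f} dB (before processing gain)")
    ```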

  15. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked-force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that is able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, and dips. For model validation, measurements of the free dynamic response per unit voltage and the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic...

  16. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    ...and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  17. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Directory of Open Access Journals (Sweden)

    Yurii Rogozhin

    2014-03-01

    This work presents a hybrid model of high-performance computations. The model is based on a membrane system (P system) where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated on two selected problems: SAT and image retrieval.

  18. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe

    2014-06-06

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages that we have access to while using PetIGA for scientific computing.

  19. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  20. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if they are implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of these algorithms are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
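
    For reference, the baseline computation that DiamondTorre reorganizes is the plain cross-stencil update of the wave equation; a minimal NumPy version (second-order scheme, periodic boundaries, assumed grid and CFL settings) is sketched below, without the GPU traversal-order optimization itself.

    ```python
    import numpy as np

    # 2D wave equation, second-order leapfrog with a 5-point cross stencil.
    N, c, dx, dt, steps = 512, 1.0, 1.0, 0.5, 100   # c*dt/dx = 0.5 <= 1/sqrt(2) (CFL)
    u_prev = np.zeros((N, N))
    u = np.zeros((N, N))
    u[N // 2, N // 2] = 1.0                          # point source
    r2 = (c * dt / dx) ** 2

    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)  # periodic cross stencil
        u_prev, u = u, 2 * u - u_prev + r2 * lap
    print("wavefront spread (nonzero cells):", int(np.count_nonzero(u)))
    ```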

  1. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    Science.gov (United States)

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance.

  2. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  3. Effects of Modeling and Tempo Patterns as Practice Techniques on the Performance of High School Instrumentalists.

    Science.gov (United States)

    Henley, Paul T.

    2001-01-01

    Examines the effect of modeling conditions and tempo patterns on high school instrumentalists' performance. Focuses on high school students (n=60) who play wind instruments. Reports that the with-model condition was superior in rhythm and tempo percentage gain when compared to the no-model condition. Includes references. (CMK)

  4. A Family of High-Performance Solvers for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2014-01-01

    In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottleneck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single-precision computation.
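
    The structure such a solver exploits can be sketched with the classical backward Riccati recursion for the unconstrained LQ subproblem, which costs O(N) in the horizon length rather than the cubic cost of a dense factorization; the system matrices below are random stand-ins, and the paper's library-quality implementation is not reproduced.

    ```python
    import numpy as np

    # Backward Riccati sweep for min sum x'Qx + u'Ru  s.t.  x_{k+1} = A x_k + B u_k.
    rng = np.random.default_rng(0)
    nx, nu, N = 4, 2, 30
    A = 0.9 * np.eye(nx) + 0.3 * rng.standard_normal((nx, nx))
    B = rng.standard_normal((nx, nu))
    Q, R = np.eye(nx), 0.1 * np.eye(nu)

    P, gains = Q.copy(), []
    for _ in range(N):                               # backward pass, O(N) stages
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)          # feedback gain at this stage
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()

    x = rng.standard_normal(nx)                      # forward pass: u_k = -K_k x_k
    for K in gains:
        x = (A - B @ K) @ x
    print("terminal state norm:", float(np.linalg.norm(x)))
    ```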

  5. Behavioral Model of High Performance Camera for NIF Optics Inspection

    International Nuclear Information System (INIS)

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that will model the behavior of the high-performance Spectral Instruments 1000-series charge-coupled device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time, thus increasing the total number of experiments NIF can conduct during its 30-year lifetime. Development of these rapid processes necessitates extensive offline software automation -- especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows many instances to run concurrently, so multiple developers can test their software at the same time. Thus it is beneficial to construct separate software that exactly mimics the behavior and response of the real SI-1000 camera.

  6. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I

    International Nuclear Information System (INIS)

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.

  7. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high-performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six-degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner; elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
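
    The table look-up with linear interpolation mentioned above is the standard bilinear scheme; a sketch over a made-up lift-coefficient table (the real wind-tunnel database is, of course, not reproduced) follows.

    ```python
    import numpy as np

    # Bilinear table look-up: clamp to the grid, then blend the four corners.
    alpha_grid = np.linspace(-10, 90, 21)            # angle of attack, deg
    beta_grid = np.linspace(-20, 20, 9)              # sideslip, deg
    CL = 0.06 * alpha_grid[:, None] * np.cos(np.radians(beta_grid))[None, :]  # fake data

    def lookup(table, xg, yg, x, y):
        i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
        j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
        tx = (x - xg[i]) / (xg[i + 1] - xg[i])
        ty = (y - yg[j]) / (yg[j + 1] - yg[j])
        return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j] +
                (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])

    print("CL(alpha=12.3, beta=-3.7) =", round(lookup(CL, alpha_grid, beta_grid, 12.3, -3.7), 3))
    ```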

  8. FRAPCON-3: Modifications to fuel rod material properties and performance models for high-burnup application

    International Nuclear Information System (INIS)

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1997-12-01

    This volume describes the fuel rod material and performance models that were updated for the FRAPCON-3 steady-state fuel rod performance code. The property and performance models were changed to account for behavior at extended burnup levels up to 65 GWd/MTU. The properties and performance models updated were the fission gas release, fuel thermal conductivity, fuel swelling, fuel relocation, radial power distribution, solid-solid contact gap conductance, cladding corrosion and hydriding, cladding mechanical properties, and cladding axial growth. Each updated property and model was compared to well-characterized data up to high burnup levels. The installation of these properties and models in the FRAPCON-3 code, along with input instructions, is provided in Volume 2 of this report, and Volume 3 provides a code assessment based on comparison to integral performance data. The updated FRAPCON-3 code is intended to replace the earlier codes FRAPCON-2 and GAPCON-THERMAL-2. 94 refs., 61 figs., 9 tabs.

  9. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    Science.gov (United States)

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
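
    A much-simplified "wave" model conveys the occupancy-performance connection for a compute-bound kernel: resident blocks execute in waves, so runtime scales with the number of waves. The hardware numbers below are assumptions, and the paper's calibrated stream-level models are considerably richer.

    ```python
    import math

    # Toy occupancy model: blocks run in waves of (SMs x resident blocks per SM).
    def kernel_time_us(blocks, block_time_us, sms=15, blocks_per_sm=4):
        waves = math.ceil(blocks / (sms * blocks_per_sm))
        return waves * block_time_us

    for blocks in (60, 600, 6000):
        print(blocks, "blocks ->", kernel_time_us(blocks, 50.0), "us predicted")
    ```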

  10. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems, where there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones, and additive noise. We were able either to develop new, or to transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  11. SUMO, System performance assessment for a high-level nuclear waste repository: Mathematical models

    International Nuclear Information System (INIS)

    Eslinger, P.W.; Miley, T.B.; Engel, D.W.; Chamberlain, P.J. II.

    1992-09-01

    Following completion of the preliminary risk assessment of the potential Yucca Mountain Site by Pacific Northwest Laboratory (PNL) in 1988, the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) requested the Performance Assessment Scientific Support (PASS) Program at PNL to develop an integrated system model and computer code that provides performance and risk assessment analysis capabilities for a potential high-level nuclear waste repository. The system model that has been developed addresses the cumulative radionuclide release criteria established by the US Environmental Protection Agency (EPA) and estimates population risks in terms of dose to humans. The system model embodied in the SUMO (System Unsaturated Model) code will also allow benchmarking of other models being developed for the Yucca Mountain Project. The system model has three natural divisions: (1) source term, (2) far-field transport, and (3) dose to humans. This document gives a detailed description of the mathematics of each of these three divisions. Each of the governing equations employed is based on modeling assumptions that are widely accepted within the scientific community.

  12. Corrosion models for predictions of performance of high-level radioactive-waste containers

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, J.C.; McCright, R.D. [Lawrence Livermore National Lab., CA (United States); Gdowski, G.E. [KMI Energy Services, Livermore, CA (United States)

    1991-11-01

    The present plan for disposal of high-level radioactive waste in the US is to seal it in containers before emplacement in a geologic repository. A proposed site at Yucca Mountain, Nevada, is being evaluated for its suitability as a geologic repository. The containers will probably be made of either an austenitic or a copper-based alloy. Models of alloy degradation are being used to predict the long-term performance of the containers under repository conditions. The models cover uniform oxidation and corrosion, localized corrosion, and stress corrosion cracking, and are applicable to worst-case scenarios of container degradation. This paper reviews several of the models.

  13. Micromechanical Models of Mechanical Response of High Performance Fibre Reinforced Cement Composites

    DEFF Research Database (Denmark)

    Li, V. C.; Mihashi, H.; Alwan, J.

    1996-01-01

    The state-of-the-art in micromechanical modeling of the mechanical response of HPFRCC is reviewed. Much progress in modeling has been made over the last decade, to the point that certain properties of composites can be carefully designed using the models as analytic tools. As a result, a new generation of FRC with high performance and economic viability is in sight. However, utilization of micromechanical models for a more comprehensive set of important HPFRCC properties awaits further investigations into the fundamental mechanisms governing composite properties, as well as integrative efforts across responses to different load types. Further, micromechanical models for HPFRCC behavior under complex loading histories, including those in fracture, fatigue and multiaxial loading, are urgently needed in order to optimize HPFRCC microstructures and enable predictions of such materials in structures.

  14. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D., E-mail: bdwirth@utk.edu [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Nuclear Science and Engineering Directorate, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hammond, K.D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Krasheninnikov, S.I. [University of California, San Diego, La Jolla, CA (United States); Maroudas, D. [University of Massachusetts, Amherst, Amherst, MA 01003 (United States)

    2015-08-15

    The performance of plasma-facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and the formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma-surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten-helium interactions are reviewed, including such processes as helium clustering, which serves as the nucleus for gas bubbles, and trap mutation, dislocation loop punching, and bubble bursting, which together initiate surface morphological modification.

  15. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    International Nuclear Information System (INIS)

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma-facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and the formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma-surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten-helium interactions are reviewed, including such processes as helium clustering, which serves as the nucleus for gas bubbles, and trap mutation, dislocation loop punching, and bubble bursting, which together initiate surface morphological modification.

  16. Development of a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI)

    International Nuclear Information System (INIS)

    Huang Bormin; Mielikainen, Jarno; Oh, Hyunjong; Allen Huang, Hung-Lung

    2011-01-01

    Satellite-observed radiance is a nonlinear functional of surface properties and atmospheric temperature and absorbing-gas profiles, as described by the radiative transfer equation (RTE). In the era of hyperspectral sounders with thousands of high-resolution channels, the computation of the radiative transfer model becomes more time-consuming. The radiative transfer model performance in operational numerical weather prediction systems still limits the number of channels we can use in hyperspectral sounders to only a few hundred. To take full advantage of such high-resolution infrared observations, a computationally efficient radiative transfer model is needed to facilitate satellite data assimilation. In recent years the programmable commodity graphics processing unit (GPU) has evolved into a highly parallel, multi-threaded, many-core processor with tremendous computational speed and very high memory bandwidth. The radiative transfer model is very suitable for GPU implementation, taking advantage of the hardware's efficiency and parallelism, where radiances of many channels can be calculated in parallel. In this paper, we develop a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI), launched in 2006 onboard the first of the European meteorological polar-orbiting satellites, MetOp-A. Each IASI spectrum has 8461 spectral channels. The IASI radiative transfer model consists of three modules. The first module, for computing the regression predictors, takes less than 0.004% of CPU time, while the second module, for transmittance computation, and the third module, for radiance computation, take approximately 92.5% and 7.5%, respectively. Our GPU-based IASI radiative transfer model is developed to run on a low-cost personal supercomputer with four GPUs with 960 compute cores in total, delivering nearly 4 TFlops of theoretical peak performance. By massively parallelizing the second and third modules, we reached a 364x speedup.

  17. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    Science.gov (United States)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and explicit shared memory) are available to the users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times less than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times less than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, and aligning the data cache) are discussed. Comparative performance of NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems, such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP1, is presented.

  18. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    Science.gov (United States)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM-UV photodetector (PD) based on a dual wide-band-gap material (DM) engineering concept is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor's photocurrent and the device properties are developed, incorporating the impact of the DM design on the device's photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive the appropriate photo-generated carriers without an externally applied voltage. This phenomenon achieves the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, which can yield a favorable self-powered MSM-based UV-PD. It is found that the proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior I_ON/I_OFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.

  19. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    Science.gov (United States)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of the performance of several SEB models at the same site are limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high-biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high-biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions, as they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes when partitioning the available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the necessity of inclusion of soil moisture or plant water stress...
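
    All five models share the single-source energy-balance residual at their core; the arithmetic, with example flux values (the models differ mainly in how they estimate H from the satellite data), is simply:

    ```python
    # LE = Rn - G - H, then convert latent heat flux to a water-equivalent depth.
    Rn, G, H = 550.0, 60.0, 180.0        # example fluxes at overpass, W m-2
    LE = Rn - G - H                      # latent heat flux, W m-2
    lam = 2.45e6                         # latent heat of vaporization, J kg-1
    et_mm_per_hour = LE / lam * 3600.0   # kg m-2 s-1 equals mm s-1
    print(f"instantaneous ET ~ {et_mm_per_hour:.2f} mm/h")
    ```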

  20. Photons, photosynthesis, and high-performance computing: challenges, progress, and promise of modeling metabolism in green algae

    International Nuclear Information System (INIS)

    Chang, C H; Graf, P; Alber, D M; Kim, K; Murray, G; Posewitz, M; Seibert, M

    2008-01-01

    The complexity associated with biological metabolism considered at a kinetic level presents a challenge to quantitative modeling. In particular, the relatively sparse knowledge of parameters for enzymes with known kinetic responses is problematic. The possible space of these parameters is high-dimensional, and sampling such a space typifies a combinatorial explosion of possible dynamic states. However, with sufficient quantitative transcriptomics, proteomics, and metabolomics data at hand, these challenges could be met by high-performance software with sampling, fitting, and optimization capabilities. With this in mind, we present the High-Performance Systems Biology Toolkit (HiPer SBTK), an evolving software package to simulate, fit, and optimize metabolite concentrations and fluxes within the space of rate and binding parameters associated with detailed enzyme kinetic models. We present our chosen modeling paradigm for the formulation of metabolic pathway models, the means to address the challenge of representing such models in a precise and persistent fashion using the standardized Systems Biology Markup Language, and our second-generation model of H2-associated Chlamydomonas metabolism. Processing of such models for hierarchically parallelized simulation and optimization, job specification by the user through a GUI interface, software capabilities and initial scaling data, and the mapping of the computation to biological questions are also discussed. Moreover, we present near-term future software and model development goals.

  1. Simulink models for performance analysis of high speed DQPSK modulated optical link

    International Nuclear Information System (INIS)

    Sharan, Lucky; Rupanshi; Chaubey, V. K.

    2016-01-01

    This paper presents the design approach for the development of simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in the link design, with their behavior represented initially by theoretical interpretation, including the transmitter topology, the Mach-Zehnder modulator (MZM) module, and the propagation model for optical fibers, thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and they can also be enhanced or removed from the model as per the design requirements. We describe the detailed operation and need of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in the future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can easily be investigated using such models.

  2. Simulink models for performance analysis of high speed DQPSK modulated optical link

    Energy Technology Data Exchange (ETDEWEB)

    Sharan, Lucky, E-mail: luckysharan@pilani.bits-pilani.ac.in; Rupanshi, E-mail: f2011222@pilani.bits-pilani.ac.in; Chaubey, V. K., E-mail: vkc@pilani.bits-pilani.ac.in [EEE Department, BITS-Pilani, Rajasthan, 333031 (India)

    2016-03-09

    This paper presents the design approach for the development of simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in the link design, with their behavior represented initially by theoretical interpretation, including the transmitter topology, the Mach-Zehnder modulator (MZM) module, and the propagation model for optical fibers, thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and they can also be enhanced or removed from the model as per the design requirements. We describe the detailed operation and need of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in the future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can easily be investigated using such models.

  3. Ion thruster performance model

    International Nuclear Information System (INIS)

    Brophy, J.R.

    1984-01-01

    A model of ion thruster performance is developed for high-flux-density, cusped-magnetic-field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high-energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr, and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature.
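
    The central bookkeeping step in models of this family is often quoted in the following form (a hedged sketch with assumed notation): the energy cost per beam ion follows from the plasma ion energy cost and the fraction of plasma ions extracted into the beam,

```latex
% Assumed notation: \varepsilon_B = beam ion energy cost,
% \varepsilon_P = average plasma ion energy cost, f_B = extracted-ion fraction.
\varepsilon_B = \frac{\varepsilon_P}{f_B}
```

    so that any primary-electron losses that raise the plasma ion energy cost, or geometry changes that lower the extracted fraction, directly raise the discharge energy spent per beam ion.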

  4. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  5. Heat transfer modeling in exhaust systems of high-performance two-stroke engines

    OpenAIRE

    Lujan Martinez, José Manuel; Climent Puchades, Héctor; Olmeda González, Pablo Cesar; Jimenez Macedo, Victor Daniel

    2014-01-01

    Heat transfer from the hot gases to the wall in exhaust systems of high-performance two-stroke engines is underestimated when using steady-state, fully developed flow empirical correlations. This fact is detected when comparing measured and modeled pressure pulses at different positions in the exhaust system. This can be explained by taking into account that classical expressions have been validated for fully developed flows, a situation that is far from the flow behavior in reciprocating interna...

  6. Modelling and Development of a High Performance Milling Process with Monolithic Cutting Tools

    International Nuclear Information System (INIS)

    Ozturk, E.; Taylor, C. M.; Turner, S.; Devey, M.

    2011-01-01

    Critical aerospace components usually require difficult-to-machine workpiece materials like nickel-based alloys. Moreover, there is a pressing need to maximize the productivity of machining operations. This need can be satisfied by selecting higher feed velocity and axial and radial depths, but several problems may then arise during machining. Due to the high cutting speeds used in high performance machining, tool life may be unacceptably low. If the magnitudes of the cutting forces are high, out-of-tolerance static form errors may result; in extreme cases, the cutting tool may break apart. Forced vibrations may deteriorate the surface quality, and chatter vibrations may develop if the selected parameters result in instability. In this study, in order to deal with the tool life issue, several experimental cuts are made with different tool geometries, and the best combination in terms of tool life is selected. A force model is developed and its results are verified against experimental results. The force model is used to predict the effect of process parameters on cutting forces. In order to account for the other concerns, such as static form errors and forced and chatter vibrations, additional process models are currently under development.
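
    For orientation, the sketch below shows a generic mechanistic cutting-force relation of the broad kind that such force models build on; the coefficients, the chip-thickness expression and all values are illustrative assumptions, not the identified model from this study.

```python
# A hedged sketch of a textbook mechanistic milling force relation:
# instantaneous chip thickness h = ft * sin(phi), scaled by cutting and
# edge force coefficients. All numbers are placeholders for illustration.
import math

def tangential_force(phi_deg, ft_mm, ap_mm, ktc=2000.0, kte=30.0):
    """Per-tooth tangential force (N) at immersion angle phi (degrees)."""
    phi = math.radians(phi_deg)
    if not 0.0 < phi < math.pi:      # tooth is out of cut
        return 0.0
    h = ft_mm * math.sin(phi)        # instantaneous chip thickness (mm)
    return ktc * ap_mm * h + kte * ap_mm

for phi in (30, 90, 150):
    print(phi, round(tangential_force(phi, ft_mm=0.1, ap_mm=2.0), 1))
```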

  7. Novel Complete Probabilistic Models of Random Variation in High Frequency Performance of Nanoscale MOSFET

    Directory of Open Access Journals (Sweden)

    Rawid Banchuin

    2013-01-01

    Full Text Available Novel probabilistic models of the random variations in nanoscale MOSFET high frequency performance, defined in terms of gate capacitance and transition frequency, have been proposed. As the transition frequency variation has also been considered, the proposed models are considered complete, unlike their predecessors, which take only the gate capacitance variation into account. The proposed models have been found to be both analytic and physical-level oriented, as they are precise mathematical expressions in terms of physical parameters. Since an up-to-date model of the variation in MOSFET characteristics induced by physical-level fluctuation has been used, the part of the proposed models covering gate capacitance is more accurate and physical-level oriented than its predecessor. The proposed models have been verified, based on 65 nm CMOS technology, using Monte-Carlo SPICE simulations of benchmark circuits and Kolmogorov-Smirnov tests, and found to be highly accurate since they fit the Monte-Carlo-based analysis results with 99% confidence. Hence, these novel models have been found to be versatile for the statistical/variability-aware analysis and design of nanoscale MOSFET-based analog/mixed signal circuits and systems.

  8. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML). The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony.

  9. High burnup issues and modelling strategies

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2005-01-01

    The performance of high burnup fuel is affected by a number of phenomena, such as conductivity degradation, a modified radial flux profile, fission gas release from high burnup structures, PCMI, and burnup-dependent thermo-mechanical properties. Modelling strategies for some of these phenomena are available in the literature and can be readily incorporated into a fuel performance modelling code. The computer code FAIR has been developed in BARC over the years to evaluate fuel performance at extended burnup and to model fuel rods for advanced fuel cycles. The present paper deals with the high burnup issues in fuel pins, their modelling strategies, and the results of case studies specifically involving high burnup fuel. (author)

  10. Hagfish slime threads as a biomimetic model for high performance protein fibres

    International Nuclear Information System (INIS)

    Fudge, Douglas S; Hillis, Sonja; Levy, Nimrod; Gosline, John M

    2010-01-01

    Textile manufacturing is one of the largest industries in the world, and synthetic fibres represent two-thirds of the global textile market. Synthetic fibres are manufactured from petroleum-based feedstocks, which are becoming increasingly expensive as demand for finite petroleum reserves continues to rise. For the last three decades, spider silks have been held up as a model that could inspire the production of protein fibres exhibiting high performance and ecological sustainability, but unfortunately, artificial spider silks have yet to fulfil this promise. Previous work on the biomechanics of protein fibres from the slime of hagfishes suggests that these fibres might be a superior biomimetic model to spider silks. Based on the fact that the proteins within these 'slime threads' adopt conformations that are similar to those in spider silks when they are stretched, we hypothesized that draw processing of slime threads should yield fibres that are comparable to spider dragline silk in their mechanical performance. Here we show that draw-processed slime threads are indeed exceptionally strong and tough. We also show that post-drawing steps such as annealing, dehydration and covalent cross-linking can dramatically improve the long-term dimensional stability of the threads. The data presented here suggest that hagfish slime threads are a model that should be pursued in the quest to produce fibres that are ecologically sustainable and economically viable.

  11. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  12. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
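
    A hedged sketch of the model-based decision-procedure idea described above: predict execution time for each feasible processor grid from simple compute and communication terms, then choose the fastest. The time formula and cost coefficients below are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative model-based grid selection: rank every r x c factorization of
# the processor count by a toy compute + communication cost and pick the best.
def predicted_time(n, r, c, flop_rate=1e8, alpha=1e-4, beta=1e-7):
    work = (2.0 * n**3) / (r * c * flop_rate)           # balanced compute
    msgs = n * (r + c)                                  # messages per sweep
    comm = msgs * alpha + (n * n / min(r, c)) * beta    # startup + transfer
    return work + comm

def best_grid(n, max_procs):
    configs = [(r, max_procs // r) for r in range(1, max_procs + 1)
               if max_procs % r == 0]
    return min(configs, key=lambda rc: predicted_time(n, *rc))

print(best_grid(200, 512))   # e.g., choose a grid for a 200 x 200 matrix
```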

  13. Neutronic and Thermal-hydraulic Modelling of High Performance Light Water Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, Malla [VTT Technical Research Centre of Finland, P.O.Box 1000, FI02044 VTT (Finland)

    2008-07-01

    The High Performance Light Water Reactor (HPLWR), studied in the EU project 'HPLWR2', uses water at supercritical pressures as coolant and moderator to achieve a higher core outlet temperature, and thus higher efficiency, than present reactors. At VTT Technical Research Centre of Finland, the thermal-hydraulics functionality of the coupled reactor dynamics code TRAB3D/SMABRE was extended to supercritical pressures for the analyses of the HPLWR. Input models for neutronics and thermal-hydraulics were made for TRAB3D/SMABRE according to the latest HPLWR design. A preliminary analysis was performed in which the capability of SMABRE to handle the transition from supercritical to subcritical pressures was demonstrated. Parameterized two-group cross sections for TRAB3D neutronics were received from the Hungarian Academy of Sciences KFKI Atomic Energy Research Institute, together with a subroutine for handling them. PSG, a new Monte Carlo transport code developed at VTT, was also used to generate two-group constants for the HPLWR, and comparisons were made with the KFKI cross sections and MCNP calculations. (author)

  14. Neutronic and Thermal-hydraulic Modelling of High Performance Light Water Reactor

    International Nuclear Information System (INIS)

    Seppaelae, Malla

    2008-01-01

    The High Performance Light Water Reactor (HPLWR), studied in the EU project 'HPLWR2', uses water at supercritical pressures as coolant and moderator to achieve a higher core outlet temperature, and thus higher efficiency, than present reactors. At VTT Technical Research Centre of Finland, the thermal-hydraulics functionality of the coupled reactor dynamics code TRAB3D/SMABRE was extended to supercritical pressures for the analyses of the HPLWR. Input models for neutronics and thermal-hydraulics were made for TRAB3D/SMABRE according to the latest HPLWR design. A preliminary analysis was performed in which the capability of SMABRE to handle the transition from supercritical to subcritical pressures was demonstrated. Parameterized two-group cross sections for TRAB3D neutronics were received from the Hungarian Academy of Sciences KFKI Atomic Energy Research Institute, together with a subroutine for handling them. PSG, a new Monte Carlo transport code developed at VTT, was also used to generate two-group constants for the HPLWR, and comparisons were made with the KFKI cross sections and MCNP calculations. (author)

  15. Explaining high and low performers in complex intervention trials: a new model based on diffusion of innovations theory.

    Science.gov (United States)

    McMullen, Heather; Griffiths, Chris; Leber, Werner; Greenhalgh, Trisha

    2015-05-31

    Complex intervention trials may require health care organisations to implement new service models. In a recent cluster randomised controlled trial, some participating organisations achieved high recruitment, whereas others found it difficult to assimilate the intervention and were low recruiters. We sought to explain this variation and develop a model to inform organisational participation in future complex intervention trials. The trial included 40 general practices in a London borough with high HIV prevalence. The intervention was offering a rapid HIV test as part of the New Patient Health Check. The primary outcome was mean CD4 cell count at diagnosis. The process evaluation consisted of several hundred hours of ethnographic observation, 21 semi-structured interviews and analysis of routine documents (e.g., patient leaflets, clinical protocols) and trial documents (e.g., inclusion criteria, recruitment statistics). Qualitative data were analysed thematically using, and where necessary extending, Greenhalgh et al.'s model of diffusion of innovations. Narrative synthesis was used to prepare case studies of four practices representing maximum variety in clinicians' interest in HIV (assessed by level of serological testing prior to the trial) and performance in the trial (high vs. low recruiters). High-recruiting practices were, in general though not invariably, also innovative practices. They were characterised by strong leadership, good managerial relations, readiness for change, a culture of staff training and available staff time ('slack resources'). Their front-line staff believed that patients might benefit from the rapid HIV test ('relative advantage'), were emotionally comfortable administering it ('compatibility'), skilled in performing it ('task issues') and made creative adaptations to embed the test in local working practices ('reinvention'). Early experience of a positive HIV test ('observability') appeared to reinforce staff commitment to recruiting

  16. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  17. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Science.gov (United States)

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

    Empirical melt (EM) models are often preferred to surface energy balance (SEB) models for calculating melt amounts of snow and ice in the hydrological modelling of high-elevation catchments. The most common reasons for this choice are that, in comparison to SEB models, EM models require less meteorological data, complexity and computational cost. However, EM models assume that melt can be characterized by means of a few index variables only, and their results depend strongly on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot easily be reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model at dry high-elevation sites where sublimation amounts, which are not explicitly accounted for in the EM model, represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWSs), collected during the ablation season 2013-14 at elevations between 3466 and 4775 m asl on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis with data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied at each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, the results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we
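
    For reference, a minimal sketch of an ETI-type melt model follows (the functional form follows Pellicciotti et al. 2005; the parameter values are placeholders, not the calibrated values from this study):

```python
# Enhanced Temperature Index melt: a linear function of air temperature T and
# net shortwave radiation (1 - albedo) * G above a threshold temperature TT.
# TF, SRF and TT are placeholder values for illustration only.
def eti_melt(T, G, albedo, TF=0.05, SRF=0.0094, TT=1.0):
    """Hourly melt (mm w.e.) from air temperature (degC) and shortwave (W m-2)."""
    if T > TT:
        return TF * T + SRF * (1.0 - albedo) * G
    return 0.0

print(eti_melt(T=3.2, G=650.0, albedo=0.35))
```

    The sketch makes the limitation discussed above visible: nothing in the model can represent an energy sink such as sublimation, so any calibration at high-sublimation sites must fold that loss into the empirical factors.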

  18. LL13-MatModelRadDetect-PD2Jf Final Report: Materials Modeling for High-Performance Radiation Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lordi, Vincenzo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-11

    The aims of this project are to enable rational materials design for select high-payoff challenges in radiation detection materials by using state-of-the-art predictive atomistic modeling techniques. Three specific high-impact challenges are addressed: (i) design and optimization of electrical contact stacks for TlBr detectors to stabilize temporal response at room temperature; (ii) identification of chemical design principles for host glass materials for large-volume, low-cost, high-performance glass scintillators; and (iii) determination of the electrical impacts of dislocation networks in Cd1-xZnxTe (CZT) that limit its performance and usable single-crystal volume. The specific goals are to establish design and process strategies to achieve improved materials for high performance detectors. Each of the major tasks is discussed below in three sections, which include the goals for the task and a summary of the major results, followed by a listing of publications that contain the full details, including details of the methodologies used. The appendix lists 12 conference presentations given for this project, including 1 invited talk and 1 invited poster.

  19. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication in numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  20. High performance modeling of atmospheric re-entry vehicles

    International Nuclear Information System (INIS)

    Martin, Alexandre; Scalabrin, Leonardo C; Boyd, Iain D

    2012-01-01

    Re-entry vehicles designed for space exploration are usually equipped with thermal protection systems made of ablative material. In order to properly model and predict the aerothermal environment of the vehicle, it is imperative to account for the gases produced by ablation processes. In the case of charring ablators, where an inner resin is pyrolyzed at a relatively low temperature, the composition of the gas expelled into the boundary layer is complex and may lead to thermal chemical reactions that cannot be captured with simple flow chemistry models. In order to obtain better predictions, an appropriate gas flow chemistry model needs to be included in the CFD calculations. Using a recently developed chemistry model for ablating carbon-phenolic-in-air species, a CFD calculation of the Stardust re-entry at 71 km is presented. The code used for this purpose has been designed to take advantage of the nature of the problem and therefore remains very efficient when a large number of chemical species is involved. The CFD results demonstrate the need for such a chemistry model when modeling the flow field around an ablative material. Modeling of the nonequilibrium radiation spectra is also presented and compared to the experimental data obtained during the Stardust re-entry by the Echelle instrument. The predicted emission from the CN lines compares quite well with the experimental results, demonstrating the validity of the current approach.

  1. Predicting High-Power Performance in Professional Cyclists.

    Science.gov (United States)

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess whether short-duration (5 to ~300 s) high-power performance can be accurately predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the power output predicted by the APR model. The power output predicted by the model showed very large to nearly perfect correlations with the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the power output predicted by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship, with a single exponential-decay model describing the decrement in power with increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample of elite cyclists.
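
    A minimal sketch of the APR power-duration idea described above: predicted all-out power decays exponentially from sprint peak power toward maximal aerobic power (MAP) as effort duration grows. The decay constant and example values below are illustrative assumptions, not the study's fitted parameters.

```python
# Hedged APR-style prediction: P(t) = MAP + (Ppeak - MAP) * exp(-k * t).
import math

def apr_predicted_power(t_s, map_w, peak_w, k=0.024):
    apr = peak_w - map_w                      # anaerobic power reserve (W)
    return map_w + apr * math.exp(-k * t_s)   # exponential decay with duration

for t in (5, 30, 120, 300):
    print(t, "s:", round(apr_predicted_power(t, map_w=420, peak_w=1300)), "W")
```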

  2. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Energy Technology Data Exchange (ETDEWEB)

    Heikkinen, Aatos; Hektor, Andi; Karimaki, Veikko; Linden, Tomas [Helsinki Univ., Institute of Physics (Finland)]

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model, implemented in the general detector simulation toolkit Geant4, on a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  3. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows one to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  4. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  5. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    International Nuclear Information System (INIS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W.; Edmon, P.

    2017-01-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  6. NONLINEAR-REGRESSION METHODS FOR MODELING OF HETEROSCEDASTIC RETENTION DATA IN REVERSED-PHASE HIGH-PERFORMANCE LIQUID-CHROMATOGRAPHY

    NARCIS (Netherlands)

    HENDRIKS, MMWB; COENEGRACHT, PMJ; DOORNBOS, DA

    1994-01-01

    New models have been developed that accurately describe the response surfaces of capacity factors that are a function of changes in the pH and the fraction of organic modifier in reversed-phase high-performance liquid chromatography (RP-HPLC). The purpose of this article is to illustrate one of the

  7. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior across these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  8. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) the Comprehensive Test Ban Treaty and (2) the Stockpile Life Extension Program, which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve confidence in computational results through demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan.

  9. A high performance finite element model for wind farm modeling in forested areas

    Science.gov (United States)

    Owen, Herbert; Avila, Matias; Folch, Arnau; Cosculluela, Luis; Prieto, Luis

    2015-04-01

    Wind energy has grown significantly during the past decade and is expected to continue growing in the fight against climate change. In the search for new land where the impact of wind turbines is small, several wind farms are currently being installed in forested areas. In order to optimize the distribution of the wind turbines within a wind farm, the Reynolds-Averaged Navier-Stokes equations are solved over the domain of interest using either commercial or in-house codes. The existence of a canopy alters the atmospheric boundary layer wind profile close to the ground. Therefore, in order to obtain a more accurate representation of the flow in forested areas, modifications to both the Navier-Stokes equations and the turbulence-variable equations need to be introduced. Several existing canopy models have been tested on an academic problem, showing that the one proposed by Sogachev et al. gives the best results. This model has been implemented in an in-house CFD solver named Alya, a high-performance unstructured finite element code designed from scratch to run on the world's biggest supercomputers. Its scalability has recently been tested up to 100000 processors on both American and European supercomputers. During the past three years the code has been tuned and tested for wind energy problems. Recent efforts have focused on the canopy model, following industry needs. In this work we benchmark our results on a wind farm that is currently being designed by Scottish Power and Iberdrola in Scotland. This is a very interesting real case with extensive experimental data from five different masts with anemometers at several heights. It is used to benchmark both the wind profiles and the speed-up obtained between different masts. Sixteen different wind directions are simulated. The numerical model provides very satisfactory results both for the masts that are affected by the canopy and for those that are not influenced by it.
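
    For orientation, canopy models of this family typically add a drag sink to the momentum equations (a hedged sketch with assumed notation; the Sogachev et al. model additionally modifies the turbulence equations, which is not shown here):

```latex
% Assumed notation: \rho = air density, C_d = canopy drag coefficient,
% a = leaf area density, |U| = local wind speed, u_i = velocity component.
S_{u_i} = -\rho \, C_d \, a \, |U| \, u_i
```

    The sink acts only where the leaf area density is nonzero, which is what bends the boundary-layer wind profile inside and just above the forest.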

  10. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    OpenAIRE

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model, the better the cancellation, and with that, the higher the performance of the controller. In this paper a new control system is presented that combines NDI with multivariate simplex spline based con...

  11. Nonlinear Multivariate Spline-Based Control Allocation for High-Performance Aircraft

    NARCIS (Netherlands)

    Tol, H.J.; De Visser, C.C.; Van Kampen, E.; Chu, Q.P.

    2014-01-01

    High performance flight control systems based on the nonlinear dynamic inversion (NDI) principle require highly accurate models of aircraft aerodynamics. In general, the accuracy of the internal model determines to what degree the system nonlinearities can be canceled; the more accurate the model,

  12. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in current THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  13. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in current THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  14. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing the dominant processes between reality and model, or to better understanding when thresholds and non-linearities drive model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and of model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, the temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with the groundwater parameters dominating in the recession phases and ESCO in baseflow and resaturation periods. Surface runoff parameters show high sensitivities during precipitation events in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period in the hydrological catchment. The second step was the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into
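
    The sketch below illustrates the general idea of temporally resolved performance diagnostics with a moving-window Nash-Sutcliffe efficiency; the window length and toy series are assumptions for illustration, and this is not the finger-print clustering used in the study.

```python
# A moving-window NSE flags *periods* of poor model performance instead of
# summarizing a whole simulation with a single score.
import numpy as np

def windowed_nse(obs, sim, window=30):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    out = np.full(obs.size, np.nan)
    for end in range(window, obs.size + 1):
        o, s = obs[end - window:end], sim[end - window:end]
        denom = np.sum((o - o.mean()) ** 2)
        if denom > 0:
            out[end - 1] = 1.0 - np.sum((s - o) ** 2) / denom
    return out

t = np.arange(365)
obs = 5 + 3 * np.sin(2 * np.pi * t / 365)                # toy observed series
sim = obs + np.random.default_rng(1).normal(0, 0.5, t.size)
nse = windowed_nse(obs, sim)
print(round(np.nanmin(nse), 3), round(np.nanmax(nse), 3))
```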

  15. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the increasing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining this bursty user behavior is important for HPC workload prediction and scheduling, which have a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in a real-world system. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally efficient online learning algorithm for identifying social groups are proposed. Both the offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.

  16. Thermal modelling of PV module performance under high ambient temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Diarra, D.C.; Harrison, S.J. [Queen' s Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering

    2005-07-01

    When predicting the performance of photovoltaic (PV) generators, the actual performance is typically lower than test results obtained under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cell and other multilayer components. The increase in temperature translates into a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a Tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards both the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.
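
    As a rough, hedged counterpart to the detailed thermal model, the sketch below uses the widely taught NOCT approximation for module temperature together with a linear efficiency derating; the coefficients are typical textbook values, not the paper's calibrated model.

```python
# NOCT approximation: T_cell = T_amb + (NOCT - 20) / 800 * G, followed by a
# linear temperature derating of the STC efficiency. Values are placeholders.
def cell_temperature(t_amb_c, irradiance_wm2, noct_c=47.0):
    return t_amb_c + (noct_c - 20.0) / 800.0 * irradiance_wm2

def derated_efficiency(t_cell_c, eta_stc=0.17, gamma_per_c=0.004, t_stc_c=25.0):
    return eta_stc * (1.0 - gamma_per_c * (t_cell_c - t_stc_c))

t_cell = cell_temperature(t_amb_c=38.0, irradiance_wm2=950.0)  # hot climate
print(round(t_cell, 1), "degC ->", round(derated_efficiency(t_cell), 4))
```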

  17. Physical models for high burnup fuel

    International Nuclear Information System (INIS)

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper, some models of processes in high burnup fuel developed at the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of fuel heat conductivity, the radial profiles of burnup and plutonium accumulation, restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high burnup fuel can be described rather accurately by simplified models within the framework of a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis, which serves as a test for the correct choice of allowable simplifications. This approach was applied at the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in highly reliable predictions in the simulation of high burnup fuel.

  18. High performance MEAs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-15

    The aim of the present project is, through modeling and material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials for use in PEMFCs, as well as computational modeling of heat and mass transfer processes, predominantly in LT PEMFCs, in order to improve the fundamental understanding of multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence also to a reduced catalyst loading for the same performance. The consortium has obtained significant research results and progress on new catalyst materials and substrates, with promising enhanced performance, and on fabrication of the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach taken in the modelling activities has progressed extremely satisfactorily, with numerous conference and journal publications, along with two potential inventions concerning the catalyst layer. (LN)

  19. High-performing trauma teams: frequency of behavioral markers of a shared mental model displayed by team leaders and quality of medical performance.

    Science.gov (United States)

    Johnsen, Bjørn Helge; Westli, Heidi Kristina; Espevik, Roar; Wisborg, Torben; Brattebø, Guttorm

    2017-11-10

    High-quality team leadership is important for the outcome of medical emergencies. However, the behavioral markers of leadership are not well defined. The present study investigated the effect of the frequency of behavioral markers of shared mental models (SMM) on the quality of medical management. Training video recordings of 27 trauma teams simulating emergencies were analyzed according to the team leader's frequency of shared-mental-model behavioral markers. The results showed a positive correlation between quality of medical management and leaders sharing information without an explicit demand for the information ("push" of information), communicating their situational awareness (SA), and demonstrating implicit supporting behavior. When separating the sample into higher- versus lower-performing teams, the higher-performing teams had leaders who displayed a greater frequency of "push" of information and of communication of SA and supportive behavior. No difference was found for the behavioral marker of team initiative, measured as bringing up suggestions to other team members. The results of this study emphasize the team leader's role in initiating and updating a team's shared mental model. Team leaders should also set expectations for acceptable interaction patterns (e.g., promoting information exchange) and create a team climate that encourages behaviors such as mutual performance monitoring, backup behavior, and adaptability, to enhance the SMM.

  20. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    Science.gov (United States)

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.

  1. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice, and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  2. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  3. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on hybrid shared-memory BSP. An optimized data-sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements over previous work.
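
    To make the merge-phase idea concrete, here is a hedged toy illustration: two map/reduce pipelines run over shared input, and an extra merge step joins their reduced outputs per key. This is a sequential stand-in for the concept, not the MaMR runtime or its API.

```python
# Two map/reduce pipelines over the same shared records, then a merge phase
# that combines the reduced outputs per key (the step MaMR adds to MapReduce).
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)
    for rec in records:
        for key, value in map_fn(rec):
            groups[key].append(value)
    return {k: reduce_fn(vs) for k, vs in groups.items()}

samples = [("Fe", 7.87), ("Al", 2.70), ("Fe", 7.90), ("Al", 2.72)]
mean_density = run_mapreduce(samples, lambda r: [(r[0], r[1])],
                             lambda vs: sum(vs) / len(vs))
counts = run_mapreduce(samples, lambda r: [(r[0], 1)], sum)

# Merge phase: join the two reduced outputs key by key.
merged = {k: (mean_density[k], counts[k]) for k in mean_density}
print(merged)
```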

  4. Accuracy of W' Recovery Kinetics in High Performance Cyclists - Modelling Intermittent Work Capacity.

    Science.gov (United States)

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-10-16

    With knowledge of an individual's critical power (CP) and W', the SKIBA 2 model provides a framework with which to track W' balance during intermittent high-intensity work bouts. There are fears the time constant controlling the recovery rate of W' (τW') may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to the athletes' CPs (DCP). Using the framework of the SKIBA 2 model, the τW' values were modified for each protocol to achieve an accurate W' at volitional exhaustion. Modified τW' values were compared to equivalent SKIBA 2 τW' values to assess the difference in recovery rates for this population. Plotting modified τW' values against DCP showed the adjusted relationship between work-rate and recovery-rate. Comparing modified τW' values against the SKIBA 2 τW' values showed a negative bias of 112±46 s (mean±95% CL), suggesting athletes recovered W' faster than predicted by SKIBA 2 (p=0.0001). The modified τW'-to-DCP relationship was best described by a power function: τW' = 2287.2·DCP^(-0.688) (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists as it under-predicts the recovery rate of W'. The modified τW' equation presented will require validation, but appears more appropriate for high performance athletes. Individual τW' relationships may be necessary in order to maximise the model's validity.
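    The fitted power function above is concrete enough to sketch in code. The snippet below tracks W' balance using the integral form of the SKIBA W'bal model (an assumption; the abstract does not spell out the tracking equation) together with the modified time constant; the power trace, CP and W' values are invented.

```python
import numpy as np

def tau_wprime(d_cp):
    """Modified recovery time constant (s) from the abstract's fit."""
    return 2287.2 * d_cp ** -0.688

def wprime_balance(power, cp, wprime0, dt=1.0):
    """W' balance (J) over a 1 Hz power trace, integral W'bal form."""
    power = np.asarray(power, dtype=float)
    recovery = power[power < cp]
    d_cp = cp - recovery.mean() if recovery.size else 1.0  # mean recovery depth
    tau = tau_wprime(d_cp)
    spent = np.maximum(power - cp, 0.0) * dt               # W' expended per step
    t = np.arange(len(power)) * dt
    w_bal = np.empty(len(power))
    for i in range(len(power)):
        # every past expenditure decays exponentially with its age
        w_bal[i] = wprime0 - np.sum(spent[:i + 1] * np.exp(-(t[i] - t[:i + 1]) / tau))
    return w_bal

# Hypothetical intermittent bout: 2 min hard, 1 min easy, 2 min hard.
power = [400] * 120 + [150] * 60 + [400] * 120
bal = wprime_balance(power, cp=300.0, wprime0=20000.0)
print(f"tau = {tau_wprime(150.0):.0f} s; final W'bal = {bal[-1]:.0f} J")
```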

  5. Governing highly performing lean team behaviors : A mixed-methods longitudinal study

    NARCIS (Netherlands)

    Van Dun, Desirée H.; Wilderom, Celeste P.M.

    2015-01-01

    Work teams go through multiple performance cycles; initially highly performing teams may experience a decline in subsequent performance and vice-versa. This inductive study focuses on team-behavioral and contextual predictors of high lean team performance. Rooted in both the IMOI model and reviewing

  6. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    Science.gov (United States)

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high Tc superconductor are predicted using a two-fluid-moment method model, in order to determine the effects of finite conductivity on the performance of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  7. Experimental Evaluation for the Microvibration Performance of a Segmented PC Method Based High Technology Industrial Facility Using 1/2 Scale Test Models

    Directory of Open Access Journals (Sweden)

    Sijun Kim

    2017-01-01

    Full Text Available The precast concrete (PC) method used in the construction process of high technology industrial facilities is limited when applied to those with greater span lengths, due to the transport length restriction (maximum length of 15~16 m in Korea) set by traffic laws. In order to resolve this, this study introduces a structural system with a segmented PC system, and a 1/2 scale model with a width of 9000 mm (hereafter Segmented Model) is manufactured to evaluate vibration performance. Since a real vibrational environment cannot be reproduced for vibration testing using a scale model, a comparative analysis of their relative performances is conducted in this study. For this purpose, a 1/2 scale model with a width of 7200 mm (hereafter Nonsegmented Model) of a high technology industrial facility is additionally prepared using the conventional PC method. By applying the same experiment method for both scale models and comparing the results, the relative vibration performance of the Segmented Model is observed. Through impact testing, the natural frequencies of the two scale models are compared. Also, in order to analyze the estimated response induced by the equipment, the vibration responses due to the exciter are compared. The experimental results show that the Segmented Model exhibits similar or superior performances when compared to the Nonsegmented Model.

  8. Integrated plasma control for high performance tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  9. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe; Dalcin, Lisandro; Collier, Nathan; Calo, Victor M.

    2014-01-01

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation

  10. Program for aerodynamic performance tests of helium gas compressor model of the gas turbine high temperature reactor (GTHTR300)

    International Nuclear Information System (INIS)

    Takada, Shoji; Takizuka, Takakazu; Kunimoto, Kazuhiko; Yan, Xing; Itaka, Hidehiko; Mori, Eiji

    2003-01-01

    Research and development program for helium gas compressor aerodynamics was planned for the power conversion system of the Gas Turbine High Temperature Reactor (GTHTR300). The axial compressor with polytropic efficiency of 90% and surge margin more than 30% was designed with 3-dimensional aerodynamic design. Performance and surge margin of the helium gas compressor tend to be lower due to the higher boss ratio, which makes the tip clearance wide relative to the blade height, as well as due to a larger number of stages. The compressor was designed on the basis of methods and data for the aerodynamic design of industrial open-cycle gas turbines. To validate the design of the helium gas compressor of the GTHTR300, aerodynamic performance tests were planned, and a 1/3-scale, 4-stage compressor model was designed. In the tests, the performance data of the helium gas compressor model will be acquired by using helium gas as a working fluid. The maximum design pressure at the model inlet is 0.88 MPa, which allows the Reynolds number to be sufficiently high. The present study was entrusted by the Ministry of Education, Culture, Sports, Science and Technology of Japan. (author)

  11. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high......, modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%.
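    For context, the PVWatts DC power equation that such monitoring typically relies on fits in a few lines; the module parameters below are illustrative defaults, not values from the paper.

```python
# PVWatts DC power: P_dc = (G_poa / G_ref) * P_dc0 * (1 + gamma * (T_cell - T_ref))

def pvwatts_dc(g_poa, t_cell, p_dc0=250.0, gamma=-0.0035,
               g_ref=1000.0, t_ref=25.0):
    """DC power (W) of one module from plane-of-array irradiance (W/m^2)
    and cell temperature (deg C). All parameter values are assumed."""
    return (g_poa / g_ref) * p_dc0 * (1.0 + gamma * (t_cell - t_ref))

# At high irradiance the model has few free parameters, which is consistent
# with the abstract's point that a few hundred samples suffice for ~1-2%.
print(f"{pvwatts_dc(g_poa=950.0, t_cell=45.0):.1f} W")  # -> ~220.9 W
```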

  12. Toward High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design...

  13. Towards high performance in industrial refrigeration systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design...

  14. Performance of high-rate gravel-packed oil wells

    Energy Technology Data Exchange (ETDEWEB)

    Unneland, Trond

    2001-05-01

    Improved methods for the prediction, evaluation, and monitoring of performance in high-rate cased-hole gravel-packed oil wells are presented in this thesis. The ability to predict well performance prior to the gravel-pack operations, evaluate the results after the operation, and monitor well performance over time has been improved. This lifetime approach to performance analysis of gravel-packed oil wells contributes to increased oil production and field profitability. First, analytical models available for prediction of performance in gravel-packed oil wells are reviewed, with particular emphasis on high-velocity flow effects. From the analysis of field data from three North Sea oil fields, improved and calibrated cased-hole gravel-pack performance prediction models are presented. The recommended model is based on serial flow through formation sand and gravel in the perforation tunnels. In addition, new correlations for high-velocity flow in high-rate gravel-packed oil wells are introduced. Combined, these improve performance prediction for gravel-packed oil wells, and specific areas can be targeted for optimized well design. Next, limitations in the current methods and alternative methods for evaluation and comparison of well performance are presented. The most widely used of these, the skin factor, remains a convenient and important parameter. However, using the skin concept in direct comparisons between wells with different reservoir properties may result in misleading or even invalid conclusions. A discussion of the parameters affecting the skin value, with a clarification of limitations, is included. A methodology for evaluation and comparison of gravel-packed well performance is presented, and this includes the use of results from production logs and the use of effective perforation tunnel permeability as a parameter. This contributes to optimized operational procedures from well to well and from field to field. Finally, the data sources available for
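    The recommended serial-flow picture lends itself to a small sketch: total drawdown across a perforation tunnel as the sum, over the formation-sand and gravel segments, of a Darcy term plus a Forchheimer high-velocity term. All fluid and rock properties below are invented for illustration and are not the thesis data.

```python
def segment_dp(q, area, length, perm, beta, mu=1e-3, rho=850.0):
    """Pressure drop (Pa) across one segment: Darcy + Forchheimer terms."""
    v = q / area                                   # superficial velocity, m/s
    return (mu / perm) * v * length + beta * rho * v ** 2 * length

q = 1.8e-5        # m^3/s through one perforation (assumed)
area = 3.0e-4     # m^2 tunnel cross-section (assumed)

dp_sand = segment_dp(q, area, length=0.03, perm=1e-12, beta=1e7)
dp_gravel = segment_dp(q, area, length=0.25, perm=1e-10, beta=1e6)
# With these toy numbers the quadratic (high-velocity) term is already a
# sizeable share of the sand drop -- the effect the correlations target.
print(f"sand: {dp_sand / 1e5:.1f} bar, gravel: {dp_gravel / 1e5:.2f} bar, "
      f"total: {(dp_sand + dp_gravel) / 1e5:.1f} bar")
```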

  15. Optical Thermal Characterization Enables High-Performance Electronics Applications

    Energy Technology Data Exchange (ETDEWEB)

    2016-02-01

    NREL developed a modeling and experimental strategy to characterize thermal performance of materials. The technique provides critical data on thermal properties with relevance for electronics packaging applications. Thermal contact resistance and bulk thermal conductivity were characterized for new high-performance materials such as thermoplastics, boron-nitride nanosheets, copper nanowires, and atomically bonded layers. The technique is an important tool for developing designs and materials that enable power electronics packaging with small footprint, high power density, and low cost for numerous applications.

  16. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Science.gov (United States)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphic Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesive forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure

  17. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  18. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  19. Bioconversion of red ginseng saponins in the gastro-intestinal tract in vitro model studied by high-performance liquid chromatography-high resolution Fourier transform ion cyclotron resonance mass spectrometry

    NARCIS (Netherlands)

    Kong, H.; Wang, M.; Venema, K.; Maathuis, A.; Heijden, R. van der; Greef, J. van der; Xu, G.; Hankemeier, T.

    2009-01-01

    A high-performance liquid chromatography-high resolution Fourier transform ion cyclotron resonance mass spectrometry (HPLC-FTICR-MS) method was developed to investigate the metabolism of ginsenosides in in vitro models of the gastro-intestinal tract. The metabolites were identified by

  20. Simulations of KSTAR high performance steady state operation scenarios

    International Nuclear Information System (INIS)

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities of steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to that of KSTAR, is investigated using the experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: β_N above 3, H_98(y,2) up to 2.0, f_BS up to 0.76 and f_NI equal to 1.0. In this work, a realistic density profile is newly introduced for predictive simulations by employing the scaling law of a density peaking factor. The influence of the current ramp-up scenario and the transport model is discussed with respect to the fusion performance and non-inductive current drive fraction in the transport simulations. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase produce a strong effect on the q-profile, the fusion performance and also on the non-inductive current drive fraction in the current flattop phase. A criterion in terms of q_min is found to establish ITER-relevant steady state operation scenarios. This will provide a guideline for designing the current ramp-up phase in KSTAR. It is observed that the transport model also affects the predicted values of fusion performance as well as the non-inductive current drive fraction. The Weiland transport model predicts the highest fusion performance as well as non-inductive current drive fraction in KSTAR. In contrast, the GLF23 model exhibits the lowest ones. ITER-relevant advanced scenarios cannot be obtained with the GLF23 model in the conditions given in this work

  1. Determinants of Students Academic Performance in Senior High ...

    African Journals Online (AJOL)

    A binary logit model is used to investigate the determinants of students' performance in the final high school examination. Questionnaires were administered to a sample of 1,129 final year students (614 boys and 515 girls) in ten senior high schools (SHSs) during the 2008/2009 academic year. Respondents were requested ...
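    For readers unfamiliar with the method, the sketch below fits a binary logit of the kind described, on synthetic data; the covariates and coefficients are invented, and only the sample size matches the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1129                                   # sample size reported above
study_hours = rng.normal(15, 5, n)         # hypothetical covariate
female = rng.integers(0, 2, n)             # hypothetical covariate

# Hypothetical data-generating process for the pass/fail outcome:
logit = -3.0 + 0.2 * study_hours + 0.1 * female
passed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([study_hours, female]))
model = sm.Logit(passed.astype(float), X).fit(disp=False)
print(model.params)   # recovered coefficients approximate the true ones
```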

  2. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    International Nuclear Information System (INIS)

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that gives deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore quantitatively how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach
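    A toy flavour of the band-to-band tunneling ingredient can be given in a few lines: a WKB estimate assuming an elliptic imaginary dispersion inside the gap and a uniform junction field. This is a simplification for illustration, not the paper's calibrated compact model, and all material numbers are assumed.

```python
import numpy as np

hbar = 1.0546e-34   # J*s
m0 = 9.109e-31      # kg
q = 1.602e-19       # C

Eg = 0.36 * q       # bandgap (J); InAs-like value, assumed
m_eff = 0.026 * m0  # tunneling effective mass, assumed
F = 1e8             # uniform junction field, V/m, assumed

# Midgap decay constant of a two-band-like dispersion (assumed form):
kappa_max = np.sqrt(m_eff * Eg / 2.0) / hbar

# For an elliptic kappa(E), the area under kappa from 0 to Eg is
# (pi/4) * Eg * kappa_max; with a uniform field, dx = dE / (qF),
# so the WKB action 2 * integral(kappa dx) becomes:
action = 2.0 * (np.pi / 4.0) * Eg * kappa_max / (q * F)
print(f"WKB band-to-band tunneling probability ~ {np.exp(-action):.2f}")
```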

  3. Performance of advanced self-shielding models in DRAGON Version4 on analysis of a high conversion light water reactor lattice

    International Nuclear Information System (INIS)

    Karthikeyan, Ramamoorthy; Hebert, Alain

    2008-01-01

    A high conversion light water reactor lattice has been analysed using the code DRAGON Version4. This analysis was performed to test the performance of the advanced self-shielding models incorporated in DRAGON Version4. The self-shielding models are broadly classified into two groups - 'equivalence in dilution' and 'subgroup approach'. Under the 'equivalence in dilution' approach we have analysed the generalized Stamm'ler model with and without Nordheim model and Riemann integration. These models have been analysed also using the Livolant-Jeanpierre normalization. Under the 'subgroup approach', we have analysed Statistical self-shielding model based on physical probability tables and Ribon extended self-shielding model based on mathematical probability tables. This analysis will help in understanding the performance of advanced self-shielding models for a lattice that is tight and has a large fraction of fissions happening in the resonance region. The nuclear data for the analysis was generated in-house. NJOY99.90 was used for generating libraries in DRAGLIB format for analysis using DRAGON and A Compact ENDF libraries for analysis using MCNP5. The evaluated datafiles were chosen based on the recommendations of the IAEA Co-ordinated Research Project on the WIMS Library Update Project. The reference solution for the problem was obtained using Monte Carlo code MCNP5. It was found that the Ribon extended self-shielding model based on mathematical probability tables using correlation model performed better than all other models

  4. Cavitation performance improvement of high specific speed mixed-flow pump

    International Nuclear Information System (INIS)

    Chen, T; Sun, Y B; Wu, D Z; Wang, L Q

    2012-01-01

    Cavitation performance improvement of large hydraulic machinery such as pumps and turbines has been a hot topic for decades. During the design process of pumps, in order to minimize size, weight and cost, centrifugal and mixed-flow pump impellers are required to operate at the highest possible rotational speed. The rotational speed is limited by the phenomenon of cavitation. The hydraulic model of a high-speed mixed-flow pump with large flow rate and high pumping head, designed by the traditional method, typically exhibits poor cavitation performance. In this paper, on the basis of the same hydraulic design parameters, two hydraulic models of a high-speed mixed-flow pump were designed using different methods. In order to investigate the cavitation and hydraulic performance of the two models, computational fluid dynamics (CFD) was adopted for internal flow simulation of the high specific speed mixed-flow pump. Based on the results of the numerical simulation, the influences of the impeller parameters and three-dimensional configuration on the pressure distribution of the blades' suction surfaces were analyzed. The numerical simulation results show a better pressure distribution and lower pressure drop around the leading edge of the improved model. The research results could provide a reference for the design and optimization of anti-cavitation blades.

  5. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing is used judiciously for improving the accuracy of predictions.

  6. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  7. Using a High-Performance Planning Model to Increase Levels of Functional Effectiveness Within Professional Development.

    Science.gov (United States)

    Winter, Peggi

    2016-01-01

    Nursing professional practice models continue to shape how we practice nursing by putting families and members at the heart of everything we do. Faced with enormous challenges around healthcare reform, models create frameworks for practice by unifying, uniting, and guiding our nurses. The Kaiser Permanente Practice model was developed to ensure consistency for nursing practice across the continuum. Four key pillars support this practice model and the work of nursing: quality and safety, leadership, professional development, and research/evidence-based practice. These four pillars form the foundation that makes transformational practice possible and aligns nursing with Kaiser Permanente's mission. The purpose of this article is to discuss the pillar of professional development and the components of the Nursing Professional Development: Scope and Standards of Practice model (American Nurses Association & National Nursing Staff Development Organization, 2010) and place them in a five-level development framework. This process allowed us to identify the current organizational level of practice, prioritize each nursing professional development component, and design an operational strategy to move nursing professional development toward a level of high performance. This process is suggested for nursing professional development specialists.

  8. Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    Irradiation tests of HANA claddings were conducted, and a series of evaluations of next-generation HANA claddings, covering both in-pile and out-of-pile performance, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than required by the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply the HANA alloys, originally developed for claddings, to spacer grids. 250 model alloys for the next-generation claddings were designed and manufactured over four iterations and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to foreign advanced claddings. We established manufacturing conditions controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps

  9. Brain inspired high performance electronics on flexible silicon

    KAUST Repository

    Sevilla, Galo T.; Rojas, Jhonathan Prieto; Hussain, Muhammad Mustafa

    2014-01-01

    The brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state-of-the-art silicon industry components

  10. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies

  11. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  12. ELMs IN DIII-D HIGH PERFORMANCE DISCHARGES

    International Nuclear Information System (INIS)

    TURNBULL, A.D; LAO, L.L; OSBORNE, T.H; SAUTER, O; STRAIT, E.J; TAYLOR, T.S; CHU, M.S; FERRON, J.R; GREENFIELD, C.M; LEONARD, A.W; MILLER, R.L; SNYDER, P.B; WILSON, H.R; ZOHM, H

    2003-01-01

    A new understanding of edge localized modes (ELMs) in tokamak discharges is emerging [P.B. Snyder, et al., Phys. Plasmas, 9, 2037 (2002)], in which the ELM is an essentially ideal magnetohydrodynamic (MHD) instability and the ELM severity is determined by the radial width of the linearly unstable MHD kink modes. A detailed comparative study of how far the respective linear instabilities penetrate into the core in a standard DIII-D ELMing high confinement mode (H-mode) discharge and in two relatively high performance discharges shows that the latter are also encompassed within the framework of the new model. These instabilities represent the key limiting factor in extending the high performance of these discharges. In the standard ELMing H-mode, the MHD instabilities are highly localized in the outermost few percent of the flux surfaces and the ELM is benign, causing only a small temporary drop in the energy confinement. In contrast, for both a very high confinement mode (VH-mode) and an H-mode with a broad internal transport barrier (ITB) extending over the entire core and coalesced with the edge transport barrier, the linearly unstable modes penetrate well into the mid radius and the corresponding consequences for global confinement are significantly more severe. The ELM accordingly results in an irreversible loss of the high performance

  13. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems, and both the algorithms and their implementation are treated as equally important. Regarding the implementation, a novel strategy for the dense linear algebra routines in embedded optimization is proposed, aiming at improving computational performance in the case of small matrices. Regarding the algorithms, they are built on top of the proposed linear algebra and tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....
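    A generic example of the kind of structure exploitation at stake is the textbook "condensed" MPC formulation, in which the horizon QP is reduced to the inputs and solved with small dense factorizations; the sketch below is a standard construction, not the thesis' tailored routines.

```python
import numpy as np

# Double-integrator-like system x_{k+1} = A x_k + B u_k (toy numbers).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])                  # state weight
R = np.array([[0.01]])                   # input weight
N = 20                                   # horizon length
nx, nu = B.shape

# Prediction matrices: X = Phi x0 + Gamma U (stacked over the horizon).
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = \
            np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = Gamma.T @ Qbar @ Gamma + Rbar        # condensed Hessian, only N*nu wide

x0 = np.array([1.0, 0.0])
g = Gamma.T @ Qbar @ (Phi @ x0)
U = np.linalg.solve(H, -g)               # unconstrained optimal input sequence
print("first control move:", U[:nu])
```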

  14. Biology learning evaluation model in Senior High Schools

    Directory of Open Access Journals (Sweden)

    Sri Utari

    2017-06-01

    Full Text Available The aim of the study was to develop a Biology learning evaluation model in senior high schools, referring to the research and development model of Borg & Gall and the logic model. The evaluation model included the components of input, activities, output and outcomes. The development procedure involved a preliminary study in the form of observation and theoretical review of Biology learning evaluation in senior high schools. Product development was carried out by designing an evaluation model, designing an instrument, piloting the instrument and performing implementation. The instrument pilot involved teachers and Grade XII students from senior high schools located in the city of Yogyakarta. For the data gathering, the researchers used an observation sheet, a questionnaire and a test. The questionnaire was applied in order to obtain information regarding teacher performance, learning performance, classroom atmosphere and scientific attitude; the test was applied in order to obtain information regarding mastery of Biology concepts. For the analysis of the instrument's construct, the researchers performed confirmatory factor analysis by means of Lisrel 0.80 software; the results showed that the evaluation instrument was valid and reliable. The construct validity was between 0.43-0.79 while the reliability of the measurement model was between 0.88-0.94. Finally, the model feasibility test showed that the theoretical model was supported by the empirical data.

  15. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines

  16. 3D printed high performance strain sensors for high temperature applications

    Science.gov (United States)

    Rahman, Md Taibur; Moser, Russell; Zbib, Hussein M.; Ramana, C. V.; Panat, Rahul

    2018-01-01

    Realization of high temperature physical measurement sensors, which are needed in many of the current and emerging technologies, is challenging due to the degradation of their electrical stability by drift currents, material oxidation, thermal strain, and creep. In this paper, for the first time, we demonstrate that 3D printed sensors show a metamaterial-like behavior, resulting in superior performance such as high sensitivity, low thermal strain, and enhanced thermal stability. The sensors were fabricated using silver (Ag) nanoparticles (NPs), using an advanced Aerosol Jet based additive printing method followed by thermal sintering. The sensors were tested under cyclic strain up to a temperature of 500 °C and showed a gauge factor of 3.15 ± 0.086, which is about 57% higher than that of those available commercially. The sensor thermal strain was also an order of magnitude lower than that of commercial gages for operation up to a temperature of 500 °C. An analytical model was developed to account for the enhanced performance of such printed sensors based on enhanced lateral contraction of the NP films due to the porosity, a behavior akin to cellular metamaterials. The results demonstrate the potential of 3D printing technology as a pathway to realize highly stable and high-performance sensors for high temperature applications.
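    The quoted gauge factor translates directly into measurable resistance changes; the arithmetic below uses the paper's GF of 3.15 together with an assumed commercial baseline of 2.0, which is consistent with the quoted ~57% gain.

```python
# Gauge factor definition: GF = (delta_R / R0) / strain

gf_printed = 3.15          # reported in the abstract
gf_commercial = 2.0        # assumed baseline implied by the ~57% comparison
strain = 1e-3              # 0.1% applied strain (illustrative)

delta_r_over_r = gf_printed * strain
print(f"fractional resistance change at 0.1% strain: {delta_r_over_r:.2%}")
print(f"relative sensitivity gain: {gf_printed / gf_commercial - 1:.1%}")
```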

  17. Crystal and molecular simulation of high-performance polymers.

    Science.gov (United States)

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals-even at very large ring sizes-and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  18. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., realtime data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).

  19. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of wireless powering of implanted devices from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, and it can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific field distribution pattern, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  20. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    unstructured nature of the mesh topology with the corresponding employed solution, based on space-filling curves, being analyzed and discussed. Intra-node parallelism is achieved through OpenMP for CPUs and CUDA for GPUs, depending on which kind of device the process is running on. Here the main difficulty is associated with the Object-Oriented approach, where the presence of complex data structures can degrade model performance considerably. STAV-2D now supports fully distributed and heterogeneous simulations where multiple different devices can be used to accelerate computation time. The advantages, shortcomings and specific solutions for the employed unified Object-Oriented approach, where the source code for CPU and GPU has the same compilation units (no device-specific branches as seen in available models), are discussed and quantified with a thorough scalability and performance analysis. The assembled parallel model is expected to achieve faster than real-time simulations for high resolutions (from meters to sub-meter) in large-scale problems (from cities to watersheds), effectively bridging the gap between detailed and timely simulation results. Acknowledgements This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 and Doctoral Grant SFRH/BD/97933/2013 granted by the National Foundation for Science and Technology (FCT). References Canelas, R.; Murillo, J. & Ferreira, R.M.L. (2013), Two-dimensional depth-averaged modelling of dam-break flows over mobile beds. Journal of Hydraulic Research, 51(4), 392-407. Conde, D. A. S.; Baptista, M. A. V.; Sousa Oliveira, C. & Ferreira, R. M. L. (2013), A shallow-flow model for the propagation of tsunamis over complex geometries and mobile beds, Nat. Hazards and Earth Syst. Sci., 13, 2533-2542. Conde, D. A. S.; Telhado, M. J.; Viana Baptista, M. A. & Ferreira, R. M. L. (2015) Severity and exposure associated with tsunami actions in

  1. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  2. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC sponsored LLL assessment procedure for MC and A systems which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance
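    The detection/false-alarm trade-off described above can be sketched with a simple Gaussian threshold model; the noise level and threshold below are invented, not the report's camera parameters.

```python
import math

def q_tail(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

sigma = 0.015             # noise std dev, fraction of full-scale (assumed)
threshold = 4.5 * sigma   # decision threshold set for a low false-alarm rate

p_fa = q_tail(threshold / sigma)               # no intruder present
for delta in (0.05, 0.10, 0.15):               # intruder-induced change
    p_d = q_tail((threshold - delta) / sigma)  # intruder present
    print(f"Delta={delta:.2f}: Pd={p_d:.4f}, Pfa={p_fa:.2e}")
```

    With these toy numbers a 10% grey-level change is detected with probability ~0.98 while the per-decision false-alarm probability stays in the 1e-6 range, qualitatively matching the behaviour reported above.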

  3. Low Cost High Performance Nanostructured Spectrally Selective Coating

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Sungho [Univ. of California, San Diego, CA (United States)

    2017-04-05

    Sunlight-absorbing coatings are a key enabling technology for high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet all of the following three stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate black oxide nanoparticles and coat them onto the solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto the absorber metal surface. The material composition, size distribution and morphology of the nanoparticles are guided by numeric modeling. Optical and thermal properties have been both modeled and measured. High temperature durability has been achieved by using nanocomposites and high temperature annealing. Mechanical durability under thermal cycling has also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentrating solar power (CSP) plants.

  4. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    Science.gov (United States)

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization and analysis of geospatial data in an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrological Unit (HUC), or uploading a custom polygon. Both Web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, and it also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.

  5. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a latin hypercube sampling. Ten performance criteria, including Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
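    The performance criteria and the regression-tree step can both be illustrated compactly; the sketch below computes NSE and KGE and fits a tree on synthetic parameter samples, which stand in for the Latin hypercube model runs.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from its three components r, alpha, beta."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, 365)                     # fake daily discharge

params = rng.random((200, 3))                      # three sampled parameters
scores = np.array([nse(obs, obs * (0.5 + p[0]) + rng.normal(0, 2.0, 365))
                   for p in params])               # only parameter 0 matters

tree = DecisionTreeRegressor(max_depth=3).fit(params, scores)
print("example KGE:", kge(obs, obs * 1.1))         # r=1, alpha=beta=1.1
print("tree importances:", tree.feature_importances_)
```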

  6. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  7. Dynamic performance of a high-temperature PEM (proton exchange membrane) fuel cell – Modelling and fuzzy control of purging process

    International Nuclear Information System (INIS)

    Zhang, Caizhi; Liu, Zhitao; Zhang, Xiongwen; Chan, Siew Hwa; Wang, Youyi

    2016-01-01

    To improve the fuel utilization of an HT-PEMFC (high-temperature proton exchange membrane fuel cell), which normally operates in dead-end mode, properly timed periodic purging is necessary to flush out the water vapour that accumulates in the anode flow-field; otherwise the performance of the HT-PEMFC would drop gradually. In this paper, a semi-empirical dynamic voltage model of the HT-PEMFC is developed for controller design purposes by fitting experimental data, and it is validated against experimental results. Then, a fuzzy controller is designed to schedule the purging based on the obtained model. According to the results, the developed model reflects the transient characteristics of the HT-PEMFC voltage well, and the fuzzy controller offers good purge-scheduling performance under uncertain load demands. - Highlights: • A semi-empirical dynamic voltage model of HT-PEMFC is developed for control design. • The model is developed via fitting and validated with experimental results. • A fuzzy controller is designed to schedule the purging based on the obtained model.
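    To make the fuzzy-scheduling idea concrete, the toy below maps load current and an estimated anode water accumulation to a purge-urgency score using triangular membership functions and weighted-average defuzzification. It is a generic Mamdani-style sketch, not the paper's controller, and all membership ranges and rule weights are assumed.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def purge_urgency(load_amps, water_est):
    load_high = tri(load_amps, 10, 20, 30)
    load_low = tri(load_amps, 0, 5, 12)
    water_high = tri(water_est, 0.5, 1.0, 1.5)
    water_low = tri(water_est, 0.0, 0.2, 0.6)

    # Rule base (consequent weights assumed): more water / load -> purge sooner.
    rules = [(min(water_high, load_high), 1.0),
             (min(water_high, load_low), 0.7),
             (min(water_low, load_high), 0.4),
             (min(water_low, load_low), 0.1)]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den                     # weighted-average defuzzification

print(f"{purge_urgency(load_amps=25, water_est=1.1):.2f}")  # ~1.00: purge now
```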

  8. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs given at the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  9. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  10. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  11. Business Models of High Performance Computing Centres in Higher Education in Europe

    Science.gov (United States)

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  12. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  13. On the increase of predictive performance with high-level data fusion

    International Nuclear Information System (INIS)

    Doeswijk, T.G.; Smilde, A.K.; Hageman, J.A.; Westerhuis, J.A.; Eeuwijk, F.A. van

    2011-01-01

    The combination of different data sources for classification purposes, also called data fusion, can be done at different levels: low-level, i.e. concatenating data matrices; medium-level, i.e. concatenating data matrices after feature selection; and high-level, i.e. combining model outputs. In this paper the predictive performance of high-level data fusion is investigated. Partial least squares is used on each of the data sets, with dummy variables representing the classes as response variables. Based on the estimated response ŷ_j for data set j and class k, a Gaussian distribution p(g_k | ŷ_j) is fitted. A simulation study is performed that shows the theoretical performance of high-level data fusion for two classes and two data sets. Within-group correlations of the predicted responses of the two models, and differences between the predictive ability of each of the separate models and of the fused models, are studied. Results show that the error rate is always less than or equal to that of the best-performing subset and can theoretically approach zero. Negative within-group correlations always improve the predictive performance. However, if the data sets have a joint basis, as with metabolomics data, this is not likely to happen. For equally performing individual classifiers the best results are expected for small within-group correlations. Fusion of a non-predictive classifier with a classifier that exhibits discriminative ability leads to increased predictive performance if the within-group correlations are strong. An example with real-life data shows the applicability of the simulation results.
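
    A compact sketch of the scheme described above, on assumed toy data: PLS is fitted per data block with a class-dummy response, a Gaussian is fitted to each block's predicted response per class, and classification combines the per-block likelihoods.

    # High-level fusion sketch: per-block PLS, Gaussian p(yhat | class),
    # classification by the product of per-block likelihoods.
    import numpy as np
    from scipy.stats import norm
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n = 100
    y = rng.integers(0, 2, n)                       # two classes
    X1 = y[:, None] + rng.normal(0, 1.0, (n, 20))   # data block 1
    X2 = y[:, None] + rng.normal(0, 2.0, (n, 30))   # data block 2 (noisier)

    models, gauss = [], []
    for X in (X1, X2):
        pls = PLSRegression(n_components=2).fit(X, y.astype(float))
        yhat = pls.predict(X).ravel()
        # Gaussian parameters of the predicted response within each class
        gauss.append([(yhat[y == k].mean(), yhat[y == k].std()) for k in (0, 1)])
        models.append(pls)

    def classify(x1, x2):
        score = np.ones(2)
        for pls, params, x in zip(models, gauss, (x1, x2)):
            yh = pls.predict(x.reshape(1, -1)).ravel()[0]
            for k, (mu, sd) in enumerate(params):
                score[k] *= norm.pdf(yh, mu, sd)    # combine model outputs
        return int(np.argmax(score))

    print(classify(X1[0], X2[0]), "true:", y[0])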

  14. Models for Automated Tube Performance Calculations

    International Nuclear Information System (INIS)

    Brunkhorst, C.

    2002-01-01

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance
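
    The paper's own model is not reproduced in the abstract; as a hedged stand-in, the classical 3/2-power (Child-Langmuir) triode approximation shows the kind of closed-form replacement for graphical operating curves that such a subroutine can provide. The perveance K and amplification factor mu below are illustrative values.

    # Triode plate current from the 3/2-power law (illustrative parameters).
    def plate_current(v_grid, v_plate, K=2e-6, mu=20.0):
        """Plate current [A] for grid/plate voltages [V]; cutoff below zero drive."""
        drive = v_grid + v_plate / mu
        return K * drive ** 1.5 if drive > 0 else 0.0

    # Sweep operating points instead of reading them off published curves:
    for vg in (-40.0, -20.0, 0.0):
        print(vg, plate_current(vg, 1000.0))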

  15. Structure-based capacitance modeling and power loss analysis for the latest high-performance slant field-plate trench MOSFET

    Science.gov (United States)

    Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro

    2018-04-01

    Field-plate trench MOSFETs (FP-MOSFETs), with their ultralow on-resistance and very low gate–drain charge, are currently the mainstream choice for high-performance applications, and their advancement as low-voltage silicon power devices is continuing. However, owing to their structure, their output capacitance (C_oss), which drives the main power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for calculating power loss easily under various conditions. The C_oss curve is modeled with appropriate equations as three separate components. Output charge (Q_oss) and stored energy (E_oss) calculated with the model correspond well to technology computer-aided design (TCAD) simulation, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss is sufficiently suppressed; instead, the Q_oss-related loss increases with switching frequency. This analysis reveals that Q_oss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
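
    For reference, the quantities named above follow from the voltage-dependent output capacitance by the standard integrals

    \[
      Q_{\mathrm{oss}}(V) \;=\; \int_{0}^{V} C_{\mathrm{oss}}(v)\,\mathrm{d}v,
      \qquad
      E_{\mathrm{oss}}(V) \;=\; \int_{0}^{V} v\, C_{\mathrm{oss}}(v)\,\mathrm{d}v,
    \]

    and under hard switching the stored energy is dissipated once per cycle, so the C_oss-related loss term scales approximately as P ≈ E_oss · f_sw, consistent with the frequency dependence reported in the abstract.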

  16. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  17. Modeling of long High Voltage AC Underground

    DEFF Research Database (Denmark)

    Gudmundsdottir, Unnur Stella; Bak, Claus Leth; Wiechowski, W. T.

    2010-01-01

    cable models, perform highly accurate field measurements for validating the model and identifying possible disadvantages of the cable model. Furthermore the project suggests and implements improvements and validates them against several field measurements. It is shown in this paper how a new method...

  18. Research on Appraisal System of Procurator Performance by Using High-Order CFA Model

    Directory of Open Access Journals (Sweden)

    Yong-mao Huang

    2014-01-01

    Full Text Available The prosecutor is the main body of procuratorial organs. The performance appraisal system plays an important role in promoting the work efficiency of procurators. In this paper, we establish a performance appraisal system for procurators using a high-order confirmatory factor analysis method, and evaluate procurators' performance with a fuzzy comprehensive evaluation method based on 360-degree feedback. The results offer practical help for the performance management of procuratorial organs.

  19. Improving low-performing high schools: searching for evidence of promise.

    Science.gov (United States)

    Fleischman, Steve; Heppen, Jessica

    2009-01-01

    Noting that many of the nation's high schools are beset with major problems, such as low student reading and math achievement, high dropout rates, and an inadequate supply of effective teachers, Steve Fleischman and Jessica Heppen survey a range of strategies that educators have used to improve low-performing high schools. The authors begin by showing how the standards-based school reform movement, together with the No Child Left Behind Act requirement that underperforming schools adopt reforms supported by scientifically based research, spurred policy makers, educators, and researchers to create and implement a variety of approaches to attain improvement. Fleischman and Heppen then review a number of widely adopted reform models that aim to change "business as usual" in low-performing high schools. The models include comprehensive school reform programs, dual enrollment and early college high schools, smaller learning communities, specialty (for example, career) academies, charter high schools, and education management organizations. In practice, say the authors, many of these improvement efforts overlap, defying neat distinctions. Often, reforms are combined to reinforce one another. The authors explain the theories that drive the reforms, review evidence of their reforms' effectiveness to date, and suggest what it will take to make them work well. Although the reforms are promising, the authors say, few as yet have solid evidence of systematic or sustained success. In concluding, Fleischman and Heppen emphasize that the reasons for a high school's poor performance are so complex that no one reform model or approach, no matter how powerful, can turn around low-performing schools. They also stress the need for educators to implement each reform program with fidelity to its requirements and to support it for the time required for success. Looking to the future, the authors suggest steps that decision makers, researchers, and sponsors of research can take to promote

  20. Fatigue behaviour of high performance concretes for wind turbines; Ermuedungsverhalten von Hochleistungsbetonen in Windenergieanlagen

    Energy Technology Data Exchange (ETDEWEB)

    Lohaus, Ludger; Oneschkow, Nadja; Elsmeier, Kerstin; Huemme, Julian [Hannover Univ. (Germany). Inst. fuer Baustoffe

    2012-08-15

    New developments in the wind energy sector will lead to wind turbines with enormous capacities. As a result, the loads of the supporting structures are also increasing. For some time now, high performance concretes with self-compacting properties have been used in wind turbines for structural connections. Furthermore, slender foundations and prestressed concrete supporting structures made out of high-strength concrete are under development. In future, fatigue design of these high performance concretes is to be done according to the new fib-Model Code 2010. This code includes a new fatigue design model which enables a safe and economic fatigue design, even for high strength concrete. Extensive research with regard to the fatigue behaviour of different types of high performance concrete has been carried out at the Institute of Building Materials Science, Leibniz Universitaet Hannover. As part of these research activities, the influences of steel fibre reinforcement on the fatigue behaviour of high performance concretes are being investigated. In this paper, interim results of these investigations are presented and the potential for the practical applications of high performance concrete is discussed. The results of the conducted investigations are presented in comparison with the new fatigue design model of the fib-Model Code 2010. (orig.)

  1. Performance Model for High-Power Lithium Titanate Oxide Batteries based on Extended Characterization Tests

    DEFF Research Database (Denmark)

    Stroe, Ana-Irina; Swierczynski, Maciej Jozef; Stroe, Daniel Ioan

    2015-01-01

    Lithium-ion (Li-ion) batteries are found nowadays not only in portable/consumer electronics but also in more power-demanding applications, such as stationary renewable energy storage, automotive and back-up power supply, because of their superior characteristics in comparison to other energy...... storage technologies. Nevertheless, prior to being used in any of the aforementioned applications, a Li-ion battery cell must be intensively characterized and its behavior needs to be understood. This can be realized by performing extended laboratory characterization tests and developing Li-ion battery...... performance models. Furthermore, accurate performance models are necessary in order to analyze the behavior of the battery cell under different mission profiles by simulation, thus avoiding time- and cost-demanding real-life tests. This paper presents the development and the parametrization of a performance...
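
    The parametrization itself is truncated in this record. As a hedged illustration of the model family, the sketch below implements a first-order Thevenin equivalent circuit (OCV source, series resistance, one RC pair) of the kind commonly fitted from pulse-characterization tests; the parameter values and OCV curve are illustrative, not measured LTO data.

    # First-order Thevenin battery model: v = OCV(SOC) - R0*i - v_RC.
    import numpy as np

    def simulate_voltage(current, dt, soc0=0.5, q_ah=13.0,
                         r0=0.8e-3, r1=0.5e-3, c1=20e3):
        """Terminal voltage for a current profile (A, discharge positive)."""
        ocv = lambda soc: 2.0 + 0.5 * soc             # toy OCV(SOC) curve
        soc, v_rc, v_out = soc0, 0.0, []
        for i in current:
            soc -= i * dt / (q_ah * 3600.0)           # coulomb counting
            v_rc += dt * (i / c1 - v_rc / (r1 * c1))  # RC polarization dynamics
            v_out.append(ocv(soc) - r0 * i - v_rc)
        return np.array(v_out)

    profile = np.concatenate([np.full(60, 50.0), np.zeros(60)])  # 50 A pulse, rest
    print(simulate_voltage(profile, dt=1.0)[:3])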

  2. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    Science.gov (United States)

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy that uses carbonaceous reactant species and an intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present the performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation under conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model, including current density, fuel and oxidant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.
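
    The electrochemical model is not reproduced in the abstract; the anchor of any such model is the reversible (Nernst) potential, which for the H2/H2O couple reads

    \[
      E \;=\; E^{0}(T) \;+\; \frac{RT}{2F}\,
      \ln\!\left(\frac{p_{\mathrm{H_2}}\, p_{\mathrm{O_2}}^{1/2}}{p_{\mathrm{H_2O}}}\right).
    \]

    In fuel cell mode the operating voltage lies below E by the activation, ohmic and concentration overpotentials; in electrolysis mode it lies above, and the roundtrip electrical efficiency is bounded by the ratio of the two operating voltages at the chosen current densities.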

  3. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU-capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilize the vector programming model from NumCIL and map it to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups....

  4. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    This report summarizes the final product developed for the US DOE Small Business Innovation Research (SBIR) Phase II grant made to Engineering Mechanics Corporation of Columbus (Emc2) between April 16, 2014 and August 31, 2016 titled ‘Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures’. Many US companies have moved fabrication and production facilities off shore because of cheaper labor costs. A key aspect in bringing these jobs back to the US is the use of technology to render US-made fabrications more cost-efficient overall with higher quality. One significant advantage that has emerged in the US over the last two decades is the use of virtual design for fabrication of small and large structures in weld fabrication industries. Industries that use virtual design and analysis tools have reduced material part size, developed environmentally-friendly fabrication processes, improved product quality and performance, and reduced manufacturing costs. Indeed, Caterpillar Inc. (CAT), one of the partners in this effort, continues to have a large fabrication presence in the US because of the use of weld fabrication modeling to optimize fabrications by controlling weld residual stresses and distortions and improving fatigue, corrosion, and fracture performance. This report describes Emc2’s DOE SBIR Phase II final results to extend an existing, state-of-the-art software code, Virtual Fabrication Technology (VFT®), currently used to design and model large welded structures prior to fabrication - to a broader range of products with widespread applications for small and medium-sized enterprises (SMEs). VFT® helps control distortion, can minimize and/or control residual stresses, control welding microstructure, and pre-determine welding parameters such as weld-sequencing, pre-bending, thermal-tensioning, etc. VFT® uses material properties, consumable properties, etc. as inputs

  5. Distributed control software of high-performance control-loop algorithm

    CERN Document Server

    Blanc, D

    1999-01-01

    The majority of industrial cooling and ventilation plants require the control of complex processes, all of which are highly important for the operation of the machines. The stability and reliability of these processes are leading indicators of the quality of the service provided. The control system architecture, and its software structure as well, are therefore required to deliver high dynamic performance and robust behaviour. Intelligent systems based on PID or RST controllers are used for their high level of stability and accuracy. Designing and tuning these complex controllers requires the dynamic model of the plant to be known (generally obtained by identification) and the desired performance of the various control loops to be specified. The concept of a distributed control-algorithm software layer provides full automation facilities with well-adapted functionality and good performance, giving methodology, means and tools to master the dynamic process optimization an...
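
    As a hedged illustration of the loop-level building block (a sketch of a discrete PID controller with a simple anti-windup clamp; the RST variant is analogous): the gains below are placeholders, to be tuned from an identified plant model as the abstract describes.

    # Discrete positional PID with anti-windup clamping.
    class PID:
        def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.out_min, self.out_max = out_min, out_max
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, setpoint, measurement):
            err = setpoint - measurement
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            u = self.kp * err + self.ki * self.integral + self.kd * deriv
            if u > self.out_max or u < self.out_min:
                self.integral -= err * self.dt   # anti-windup: undo accumulation
                u = min(max(u, self.out_min), self.out_max)
            return u

    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
    print(pid.step(setpoint=18.0, measurement=21.5))  # cooling-valve command [%]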

  6. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  7. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  8. Optimal dynamic performance for high-precision actuators/stages

    International Nuclear Information System (INIS)

    Preissner, C.; Lee, S.-H.; Royston, T. J.; Shu, D.

    2002-01-01

    System dynamic performance of actuator/stage groups, such as those found in optical instrument positioning systems and other high-precision applications, is dependent upon both individual component behavior and the system configuration. Experimental modal analysis techniques were implemented to determine the six degree of freedom stiffnesses and damping for individual actuator components. These experimental data were then used in a multibody dynamic computer model to investigate the effect of stage group configuration. Running the computer model through the possible stage configurations and observing the predicted vibratory response determined the optimal stage group configuration. Configuration optimization can be performed for any group of stages, provided there is stiffness and damping data available for the constituent pieces

  9. A High Performance Backend for Array-Oriented Programming on Next-Generation Processing Units

    DEFF Research Database (Denmark)

    Lund, Simon Andreas Frimann

    The financial crisis, which started in 2008, spawned the HIPERFIT research center as a preventive measure against future financial crises. The goal of prevention is to be met by improving mathematical models for finance, the verifiable description of them in domain-specific languages...... and the efficient execution of them on high performance systems. This work investigates the requirements for, and the implementation of, a high performance backend supporting these goals. This involves an outline of the hardware available today, in the near future and how to program it for high performance....... The main challenge is to bridge the gaps between performance, productivity and portability. A declarative high-level array-oriented programming model is explored to achieve this goal and a backend implemented to support it. Different strategies to the backend design and application of optimizations...

  10. NEDO project reports. High performance industrial furnace development project - High temperature air combustion

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-21

    To reduce energy consumption, the NEDO project 'Developmental research on high efficiency industrial furnaces' was carried out from FY 1993 to FY 1999 by The Japan Industrial Furnaces Manufacturers Association; this paper outlines the details of the project. The industrial furnaces addressed in this R and D achieve a 30% reduction in energy consumption and approximately 50% NOx reduction, and were awarded the 9th Nikkei global environmental technology prize. On the combustion phenomena of high-temperature air combustion, the paper covers the characteristics of flames and the fundamentals of gaseous, liquid and solid fuel flames. On high-temperature air combustion models for simulation, it covers fluid dynamics and heat transfer models, and reaction and NOx models. On the impacts of high-temperature air combustion on the performance of industrial furnaces, it covers energy conservation and pollution reduction. As a guide for the design of high-efficiency industrial furnaces, it provides flow charts, conceptual designs, chart-based evaluation methods for heat balance and efficiency, combustion control systems, and the applicability of high-efficiency industrial furnaces. (NEDO)

  12. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive because of its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has developed rapidly. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs in the major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms is also presented. (topical review)

  13. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    Science.gov (United States)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited by the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine the maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
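
    A standard statement of the mixed-synthesis problem behind this trade-off (notation assumed here, not taken from the report) is

    \[
      \min_{K \ \mathrm{stabilizing}} \;
      \|T_{z_2 w}(K)\|_{2}
      \quad \text{subject to} \quad
      \|T_{z_\infty w}(K)\|_{\infty} \le \gamma,
    \]

    where T_{z_2 w} and T_{z_\infty w} are the closed-loop transfer functions from the disturbance w to the nominal-performance and robustness outputs, respectively. Sweeping gamma, as the homotopy algorithm does with its trajectory of gains, traces out the curve of achievable performance versus model error bound.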

  14. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data are largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  15. Drift-kinetic Alfven modes in high performance tokamaks

    International Nuclear Information System (INIS)

    Jaun, A.; Fasoli, A.F.; Testa, D.; Vaclavik, J.; Villard, L.

    2001-01-01

    The stability of fast-particle-driven Alfven eigenmodes is modeled in high performance tokamaks, successively with a conventional-shear, an optimized-shear and a tight-aspect-ratio plasma. A large bulk pressure yields global kinetic Alfven eigenmodes that are stabilized by mode conversion in the presence of a divertor. This suggests how conventional reactor scenarios could withstand significant pressure gradients from the fusion products. A large safety factor in the core (q0 > 2.5) in deeply shear-reversed configurations and a relatively large bulk ion Larmor radius in a low magnetic field can trigger global drift-kinetic Alfven eigenmodes that are unstable in high performance JET, NSTX and ITER plasmas. (author)

  16. Modeling attacking of high skills volleyball players

    Directory of Open Access Journals (Sweden)

    Vladimir Gamaliy

    2014-12-01

    Full Text Available Purpose: to determine model indicators of technical and tactical attacking actions of highly skilled volleyball players. Material and Methods: the study used statistical data from major international competitions: the Olympic Games 2012, World Championship 2010, World League 2010–2014 and European Championship 2010–2014; a total of 130 games were analysed. Methods used: analysis and generalization of the scientific and methodological literature, analysis of the competitive activity of highly skilled volleyball players, pedagogical observation, and modelling of technical and tactical attacking actions of highly skilled volleyball players. Results: the largest share of technical and tactical attacking actions belongs to the group tactic «supple movement», with an indicator of 21.3%. The smallest share belongs to the group tactic «flight level», with a model indicator of 5.4% and an efficiency of 3.4%. The power jump serve is used, according to the model parameters, in 51.6% of cases, planned targeted serves in 21.7% and planned reduced serves in 4.4%. Attacks performed from the back line have a model share of 20.8% and an efficiency of 13.7%. Conclusions: the indicators of technical and tactical attacking actions can be used as model values in the control system of the training and competitive process of highly skilled volleyball players.

  17. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of
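
    A hedged sketch of the coupling pattern follows; the function names and physics are placeholders (hypothetical, not the RST or MD APIs): MD supplies fluid properties at the simulator's current pressure and temperature, and those properties close the flow equations for the next step.

    # Outer coupling loop: MD-derived properties feed the reservoir step.
    def run_md_eos(pressure_pa, temperature_k, x_co2):
        """Placeholder for an MD estimate of mixture density/viscosity."""
        rho = 500.0 + 2.0e-6 * pressure_pa - 0.5 * (temperature_k - 320.0)
        mu = 5.0e-5
        return rho, mu

    def advance_reservoir(state, rho, mu, dt):
        """Placeholder for one implicit flow/transport step of the simulator."""
        state['pressure'] += dt * state['injection_rate'] / rho  # toy update
        return state

    state = {'pressure': 2.0e7, 'temperature': 330.0, 'injection_rate': 1.0}
    for step in range(10):
        rho, mu = run_md_eos(state['pressure'], state['temperature'], x_co2=0.05)
        state = advance_reservoir(state, rho, mu, dt=86400.0)
    print(state['pressure'])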

  18. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    Science.gov (United States)

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  19. High-performance heat pipes for heat recovery applications

    Science.gov (United States)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  20. Performance and Costs of Ductless Heat Pumps in Marine-Climate High-Performance Homes -- Habitat for Humanity The Woods

    Energy Technology Data Exchange (ETDEWEB)

    Lubliner, Michael [Washington State Univ., Pullman, WA (United States); Howard, Luke [Washington State Univ., Pullman, WA (United States); Hales, David [Washington State Univ., Pullman, WA (United States); Kunkle, Rick [Washington State Univ., Pullman, WA (United States); Gordon, Andy [Washington State Univ., Pullman, WA (United States); Spencer, Melinda [Washington State Univ., Pullman, WA (United States)

    2016-02-23

    This final Building America Partnership report focuses on the results of field testing, modeling, and monitoring of ductless mini-split heat pump hybrid heating systems in seven homes built and first occupied at various times between September 2013 and October 2014. The report also provides WSU documentation of high-performance home observations, lessons learned, and stakeholder recommendations for builders of affordable high-performance housing.

  1. Delighting the Customer: Creativity-Oriented High-Performance Work Systems, Frontline Employee Creative Performance, and Customer Satisfaction

    OpenAIRE

    Martinaityte, Ieva; Sacramento, Claudia; Aryee, Samuel

    2016-01-01

    Drawing on self-determination theory, we proposed and tested a cross-level model of how perceived creativity-oriented high-performance work systems (HPWS) influence customer satisfaction. Data were obtained from frontline employees (FLEs), their managers, and branch records of two organizations (retail bank and cosmetics) in Lithuania. Results of multilevel structural equation modeling analyses revealed partial support for our model. Although perceived creativity-oriented HPWS related to crea...

  2. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    Science.gov (United States)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models on high performance computers, and, with the advent of ubiquitous multicore processor systems, on practically every system, have long been accomplished with basic software tools: typically command-line compilers, debuggers and performance tools that have not changed substantially since the days of serial and early vector computers. However, model complexity, including the complexity added by modern message-passing libraries such as MPI, and the need for hybrid code models (such as OpenMP plus MPI) to take full advantage of high performance computers with an increasing core count per shared-memory node, have made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improving PTP. We are using a set of scientific applications, each with a variety of challenges, both to drive further improvements to each application and to understand shortcomings in Eclipse PTP from an application developer's perspective, building the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher-quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into

  3. Solving nonlinear, High-order partial differential equations using a high-performance isogeometric analysis framework

    KAUST Repository

    Cortes, Adriano Mauricio; Vignal, Philippe; Sarmiento, Adel; García, Daniel O.; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2014-01-01

    In this paper we present PetIGA, a high-performance implementation of Isogeometric Analysis built on top of PETSc. We show its use in solving nonlinear and time-dependent problems, such as phase-field models, by taking advantage of the high-continuity of the basis functions granted by the isogeometric framework. In this work, we focus on the Cahn-Hilliard equation and the phase-field crystal equation.
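
    For reference, the Cahn-Hilliard equation in its standard form is

    \[
      \frac{\partial c}{\partial t}
      \;=\; \nabla \cdot \Bigl( M\, \nabla \bigl( f'(c) - \epsilon^{2} \Delta c \bigr) \Bigr),
    \]

    where c is the concentration, M the mobility, f(c) a double-well free energy and epsilon an interface-width parameter. The fourth-order spatial operator is what makes high-continuity isogeometric bases attractive: C^1 (or higher) continuity allows a direct primal discretization without splitting the equation into a second-order system.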

  4. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  5. Practices and Processes of Leading High Performance Home Builders in the Upper Midwest

    Energy Technology Data Exchange (ETDEWEB)

    Von Thoma, Ed [Univ. of Minnesota, St. Paul, MN (United States). NorthernSTAR Building America Partnership; Ojzcyk, Cindy [Univ. of Minnesota, St. Paul, MN (United States). NorthernSTAR Building America Partnership

    2012-12-01

    The NorthernSTAR Building America Partnership team proposed this study to gain insight into the business, sales, and construction processes of successful high performance builders. The knowledge gained by understanding the high performance strategies used by individual builders, as well as the process each followed to move from traditional builder to high performance builder, will be beneficial in proposing more in-depth research to yield specific action items to assist the industry at large transform to high performance new home construction. This investigation identified the best practices of three successful high performance builders in the upper Midwest. In-depth field analysis of the performance levels of their homes, their business models, and their strategies for market acceptance were explored.

  6. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
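
    For reference, the Taylor-series (deterministic) propagation mentioned above is, to first order,

    \[
      \bar{y} \;\approx\; f(\bar{x}),
      \qquad
      \operatorname{Var}(y) \;\approx\; \sum_{i}\sum_{j}
      \frac{\partial f}{\partial x_i}\,
      \frac{\partial f}{\partial x_j}\,
      \operatorname{Cov}(x_i, x_j),
    \]

    which reduces to Var(y) ≈ Σ_i (∂f/∂x_i)² σ_i² for independent inputs; the statistical alternative replaces the derivatives with response-surface coefficients and estimates the output distribution by simulation.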

  7. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    % rutile TiO2 nanoparticles showed excellent protection of the PBO braid; only 7.5% strength loss was observed. To optimize the degree of protection of the sheath loaded with UV-blocker particles, computational models were developed to find the protective-layer thickness/weight and the amount of UV-blocker particles giving maximum protection with the lightest protective layer and the minimum amount of particles. The simulated results were higher than the experimental results because of the tendency of nanoparticles to agglomerate in real experiments. The third approach to achieving maximum protection with minimum added weight is constructing a sleeve of woven Spectra® fabric (an ultra-high-molecular-weight polyethylene (UHMWPE) high-performance fiber known to resist UV). Covering the PBO braid with Spectra® woven fabric provides a hybrid structure with two compatible components that can share the load and thus maintain the high strength-to-weight ratio. Although the Spectra® fabric had the maximum cover factor, 20% of visible light and about 15% of UV still penetrated the fabric. This UV-VIS transmittance reduced the protective performance of the Spectra® woven layer. It is therefore suggested that the Spectra® fabric be coated with a thin layer (mentioned earlier) containing UV blocker for additional protection while maintaining its strength contribution to the hybrid structure. To maximize the strength-to-weight ratio of the hybrid structure (with a core of PBO braid and a sheath of Spectra® woven fabric), an established finite element model was utilized. The theoretical results indicated that by controlling the bending rigidity of the filling yarn of the Spectra® fabric, the extension at peak load of the woven fabric in the warp (loading) direction could be controlled to match the braid extension at peak load. The match in the extension at peak load of the two

  8. Building Trust in High-Performing Teams

    Directory of Open Access Journals (Sweden)

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  9. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    Energy Technology Data Exchange (ETDEWEB)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters; the models/methodologies employed to determine the parameters; and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  10. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain by the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters; the models/methodologies employed to determine the parameters; and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  11. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and to the remote users connected to them, the networking components must be optimized against a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  12. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (extended MHD including two-fluid effects such as the Hall term, electron inertia and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program for maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation that evaluates it efficiently. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form, leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
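
    A toy sketch of the quasi-symbolic r.h.s. idea (not the OpenGGCM generator itself): the stencil is written once symbolically, and a printer emits the inner-loop C expression that a backend could then specialize per architecture.

    # Symbolic r.h.s. -> generated C expression, via SymPy's C printer.
    import sympy as sp

    dx = sp.Symbol('dx')
    i = sp.Symbol('i', integer=True)
    u = sp.IndexedBase('u')            # conserved variable on the grid

    flux = lambda v: v**2 / 2          # Burgers-like toy flux
    rhs = -(flux(u[i + 1]) - flux(u[i - 1])) / (2 * dx)

    print(sp.ccode(rhs))               # C code for the inner-loop body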

  13. Empirical testing of Kotler's high-performance factors to increase sales growth

    Directory of Open Access Journals (Sweden)

    Oren Dayan

    2010-12-01

    Full Text Available Purpose and/or objectives: The primary objective of this study is to empirically test Kotler's (2003) high-performance model, which ensures an increase in sales growth. More specifically, the study explores the influence of process variables (as measured by marketing strategies), resources management (as measured by the management of labour, materials, machines, information technology and energy) and organisational variables (as measured by TQM and organisational culture) on sales growth in the food, motorcar and high-technology manufacturing industries. Problem investigated: Various research studies suggest that the managers of firms are continuously challenged in their attempts to increase their sales (Morre, 2007; Pauwels, Silva Risso, Srinivasan & Hanssens, 2004: 142-143; Gray & Hayes, 2007: 1). Kotler (2003) suggests a model that leads to a high-performing business. The question is posed as to whether this model can be used to increase sales growth in all businesses. This study seeks to develop a generic model to increase sales growth across industries by using an adapted version of Kotler's (2003) high-performance model. The study investigates the application of this adapted model to the food, motorcar and high-technology manufacturing industries. Design and/or methodology and/or approach: An empirical causal research design that includes 770 marketing and product development practitioners from multinational food, motorcar and high-technology manufacturing firms was used in this study. A response rate of 76.1% was achieved, as only 571 usable questionnaires were returned. The internal reliability and discriminant validity of the measuring instrument were assessed by calculating Cronbach alpha coefficients and conducting an exploratory factor analysis, respectively. Structural equation modelling (SEM) was used to statistically test the relationships between the independent variables (marketing strategies, resource management, TQM and

  14. High performance computation of landscape genomic models including local indicators of spatial association.

    Science.gov (United States)

    Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S

    2017-09-01

    With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is, the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing for large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure in the models, to reduce the occurrence of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an F_ST outlier method (the FDIST approach in Arlequin), and compared their results. samβada - open source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms the other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
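
    The genotype-environment association at the heart of this approach can be sketched with a toy logistic model. The snippet below is a hedged illustration, not samβada's implementation: a logistic fit of allele presence against one environmental predictor, scored against a constant-only model with a likelihood-ratio (G) test on simulated data.

        # Hedged sketch of a genotype-environment association test.
        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        env = rng.normal(size=500)                     # environmental predictor
        p = 1 / (1 + np.exp(-(0.3 + 0.8 * env)))       # simulated association
        geno = rng.binomial(1, p)                      # allele presence/absence

        full = sm.Logit(geno, sm.add_constant(env)).fit(disp=0)
        null = sm.Logit(geno, np.ones((len(geno), 1))).fit(disp=0)

        G = 2 * (full.llf - null.llf)                  # likelihood-ratio statistic
        p_value = chi2.sf(G, df=1)
        print(f"G = {G:.2f}, p = {p_value:.3g}")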

  15. The computer program LIAR for the simulation and modeling of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  16. Use of simplified models in the performance assessment of a high-level waste repository system in Japan

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mohanty, Sitakanta; Kanno, Takeshi; Tochigi, Yoshikatsu

    2005-01-01

    This paper explores simplifications to the H12 performance assessment model to enhance computational performance in Monte Carlo analyses. It is shown that reference case results similar to those of the H12 model can be derived by describing the buffer material surrounding a waste package as a planar body. Other possible simplifications to the performance assessment model, in areas related to the stratification of the host rock transmissivity domain and solubility constraints in the buffer material, are explored. (author)

  17. High-performance web services for querying gene and variant annotation.

    Science.gov (United States)

    Xin, Jiwen; Mark, Adam; Afrasiabi, Cyrus; Tsueng, Ginger; Juchler, Moritz; Gopal, Nikhil; Stupp, Gregory S; Putman, Timothy E; Ainscough, Benjamin J; Griffith, Obi L; Torkamani, Ali; Whetzel, Patricia L; Mungall, Christopher J; Mooney, Sean D; Su, Andrew I; Wu, Chunlei

    2016-05-06

    Efficient tools for data management and integration are essential for many aspects of high-throughput biology. In particular, annotations of genes and human genetic variants are commonly used but highly fragmented across many resources. Here, we describe MyGene.info and MyVariant.info, high-performance web services for querying gene and variant annotation information. These web services are currently accessed more than three million times per month. They also demonstrate a generalizable cloud-based model for organizing and querying biological annotation information. MyGene.info and MyVariant.info are provided as high-performance web services, accessible at http://mygene.info and http://myvariant.info. Both are offered free of charge to the research community.
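
    Both services expose simple REST query endpoints. A hedged usage sketch follows, assuming the v3 (MyGene.info) and v1 (MyVariant.info) query routes documented on the project sites; the gene symbol and rsid are arbitrary examples.

        # Hedged usage sketch for the MyGene.info / MyVariant.info REST services.
        import requests

        # Query gene annotation by symbol
        r = requests.get("http://mygene.info/v3/query",
                         params={"q": "symbol:CDK2", "species": "human"})
        for hit in r.json().get("hits", []):
            print(hit.get("symbol"), hit.get("name"))

        # Query variant annotation by dbSNP id
        v = requests.get("http://myvariant.info/v1/query",
                         params={"q": "dbsnp.rsid:rs58991260"})
        print(v.json().get("total"))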

  18. Design of high-speed planing hulls for the improvement of resistance and seakeeping performance

    Directory of Open Access Journals (Sweden)

    Dong Jin Kim

    2013-03-01

    Full Text Available High-speed vessels require good resistance and seakeeping performance for safe operation in rough seas. The resistance and seakeeping performance of high-speed vessels vary significantly depending on their hull forms. In this study, three planing hulls that have almost the same displacement and principal dimensions are designed, and the hydrodynamic characteristics of those hulls are estimated by high-speed model tests. All model ships are deep-V type planing hulls. The bows of the no. 2 and no. 3 model ships are designed to be advantageous for wave-piercing in rough water. The no. 2 and no. 3 model ships have concave and straight forebody cross-sections, respectively, and their length-to-beam ratios are larger than that of the no. 1 model. In calm water tests, the running attitude and resistance of the model ships are measured at various speeds, and motion tests in regular waves are performed to measure the heave and pitch motion responses of the model ships. The required power of the no. 1 (VPS) model is the smallest, but its vertical motion amplitudes in waves are the largest. The no. 2 (VWC) model shows the smallest motion amplitudes in waves, but needs the greatest power at high speed. The resistance and seakeeping performance of the no. 3 (VWS) model ship are in the middle of the three model ships. In regular waves, the no. 1 model ship experiences ‘fly over’ phenomena around its resonant frequency. Vertical accelerations at specific locations, such as the F.P. and the center of gravity of the model ships, are measured at their resonant frequency. It is necessary to measure accelerations by accelerometers or other devices in model tests for the accurate prediction of vertical accelerations in real ships.

  19. Performance and Costs of Ductless Heat Pumps in Marine-Climate High-Performance Homes -- Habitat for Humanity The Woods

    Energy Technology Data Exchange (ETDEWEB)

    Lubliner, Michael [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program; Howard, Luke [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program; Hales, David [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program; Kunkle, Rick [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program; Gordon, Andy [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program; Spencer, Melinda [Building America Partnership for Improved Residential Construction, Olympia, WA (United States). Washington States Univ. Energy Program

    2016-02-18

    The Woods is a Habitat for Humanity (HFH) community of ENERGY STAR Homes Northwest (ESHNW)-certified homes located in the marine climate of Tacoma/Pierce County, Washington. This research report builds on an earlier preliminary draft 2014 BA report, and includes significant billing analysis and cost-effectiveness research from a collaborative, ongoing ductless heat pump (DHP) research effort for Tacoma Public Utilities (TPU) and Bonneville Power Administration (BPA). This report focuses on the results of field testing, modeling, and monitoring of ductless mini-split heat pump hybrid heating systems in seven homes built and first occupied at various times between September 2013 and October 2014. The report also provides WSU documentation of high-performance home observations, lessons learned, and stakeholder recommendations for builders of affordable high-performance housing such as HFH.

  20. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial for developers and software architects who are designing systems to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  1. High school and college biology: A multi-level model of the effects of high school biology courses on student academic performance in introductory college biology courses

    Science.gov (United States)

    Loehr, John Francis

    The issue of student preparation for college study in science has been an ongoing concern for both college-bound students and educators of various levels. This study uses a national sample of college students enrolled in introductory biology courses to address the relationship between high school biology preparation and subsequent introductory college biology performance. Multi-Level Modeling was used to investigate the relationship between students' high school science and mathematics experiences and college biology performance. This analysis controls for student demographic and educational background factors along with factors associated with the college or university attended. The results indicated that high school course-taking and science instructional experiences have the largest impact on student achievement in the first introductory college biology course. In particular, enrollment in courses such as high school Calculus and Advanced Placement (AP) Biology, along with biology course content that focuses on developing a deep understanding of the topics, is found to be positively associated with student achievement in introductory college biology. On the other hand, experiencing high numbers of laboratory activities, demonstrations, and independent projects, along with higher levels of laboratory freedom, is negatively associated with achievement. These findings are relevant to high school biology teachers, college students, their parents, and educators looking beyond the goal of high school graduation.

  2. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  3. Numerical Model of High Strength Concrete

    Science.gov (United States)

    Wang, R. Z.; Wang, C. Y.; Lin, Y. L.

    2018-03-01

    The purpose of this paper is to present a three-dimensional constitutive model based on the concept of equivalent uniaxial strain. Closed Menetrey-Willam (CMW) failure surfaces, which combine the Menetrey-Willam meridian with a cap model, are introduced in this paper. The Saenz stress-strain model is applied and adjusted using the ultimate strength parameters from the CMW failure surface to reflect the current stress or strain condition. High strength concrete (HSC) under tri-axial non-proportional loading is considered, and the model presented in this paper yields good predictions.

  4. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.
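
    As a hedged illustration of graph-based performance prediction in the spirit of ATAMM: given per-task compute times attached to an algorithm graph, the longest (critical) path bounds single-iteration latency. The graph, weights and use of networkx below are invented for the example and are not the ADAS model itself.

        # Critical-path bound on the latency of a small task graph (illustrative).
        import networkx as nx

        g = nx.DiGraph()
        g.add_weighted_edges_from([
            ("in", "f1", 2.0), ("in", "f2", 3.0),   # weight = compute time charged
            ("f1", "f3", 4.0), ("f2", "f3", 1.0),   # to the edge's source task
            ("f3", "out", 2.0),
        ])

        path = nx.dag_longest_path(g, weight="weight")
        latency = nx.dag_longest_path_length(g, weight="weight")
        print(path, latency)   # lower bound on one graph execution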

  5. Protective design of critical infrastructure with high performance concretes

    International Nuclear Information System (INIS)

    Riedel, W.; Nöldgen, M.; Stolz, A.; Roller, C.

    2012-01-01

    Conclusions: High performance concrete constructions will allow innovative design solutions for critical infrastructures. Validation of engineering methods can rest on large- and model-scale experiments conducted on conventional concrete structures. New consistent impact experiments show extreme protection potential for UHPC. Modern FEM with concrete models and explicit rebar can model HPC and UHPC penetration resistance. SDOF and TDOF approaches are valuable design tools at the local and global level. Combining at least two of the three design methods (FEM, XDOF, EXP) allows reliable prediction and efficient, innovative designs.

  6. Understanding the Implementation of Knowledge Management in High-Performance Schools in Malaysia

    OpenAIRE

    Rahmad Sukor Ab. Samad; Mohamed Iskandar Rahmad Sukor; Darwyan Syah; Eneng Muslihah

    2014-01-01

    This study intends to assess the implementation of policies in high-performance schools (HPS). One hundred fifty-two administrators in 52 HPS were selected using full sampling. Only two factors serve as contributors in the knowledge management model for high-performing schools in Malaysia: school culture and school strategy. The correlation indicated that all 10 factors, namely, mission and vision, schoo...

  7. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    2001-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements at the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling, and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)

  8. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    1999-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements at the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling, and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)

  9. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with an historical introduction about what people saw as high performance message communication in their time and how that developed into what is today known as standard computer network communication. It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols as well as necessary details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  10. Modeling High Pressure Micro Hollow Cathode Discharges

    National Research Council Canada - National Science Library

    Boeuf, Jean-Pierre; Pitchford, Leanne

    2004-01-01

    This report results from a contract tasking CPAT as follows: The Grantee will perform theoretical modeling of point, surface, and volume high-pressure plasmas created using Micro Hollow Cathode Discharge sources...

  11. Progression of performance assessment modeling for the Yucca Mountain disposal system for spent nuclear fuel and high-level radioactive waste

    International Nuclear Information System (INIS)

    Rechard, Rob P.; Wilson, Michael L.; Sevougian, S. David

    2014-01-01

    This paper summarizes the evolution of consequence modeling for a repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain in southern Nevada. The discussion includes four early performance assessments (PAs) conducted between 1982 and 1995 to support selection and to evaluate feasibility and three major PAs conducted between 1998 and 2008 to evaluate viability, recommend the site, and assess compliance. Modeling efforts in 1982 estimated dose to individuals 18 km from the site caused by volcanic eruption through the repository. Modeling in 1984 estimated releases via the groundwater pathway because of container corrosion. In combination, this early analysis supported the first environmental assessment. Analysts in 1991 evaluated cumulative release, as specified in the 1985 US radiation protection standards, via the groundwater pathway over 10^4 yr at a 5-km boundary by modeling waste degradation and flow/transport in the saturated and unsaturated zones. By 1992, however, the US Congress mandated a change to a dose measure. Thus, the 1993 and 1995 performance assessments improved modeling of waste container degradation to provide better estimates of radionuclide release rates out to 10^6 yr. The 1998 viability assessment was a major step in modeling complexity. Dose at a 20-km boundary from the repository was evaluated through 10^6 yr for undisturbed conditions using more elaborate modeling of flow and the addition of modules for modeling infiltration, drift seepage, the chemical environment, and biosphere transport. The 2000 assessment for the site recommendation refined the analysis. Seepage modeling was greatly improved and waste form degradation modeling included more chemical dependence. The 2008 compliance assessment for the license application incorporated the influence of the seismicity on waste package performance to evaluate dose at an ∼18-km boundary. - Highlights: • Evolution of the consequence models to simulate physical

  12. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research completed in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to review the advantages and drawbacks of using each particular concrete type. Two concrete recipes are presented, one for concrete used in rigid road pavements and another for self-compacting concrete.

  13. Performance concerns for high duty fuel cycle

    International Nuclear Information System (INIS)

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low costs are the three major requirements of the NPP today. The translation of these three requirements into the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that high duty/energy fuel designs are needed to achieve the required performance. The concerns for high duty design include, among other items, core design methods, advanced safety analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters, including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment, MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs, such as cladding corrosion and incomplete RCCA insertion (IRI), have been resolved. As the core duty becomes more aggressive, other new issues need to be addressed, such as Axial Offset Anomaly. These new issues are being addressed by combining the new design with advanced methodologies to meet the demanding needs of NPPs. This paper discusses the ability and strategy to meet high duty core requirements and operational flexibility while maintaining an acceptable balance among all technical issues. (authors)

  14. Performance of a High-Fidelity 4kW-Class Engineering Model PPU and Integration with HiVHAc System

    Science.gov (United States)

    Pinero, Luis R.; Kamhawi, Hani; Shilo, Vlad

    2016-01-01

    The High Voltage Hall Accelerator (HiVHAc) propulsion system consists of a thruster, power processing unit (PPU), and propellant feed system. An engineering model PPU was developed by Colorado Power Electronics, Inc., funded by NASA's Small Business Innovative Research Program. This PPU uses an innovative 3-phase resonant converter to deliver 4 kW of discharge power over a wide range of input and output voltage conditions. The PPU includes a digital control interface unit that automatically controls the PPU and a xenon flow control module (XFCM). It interfaces with a control computer to receive high-level commands and relay telemetry through a MIL-STD-1553B interface. The EM PPU was thoroughly tested at GRC for functionality and performance at temperature limits, and demonstrated total efficiencies as high as 95 percent. Integrated testing of the unit was performed with the HiVHAc thruster and the XFCM to demonstrate closed-loop control of discharge current with anode flow. Initiation of the main discharge and power throttling were also successfully demonstrated, and discharge oscillations were characterized.

  15. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...
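
    The mapping of textual data to a numeric representation described above can be illustrated as follows. This sketch uses scikit-learn's vectorizer as a stand-in for the concept; it is not the HPTA API.

        # Illustrative mapping of raw text to a document-term matrix.
        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = ["high performance text analytics",
                "text analytics on unstructured data"]

        vec = TfidfVectorizer()
        X = vec.fit_transform(docs)       # sparse document-term matrix
        print(X.shape, vec.get_feature_names_out())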

  16. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and to reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.
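
    The abstract does not state the scale-up rule used; a common basis for comparing centrifuges of different scales is equivalent-area (Sigma) theory, sketched below under that assumption with purely illustrative numbers.

        # Hedged sketch: equivalent-area (Sigma) scale-up for centrifuges.
        def scaled_flow_rate(q_small, sigma_small, sigma_large):
            """Match Q/Sigma between scales: Q2 = Q1 * (Sigma2 / Sigma1)."""
            return q_small * (sigma_large / sigma_small)

        # e.g. lab device Sigma = 300 m^2 at 0.5 L/min; production Sigma = 60000 m^2
        print(scaled_flow_rate(0.5, 300.0, 60000.0), "L/min at production scale")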

  17. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2014-11-01

    Full Text Available This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PMSGs). A systematic and sequential methodology for the design of PMSGs is proposed, with a high performance wind generator as the design model. Aiming at high induced voltage, low harmonic distortion and high generator efficiency, optimal generator parameters such as the pole-arc to pole-pitch ratio and the stator slot shoe dimensions are determined with the proposed technique, using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding practical voltage and current demands. Specifically, the windings are connected in series to increase the output voltage at low wind speed, and in parallel at high wind speed so that electricity is generated even if one winding fails, thereby also enhancing reliability. A PMSG was designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output power for the double three-phase winding and the six-phase winding is 10.64 and 11.13 kW, respectively. In addition, 24 Ω load experiments show that the efficiencies of the double three-phase winding and the six-phase winding are 96.56% and 98.54%, respectively, verifying the high performance of the proposed design.

  18. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    African Journals Online (AJOL)

    Performance Modelling of Steam Turbine Performance using Fuzzy Logic (Journal of Applied Sciences and Environmental Management): a Fuzzy Inference System for predicting the performance of a steam turbine.

  19. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    Science.gov (United States)

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with adsorbent scraped from the aforementioned chromatographic plate. An additional HPLC column was packed with a C18-type silica-based adsorbent of 5 μm particle diameter (LiChrosorb RP-18 from Merck). The dependence of plate height on mobile phase flow velocity (for HPLC and PPEC) and on mobile phase migration distance (for TLC) was determined using a test solute (prednisolone succinate). The highest performance among the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture composed of six hormones. © 2010 Elsevier B.V. All rights reserved.
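
    The dependence of plate height on mobile-phase velocity reported above is commonly modeled with the van Deemter equation H(u) = A + B/u + C·u. The sketch below evaluates this relationship with invented coefficients; the paper reports measured plate heights, not these fits.

        # Van Deemter curve with illustrative coefficients (not from the paper).
        import numpy as np

        def plate_height(u, A=1.5e-6, B=2.0e-9, C=1.0e-3):
            return A + B / u + C * u

        u = np.linspace(1e-4, 5e-3, 200)   # mobile-phase velocity, m/s
        H = plate_height(u)
        i = np.argmin(H)                   # minimum sits at u = sqrt(B/C)
        print(f"optimal velocity ~ {u[i]:.2e} m/s, minimum plate height ~ {H[i]:.2e} m")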

  20. Development of a High Performance Spacer Grid

    Energy Technology Data Exchange (ETDEWEB)

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in an LWR fuel assembly is a key structural component that supports the fuel rods and enhances heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. Eighteen different spacer grid candidates have been invented and applied for domestic and US patents; among the candidates, 16 have been chosen for patenting. 2. Two kinds of spacer grids were finally selected for the advanced LWR fuel after detailed performance tests on the candidates and on commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities were set up and the relevant test technologies established. 4. Mechanical/structural analysis models and technology for spacer grid performance were developed, and the analysis results were compared with the test results to enhance the reliability of the models.

  1. Human performance modeling for system of systems analytics: soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). Toward this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network-centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses the modeling of soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  2. High-performance zig-zag and meander inductors embedded in ferrite material

    International Nuclear Information System (INIS)

    Stojanovic, Goran; Damnjanovic, Mirjana; Desnica, Vladan; Zivanov, Ljiljana; Raghavendra, Ramesh; Bellew, Pat; Mcloughlin, Neil

    2006-01-01

    This paper describes the design, modeling, simulation and fabrication of zig-zag and meander inductors embedded in low- or high-permeability soft ferrite material. These microinductors have been developed with ceramic co-processing technology. We compare the electrical properties of the zig-zag and meander inductor structures installed as surface-mount devices. An equivalent model of the new structures is presented, suitable for design, circuit simulation and prediction of the performance of the proposed inductors. The relatively high impedance values allow these microinductors to be used in high-frequency suppressors. The components were tested in the frequency range of 1 MHz-3 GHz using an Agilent 4287A RF LCR meter. The measurements confirm the validity of the analytical model.

  3. Do Danes enjoy a high performing chronic care system?

    DEFF Research Database (Denmark)

    Hernández-Quevedo, Christina; Olejaz, Maria; Juul, Annegrete

    2012-01-01

    The trends in population health in Denmark are similar to those in most Western European countries. Major health issues include, among others, the high prevalence of chronic illnesses and lifestyle-related risk factors such as obesity, tobacco, physical inactivity and alcohol. This has pressed the health system towards a model of provision of care based on the management of chronic conditions. While the Chronic Care Model was introduced in 2005, the Danish health system does not fulfil the ten key preconditions that would characterise a high-performing chronic care system. As revealed in a recent report, the fragmented structure of the Danish health system poses challenges in providing effectively coordinated care to patients with chronic diseases.

  4. Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework

    Science.gov (United States)

    Lin, Cheng-Min

    Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating Continuous-Time Markov Chains (CTMCs), and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two low-cost, high-performance embedded controllers, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that Cortex-M3 outperforms ARM7 in terms of both power consumption and interrupt overhead.
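
    The GSPN-to-CTMC-to-MRM pipeline reduces, at its core, to solving a stationary distribution and weighting it with per-state rewards. The following is a minimal sketch of that step; the 3-state chain and the reward values are invented for illustration.

        # Solve pi Q = 0, sum(pi) = 1, then compute an expected reward.
        import numpy as np

        # Generator matrix Q (rows sum to zero): states = idle, ISR, nested ISR
        Q = np.array([[-2.0,  2.0,  0.0],
                      [ 1.0, -3.0,  2.0],
                      [ 0.0,  4.0, -4.0]])

        # Bordered linear system: stack the normalization constraint under Q^T
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        reward = np.array([0.1, 0.8, 1.2])   # e.g. power draw (W) per state
        print("steady state:", pi, "expected power:", pi @ reward)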

  5. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to provide an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  6. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  9. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the Brazilian economy. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can build partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  10. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL]; Britt, Keith A. [ORNL]

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems, as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  11. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended either for concentrator-incorporating or for flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.
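
    A hedged example of a category (1) model above - a first-order calculation of PV output from rated power, irradiance and cell temperature - is sketched below. The coefficients are typical values, not taken from the assessed models.

        # First-order PV performance model (illustrative coefficients).
        def pv_power(g, t_cell, p_stc=5000.0, g_stc=1000.0, gamma=-0.004):
            """DC power (W) from plane-of-array irradiance g (W/m^2) and cell
            temperature t_cell (degC), relative to standard test conditions."""
            return p_stc * (g / g_stc) * (1.0 + gamma * (t_cell - 25.0))

        print(pv_power(g=800.0, t_cell=45.0))   # ~3680 W for a 5 kW array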

  12. High-Performance Buildings – Value, Messaging, Financial and Policy Mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    McCabe, Molly

    2011-02-22

    At the request of the Pacific Northwest National Laboratory, an in-depth analysis of the rapidly evolving state of real estate investments, high-performance building technology, and interest in efficiency was conducted by HaydenTanner, LLC, for the U.S. Department of Energy (DOE) Building Technologies Program. The analysis objectives were • to evaluate the link between high-performance buildings and their market value • to identify core messaging to motivate owners, investors, financiers, and others in the real estate sector to appropriately value and deploy high-performance strategies and technologies across new and existing buildings • to summarize financial mechanisms that facilitate increased investment in these buildings. To meet these objectives, work consisted of a literature review of relevant writings, examination of existing and emergent financial and policy mechanisms, interviews with industry stakeholders, and an evaluation of the value implications through financial modeling. This report documents the analysis methodology and findings, conclusion and recommendations. Its intent is to support and inform the DOE Building Technologies Program on policy and program planning for the financing of high-performance new buildings and building retrofit projects.

  13. High performance APCS conceptual design and evaluation scoping study

    International Nuclear Information System (INIS)

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities, or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, or performing facility design and permitting activities.

  14. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra-high burn-up fuel pellets for a PCI remedy - Development of fabrication technology for annular fuel pellets ○ Development of High Performance Cladding Materials - Irradiation test of HANA claddings in the Halden research reactor and evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials - Development of the manufacturing technology for the dual-cooled fuel cladding tubes ○ Irradiated Fuel Performance Evaluation Technology Development - Development of a performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology ○ Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis of the properties of a reactor core with dual-cooled fuel - Feasibility evaluation of the dual-cooled fuel core ○ Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concepts for the dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  15. CFD modelling of hydrogen stratification in enclosures: Model validation and application to PAR performance

    Energy Technology Data Exchange (ETDEWEB)

    Hoyes, J.R., E-mail: james.hoyes@hsl.gsi.gov.uk; Ivings, M.J.

    2016-12-15

    Highlights: • The ability of CFD to predict hydrogen stratification phenomena is investigated. • Contrary to expectation, simulations on tetrahedral meshes under-predict mixing. • Simulations on structured meshes give good agreement with experimental data. • CFD model used to investigate the effects of stratification on PAR performance. • Results show stratification can have a significant effect on PAR performance. - Abstract: Computational Fluid Dynamics (CFD) models are maturing into useful tools for supporting safety analyses. This paper investigates the capabilities of CFD models for predicting hydrogen stratification in a containment vessel using data from the NEA/OECD SETH2 MISTRA experiments. Further simulations are then carried out to illustrate the qualitative effects of hydrogen stratification on the performance of Passive Autocatalytic Recombiner (PAR) units. The MISTRA experiments have well-defined initial and boundary conditions, which make them well suited for use in a validation study. Results are presented for the sensitivity to mesh resolution and mesh type. Whilst the predictions are shown to be largely insensitive to the mesh resolution, they are surprisingly sensitive to the mesh type. In particular, tetrahedral meshes are found to induce small unphysical convection currents that result in molecular diffusion and turbulent mixing being under-predicted. This behaviour is not unique to the CFD model used here (ANSYS CFX) and, furthermore, it may affect simulations run on other non-aligned meshes (meshes that are not aligned perpendicular to gravity), including non-aligned structured meshes. Following existing best practice guidelines can help to identify potential unphysical predictions, but as an additional precaution, consideration should be given to using gravity-aligned meshes for modelling stratified flows. CFD simulations of hydrogen recombination in the Becker Technologies THAI facility are presented with high and low PAR positions.

  16. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
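
    The computational pattern described - a block eigensolver whose dominant kernel is a sparse matrix times a block of vectors (SpMM) - can be sketched with SciPy's LOBPCG as a stand-in for the paper's solver. The matrix and parameters below are illustrative only.

        # Block eigensolver on a sparse symmetric matrix; the core kernel A @ X
        # is exactly the SpMM operation discussed above.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg

        n, k = 2000, 4
        rng = np.random.default_rng(1)
        A = sp.random(n, n, density=1e-3, random_state=1)
        A = (A + A.T) * 0.5                 # symmetrize the sparse matrix

        X = rng.standard_normal((n, k))     # block of k starting vectors
        vals, vecs = lobpcg(A.tocsr(), X, largest=False, maxiter=200, tol=1e-6)
        print("lowest eigenvalues:", np.sort(vals))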

  17. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and temperatures near absolute zero are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of running simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

  18. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Consideration of all these factors is a precondition for a truly integrated practice, and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  19. Using high-performance mathematical modelling tools to predict erosion and sediment fluxes in peri-urban catchments

    Science.gov (United States)

    Pereira, André; Conde, Daniel; Ferreira, Carla S. S.; Walsh, Rory; Ferreira, Rui M. L.

    2017-04-01

    Deforestation and urbanization generally lead to increased soil erosion and, through the indirect effect of increased overland flow, to higher peak flood discharges. Mathematical modelling tools can be helpful for predicting the spatial distribution of erosion and the morphological changes on the channel network. This is especially useful for predicting the impacts of land-use changes in parts of the watershed, namely due to urbanization. However, given the size of the computational domain (normally the watershed itself), the need for high spatial resolution data to model sediment transport processes accurately, and the possible need to model transcritical flows, the computational cost is high and requires high-performance computing techniques. The aim of this work is to present the latest developments of the hydrodynamic and morphological model STAV2D and its applicability to predicting runoff and erosion at the watershed scale. STAV2D was developed at CEris - Instituto Superior Técnico, Universidade de Lisboa - as a tool particularly appropriate for modelling strong transient flows in complex and dynamic geometries. It is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV2D features solid transport in both Eulerian and Lagrangian forms, with the aim of describing the transport of fine natural sediments as well as of large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013; Conde et al., 2015). STAV2D now supports fully distributed and heterogeneous simulations where multiple different hardware devices can be used to accelerate computation time within a unified object-oriented approach: the source code for CPU and GPU has the same compilation units and requires no device-specific branches, like

  20. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  1. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
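
    For reference, the ERD/ERS percentage used as the dependent variable is conventionally defined (Pfurtscheller's classical formulation is assumed here; the paper's exact baseline window is not restated) as the relative band-power change between a test interval A and a reference interval R:

        \mathrm{ERD/ERS}\,(\%) \;=\; \frac{A - R}{R} \times 100

    Negative values indicate desynchronization (ERD), positive values synchronization (ERS).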

  2. High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gygi, Francois [Univ. of California, Davis, CA (United States). Dept. of Computer Science; Galli, Giulia [Univ. of Chicago, IL (United States); Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-03

    This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of the structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation on large-scale, leadership-class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra. The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems

  3. Performing arts medicine: A research model for South Africa

    Directory of Open Access Journals (Sweden)

    Karendra Devroop

    2014-11-01

    Full Text Available Performing Arts Medicine has developed into a highly specialised field over the past three decades. The Performing Arts Medical Association (PAMA) has been the leading proponent of this unique and innovative field, with ground-breaking research studies, symposia, conferences and journals dedicated specifically to the medical problems of performing artists. Similar to sports medicine, performing arts medicine caters specifically to the medical problems of performing artists, including musicians and dancers. In South Africa there is a tremendous lack of knowledge of the field and, unlike our international counterparts, we do not have specialised clinical settings that cater for the medical problems of performing artists. There is also a tremendous lack of research on performance-related medical problems of performing artists in South Africa. Accordingly, the purpose of this paper is to present an overview of the field of performing arts medicine, highlight some of the significant findings from recent research studies and present a model for conducting research into the field of performing arts medicine. It is hoped that this research model will lead to increased research on the medical problems of performing artists in South Africa.

  4. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  5. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Science.gov (United States)

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high-resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology become more sophisticated, it is becoming increasingly hard for non-computational scientists to use the systems that incorporate such models effectively. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
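
    The start/stop/pause/roll-back capability described above can be realized with periodic state checkpointing; the Python sketch below is a generic illustration of that pattern (the class and the model interface with t, state and step() are hypothetical, not the environment's actual API):

        import copy

        class SimulationController:
            """Generic start/pause/roll-back wrapper around a stepwise simulation.

            `model` is assumed to expose step(), an integer time counter `t`,
            and a copyable `state`; checkpoints let an analyst roll the
            simulation back to an earlier step, apply a different
            intervention, and resume.
            """

            def __init__(self, model):
                self.model = model
                self.checkpoints = {}                  # time step -> saved state

            def run(self, n_steps, checkpoint_every=1):
                for _ in range(n_steps):
                    if self.model.t % checkpoint_every == 0:
                        self.checkpoints[self.model.t] = copy.deepcopy(self.model.state)
                    self.model.step()

            def rollback(self, t):
                # Restore an earlier saved state and discard later checkpoints.
                self.model.state = copy.deepcopy(self.checkpoints[t])
                self.model.t = t
                self.checkpoints = {k: v for k, v in self.checkpoints.items() if k <= t}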

  6. SISYPHUS: A high performance seismic inversion factory

    Science.gov (United States)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. Software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  7. Performance of chromatographic systems to model soil-water sorption.

    Science.gov (United States)

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

    A systematic approach for evaluating how well chromatographic systems model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These variance contributions are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in emulating soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models. They have been shown to predict the soil-water sorption coefficients of several tested herbicides well. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients. Copyright © 2012 Elsevier B.V. All rights reserved.
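
    Assuming, as the approach above implies, that the three error sources are independent, their contributions add, and the overall variance of a correlation between soil-water partition coefficients and chromatographic retention factors can be written as:

        \sigma^2_{\mathrm{total}} \;=\; \sigma^2_{\mathrm{soil}} \;+\; \sigma^2_{\mathrm{chrom}} \;+\; \sigma^2_{\mathrm{dissim}}

    A chromatographic system emulates soil-water sorption well when the dissimilarity term is small compared with the two measurement terms.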

  8. High performance conductometry

    International Nuclear Information System (INIS)

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  9. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  10. Transport modelling and gyrokinetic analysis of advanced high performance discharges

    International Nuclear Information System (INIS)

    Kinsey, J.E.; Imbeaux, F.; Staebler, G.M.; Budny, R.; Bourdelle, C.; Fukuyama, A.; Garbet, X.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modelling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and advanced tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. E x B shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET and AUG tokamaks. GLF23 transport modelling and gyrokinetic stability analysis indicate that E x B shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of E x B shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and E x B shear stabilization can dominate parallel velocity shear destabilization allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent E x B shear quenching of the turbulent

  11. Transport modeling and gyrokinetic analysis of advanced high performance discharges

    International Nuclear Information System (INIS)

    Kinsey, J.; Imbeaux, F.; Bourdelle, C.; Garbet, X.; Staebler, G.; Budny, R.; Fukuyama, A.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modeling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and Advanced Tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. ExB shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET, and AUG tokamaks. GLF23 transport modeling and gyrokinetic stability analysis indicate that ExB shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of ExB shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and ExB shear stabilization can win out over parallel velocity shear destabilization allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent ExB shear quenching of the turbulent

  12. High-performance sensorless nonlinear power control of a flywheel energy storage system

    International Nuclear Information System (INIS)

    Amodeo, S.J.; Chiacchiarini, H.G.; Solsona, J.A.; Busada, C.A.

    2009-01-01

    Flywheel energy storage systems (FESS) can be used to store and release energy in high-power pulsed systems. Based on the use of a homopolar synchronous machine in a FESS, a high-performance model-based power flow control law is developed using the feedback linearization methodology. This law is based on the voltage space vector reference frame machine model. To reduce the magnetic losses, a pulse amplitude modulation driver for the armature is better suited. The restrictions in amplitude and phase imposed by the driver are also included. A full-order Luenberger observer for the torque angle and rotor speed is developed to implement a sensorless control strategy. Simulation results are presented to illustrate the performance.
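
    The full-order Luenberger observer mentioned above has, for a linear (or linearized) discrete-time system, the standard structure sketched below in Python; the placeholder matrices are purely illustrative and are not the machine model from the paper:

        import numpy as np

        def luenberger_step(x_hat, u, y, A, B, C, L):
            """One update of a full-order discrete-time Luenberger observer.

            x_hat: current state estimate; u: input; y: measured output.
            The correction L @ (y - C @ x_hat) drives the estimation error
            to zero when the gain L is chosen so that (A - L C) is stable.
            """
            return A @ x_hat + B @ u + L @ (y - C @ x_hat)

        # Toy 2-state example (placeholder matrices, not the FESS model).
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])
        L = np.array([[0.5], [1.0]])
        x_hat = np.zeros(2)
        x_hat = luenberger_step(x_hat, np.array([1.0]), np.array([0.2]), A, B, C, L)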

  13. Best Practices Guide for High-Performance Indian Office Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Reshma [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sartor, Dale [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghatikar, Girish [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-04-01

    This document provides best practice guidance and energy-efficiency recommendations for the design, construction, and operation of high-performance office buildings in India. Through a discussion of learnings from exemplary projects and inputs from experts, it provides recommendations that can potentially help achieve (1) enhanced working environments, (2) economic construction/faster payback, (3) reduced operating costs, and (4) reduced greenhouse gas (GHG) emissions. It also provides ambitious (but achievable) energy performance benchmarks, both as adopted targets during building modeling (design phase) and during measurement and verification (operations phase). These benchmarks have been derived from a set of representative best-in-class office buildings in India. The best practices strategies presented in this guide would ideally help in delivering high performance in terms of a triad of energy efficiency, cost efficiency, and occupant comfort and well-being. These best practices strategies and metrics should be normalized, that is, corrected to account for building characteristics, diversity of operations, weather, and materials and construction methods.

  14. Administrator Leadership Styles and Their Impact on School Nursing Part II. A High-Performance School Nurse-Building Administrator Relationship Model.

    Science.gov (United States)

    Davis, Charles R; Lynch, Erik J

    2018-06-01

    There is a significant disparity in roles, responsibilities, education, training, and expertise between the school nurse and building administrator. Because of this disparity, a natural chasm must be bridged to optimize student health, safety, well-being, and achievement in the classroom while meeting the individual needs of both professionals. This article constructs and presents a new school nurse-building administrator relationship model, the foundation of which is formed from the pioneering and seminal work on high-performance professional relationships and outcomes of Lewin and Drucker. The authors posit that this new model provides the framework for successful school nurse-building administrator interactions that will lead to optimal student outcomes.

  15. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  16. Can High-Performance Equipment Lead to a Low-Performance Building?

    Energy Technology Data Exchange (ETDEWEB)

    Jonlin, Duane; Thornton, Brian A.; Rosenberg, Michael I.

    2016-08-22

    The performance-based compliance alternative available in most energy codes, intended to provide energy efficiency equivalent to that of prescriptive compliance while allowing innovation and design flexibility, can instead result in sub-standard energy performance in both the short and the long term. The potential deficiencies in modeled buildings originate with subtleties in the energy modeling rules, allowing building systems that consume more energy than their real-world, prescriptively-designed counterparts. This performance gap is exacerbated over subsequent decades as less efficient permanent features of the building remain while elements with shorter lives are regularly upgraded in most buildings. This paper summarizes an investigation into the topic for Pacific Northwest National Laboratory and the City of Seattle, including identification of the principal deficiencies exploited in the modeling path, and several potential code amendments that could resolve these deficiencies and establish better equivalency between prescriptive and performance compliance paths. The study, focusing on Seattle and Washington State energy codes, offers lessons and implications for other jurisdictions and energy codes.

  17. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  18. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and main performance of the PHARAO laser source flight model. PHARAO is a laser-cooled cesium clock specially designed for operation in space, and the laser source is one of its main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, low electric power consumption, a wide range of operating temperatures, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on clock performance, have been verified in operational conditions.

  19. High performance fuel technology development : Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeongyong; Jeong, Y. H.; Park, S. Y.

    2012-04-01

    The superior in-pile performance of the HANA claddings has been verified by successful irradiation tests in the Halden research reactor up to a high burn-up of 67 GWd/MTU. The in-pile corrosion and creep resistances of HANA claddings were improved by 40% and 50%, respectively, over Zircaloy-4. HANA claddings have also been irradiated in a commercial reactor for up to 2 reactor cycles, showing corrosion resistance 40% better than that of ZIRLO in the same fuel assembly. Long-term out-of-pile performance tests of the candidates for the next-generation cladding materials have produced highly reliable test results. The final candidate alloys were selected, and they showed corrosion resistance 50% better than the foreign advanced claddings, which is beyond the original target. The LOCA-related properties were also improved by 20% over the foreign advanced claddings. In order to establish the optimal manufacturing process for the inner and outer claddings of the dual-cooled fuel, 18 different kinds of specimens were fabricated with various cold working and annealing conditions. Based on the performance tests and various out-of-pile test results obtained from the specimens, the optimal manufacturing process was established for the inner and outer cladding tubes of the dual-cooled fuel

  20. Carbon nanomaterials for high-performance supercapacitors

    OpenAIRE

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes the recent progress in the development of high-performance supercapacitors bas...

  1. Impacts of government subsidies on pricing and performance level choice in Energy Performance Contracting: A two-step optimal decision model

    International Nuclear Information System (INIS)

    Lu, Zhijian; Shao, Shuai

    2016-01-01

    Highlights: • An ESCO optimal decision model considering governmental subsidies is proposed. • Optimal price and performance level are deduced via a two-stage model. • Demand, profit, and performance level increase with increasing subsidies. • ESCO’s market strategy should firstly focus on high energy consumption industries. • Governmental subsidies standard in different industries should be differentiated. - Abstract: Government subsidies generally play a crucial role in pricing and the choice of performance levels in Energy Performance Contracting (EPC). However, the existing studies pay little attention to how the Energy Service Company (ESCO) prices and chooses performance levels for EPC with government subsidies. To fill such a gap, we propose a joint optimal decision model of pricing and performance level in EPC considering government subsidies. The optimization of the model is achieved via a two-stage process. At the first stage, given a performance level, ESCOs choose the best price; and at the second stage, ESCOs choose the optimal performance level for the optimal price. Furthermore, we carry out a numerical analysis to illuminate such an optimal decision mechanism. The results show that both price sensitivity and performance level sensitivity have significant effects on the choice of performance levels with government subsidies. Government subsidies can induce higher performance levels of EPC, the demand for EPC, and the profit of ESCO. We suggest that ESCO’s market strategy should firstly focus on high energy consumption industries with government subsidies and that government subsidies standard adopted in different industries should be differentiated according to the market characteristics and energy efficiency levels of various industries.
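
    The two-stage structure can be sketched numerically: for each candidate performance level, first find the profit-maximizing price, then keep the level whose optimal price earns the most. The linear demand and quadratic cost used below are hypothetical stand-ins, not the paper's model:

        import numpy as np

        def optimize_epc(levels, prices, subsidy, a=100.0, bp=2.0, bl=5.0, c=10.0):
            """Two-step optimization: inner search over price, outer over level.

            Hypothetical demand d = a - bp*p + bl*level + subsidy and unit
            cost c + level**2; only the two-step structure mirrors the
            decision model discussed above.
            """
            best = None
            for lvl in levels:
                # Stage 1: best price for this performance level.
                profits = [(a - bp * p + bl * lvl + subsidy) * (p - c - lvl ** 2)
                           for p in prices]
                i = int(np.argmax(profits))
                # Stage 2: keep the level whose optimal price earns the most.
                if best is None or profits[i] > best[0]:
                    best = (profits[i], lvl, prices[i])
            return best  # (profit, performance level, price)

        profit, level, price = optimize_epc(levels=np.linspace(1, 5, 9),
                                            prices=np.linspace(11, 60, 50),
                                            subsidy=20.0)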

  2. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
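
    As a concrete instance of such quantitative performance measures (the metrics below are common choices for discrimination and calibration, not necessarily the ones used in the paper), a fitted logistic model can be validated on held-out data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss, roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic data, purely for illustration.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        p = model.predict_proba(X_te)[:, 1]

        print("AUC:", roc_auc_score(y_te, p))        # discrimination
        print("Brier:", brier_score_loss(y_te, p))   # accuracy of the probabilities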

  3. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    Science.gov (United States)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high-contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to the testbed's high-order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  4. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code. This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to a Clojure REPL with Leiningen.

  5. Constitutive modeling of SMA SMP multifunctional high performance smart adaptive shape memory composite

    International Nuclear Information System (INIS)

    Jarali, Chetan S; Raja, S; Upadhya, A R

    2010-01-01

    Materials design involving the thermomechanical constitutive modeling of shape memory alloy (SMA) and shape memory polymer (SMP) composites is a key topic in the development of smart adaptive shape memory composites (SASMC). In this work, a constitutive model for SASMC is developed. First, a one-dimensional SMA model, which can simulate the pseudoelastic (PE) and shape memory effects (SME) is presented. Subsequently, a one-dimensional SMP model able to reproduce the SME is addressed. Both SMA and SMP models are based on a single internal state variable, namely the martensite fraction and the frozen fraction, which can be expressed as a function of temperature. A consistent form of the analytical solution for the SMP model is obtained using the fourth-order Runge–Kutta method. Finally, the SASMC constitutive model is proposed, following two analytical homogenization approaches. One approach is based on an equivalent inclusion method and the other approach is the rule of mixtures. The SMA and SMP constitutive models are validated independently with experimental results. However, the validation of the composite model is performed using the two homogenization approaches and a close agreement in results is observed. Results regarding the isothermal and thermomechanical stress–strain responses are analyzed as a function of SMA volume fraction. Further, it is concluded that the proposed composite model is able to reproduce consistently the overall composite response by taking into consideration not only the phase transformations, variable modulus and transformation stresses in SMA but also the variable modulus, the evolution of stored strain and thermal strain in the SMP
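
    The classical fourth-order Runge-Kutta step used to integrate the SMP evolution equation has the standard form below; the right-hand side is a generic f(t, y) here, not the constitutive model itself:

        def rk4_step(f, t, y, h):
            """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Example: exponential decay dy/dt = -y, whose exact solution is exp(-t).
        y, t, h = 1.0, 0.0, 0.1
        for _ in range(10):
            y = rk4_step(lambda t, y: -y, t, y, h)
            t += h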

  6. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  7. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    Science.gov (United States)

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of further downstream turbines in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is therefore an efficient resource for addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning set-up is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
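
    The cited Gaussian wake model expresses the normalized velocity deficit as a self-similar Gaussian profile whose width grows linearly downstream. The sketch below restates its hub-height form under the stated assumptions (the wake growth rate k_star, in particular, is a placeholder value; see the cited paper for the full expression):

        import numpy as np

        def gaussian_wake_deficit(x, r, d0, ct, k_star=0.035):
            """Hub-height velocity deficit of a Gaussian wake model in the
            form of Bastankhah & Porte-Agel (2014).

            x: downstream distance, r: radial distance from the wake centre,
            d0: rotor diameter, ct: thrust coefficient, k_star: growth rate.
            """
            beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
            eps = 0.2 * np.sqrt(beta)                    # initial wake width / d0
            sigma = k_star * x + eps * d0                # wake width at distance x
            peak = 1.0 - np.sqrt(1.0 - ct * d0 ** 2 / (8.0 * sigma ** 2))
            return peak * np.exp(-r ** 2 / (2.0 * sigma ** 2))

        # Deficit 7 rotor diameters downstream, on the centreline, for CT = 0.8.
        print(gaussian_wake_deficit(x=7 * 80.0, r=0.0, d0=80.0, ct=0.8))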

  8. High-performance cement-based grouts for use in a nuclear waste disposal facility

    International Nuclear Information System (INIS)

    Onofrei, M.; Gray, M.N.

    1992-12-01

    National and international agencies have identified cement-based materials as prime candidates for sealing vaults that would isolate nuclear fuel wastes from the biosphere. Insufficient information is currently available to allow a reasonable analysis of the long-term performance of these sealing materials in a vault. A combined laboratory and modelling research program was undertaken to provide the necessary information for a specially developed high-performance cement grout. The results indicate that acceptable performance is likely for at least thousands of years and probably for much longer periods. The materials, which have proven to be effective in field applications, are shown to be virtually impermeable and highly leach resistant under vault conditions. Special plasticizing additives used in the material formulation enhance the physical characteristics of the grout without detriment to its chemical durability. Neither modelling nor laboratory testing has yet provided a definitive assessment of the grout's longevity. However, none of the results of these studies has contraindicated the use of high-performance cement-based grouts in vault sealing applications. (Author) (24 figs., 6 tabs., 21 refs.)

  9. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  10. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  11. Is it better to be average? High and low performance as predictors of employee victimization.

    Science.gov (United States)

    Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L

    2014-03-01

    Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.

  12. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  13. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted particular attention with the rise of smartphones, because coupling such devices with smartphones enables continuous health monitoring in patients' daily lives. It is expected that high-performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high-performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high-performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single-crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  14. URBAN MODELLING PERFORMANCE OF NEXT GENERATION SAR MISSIONS

    Directory of Open Access Journals (Sweden)

    U. G. Sefercik

    2017-09-01

    Full Text Available In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of acquiring high-quality digital surface models (DSMs) of urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest landforms, with an absolute accuracy of 8–10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.

  15. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    emerging technology for various applications, there is a need for a Li-S battery performance model; however, developing such models represents a challenging task due to the batteries' complex ongoing chemical reactions. Therefore, a literature review was performed to summarize the electrical circuit models (ECMs) used for modeling the performance behavior of Li-S batteries. The studied Li-S pouch cell was tested in the laboratory in order to parametrize four basic ECM topologies. These topologies were compared by analyzing their voltage estimation accuracy values, which were obtained for different battery current profiles. Based on these results, the 3 R-C ECM was chosen and the Li-S battery cell discharging performance model with current-dependent parameters was derived and validated.
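
    In a 3 R-C equivalent circuit of the kind selected above, the terminal voltage is the open-circuit voltage minus the ohmic drop and three relaxation overpotentials. The sketch below uses constant placeholder parameters; a Li-S model such as the one described would make them current- and state-dependent:

        import numpy as np

        def ecm_3rc_voltage(i, dt, ocv, r0, rs, cs):
            """Terminal voltage of a 3 R-C equivalent circuit model.

            i: current array [A] (positive = discharge); ocv: open-circuit
            voltage per step; r0: ohmic resistance; rs, cs: the three R-C
            pairs. Each overpotential follows dv/dt = -v/(R*C) + i/C,
            integrated here with forward Euler.
            """
            v_rc = np.zeros(3)
            v_out = []
            for k, ik in enumerate(i):
                for j in range(3):
                    v_rc[j] += dt * (-v_rc[j] / (rs[j] * cs[j]) + ik / cs[j])
                v_out.append(ocv[k] - r0 * ik - v_rc.sum())
            return np.array(v_out)

        # Placeholder parameters, purely illustrative.
        n = 100
        v = ecm_3rc_voltage(i=np.full(n, 2.0), dt=1.0, ocv=np.full(n, 2.15),
                            r0=0.05, rs=[0.02, 0.03, 0.04], cs=[5e2, 2e3, 1e4])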

  16. Determination of performance criteria for high-level solidified nuclear waste

    Energy Technology Data Exchange (ETDEWEB)

    Heckman, R.A.; Holdsworth, T.

    1979-05-07

    To minimize radiological risk from the operation of a waste management system, performance limits on volatilization, particulate dispersion, and dissolution characteristics of solidified high level waste must be specified. The results show clearly that the pre-emplacement environs are more limiting in establishing the waste form performance criteria than the post-emplacement environs. Absolute values of expected risk are very sensitive to modeling assumptions. The transportation and interim storage operations appear to be most limiting in determining the performance characteristics required. The expected values of risk do not rely upon the repositories remaining intact over the potentially hazardous lifetime of the waste.

  17. Determination of performance criteria for high-level solidified nuclear waste

    International Nuclear Information System (INIS)

    Heckman, R.A.; Holdsworth, T.

    1979-01-01

    To minimize radiological risk from the operation of a waste management system, performance limits on volatilization, particulate dispersion, and dissolution characteristics of solidified high level waste must be specified. The results show clearly that the pre-emplacement environs are more limiting in establishing the waste form performance criteria than the post-emplacement environs. Absolute values of expected risk are very sensitive to modeling assumptions. The transportation and interim storage operations appear to be most limiting in determining the performance characteristics required. The expected values of risk do not rely upon the repositories remaining intact over the potentially hazardous lifetime of the waste

  18. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems

  19. High Performance Macromolecular Material

    National Research Council Canada - National Science Library

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  20. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  1. How motivation affects academic performance: a structural equation modelling analysis.

    Science.gov (United States)

    Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G

    2013-03-01

    Few studies in medical education have studied the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. The Structural Equation Modelling analysis technique was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well (Chi-square = 1.095, df = 3, p = 0.778, RMSEA = 0.000). This model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups, as expected. In conclusion, RAM correlated positively with academic performance through a deep strategy towards study and higher study effort. This model seems valid in medical education in subgroups such as males, females, and students selected by qualitative and weighted lottery selection.

  2. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  3. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  4. The Fuel Performance Analysis of LWR Fuel containing High Thermal Conductivity Reinforcements

    International Nuclear Information System (INIS)

    Kim, Seung Su; Ryu, Ho Jin

    2015-01-01

    The thermal conductivity of fuel affects many performance parameters, including the fuel centerline temperature, fission gas release and internal pressure. In addition, an enhanced safety margin might be expected when the thermal conductivity of the fuel is improved by the addition of high thermal conductivity reinforcements. Therefore, the effects of thermal conductivity enhancement on the fuel performance of reinforced UO2 fuel with high thermal conductivity compounds should be analyzed. In this study, we analyzed the fuel performance of modified UO2 fuel with high thermal conductivity reinforcements by using the FRAPCON-3.5 code. The fissile density and mechanical properties of the modified fuel are considered the same as those of standard UO2 fuel. The thermal conductivity enhancement factors of the modified fuels were obtained from the Maxwell model, considering the volume fraction of reinforcements.
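
    The Maxwell model referenced above has a simple closed form for dilute spherical inclusions in a matrix. The sketch below implements it; the conductivity values and volume fractions are illustrative assumptions, not numbers from the paper.

```python
# Sketch: Maxwell model for the effective thermal conductivity of a composite
# with dispersed spherical inclusions. Input values are illustrative only.
def maxwell_k_eff(k_matrix: float, k_particle: float, vol_frac: float) -> float:
    """Effective conductivity (W/m-K) of a matrix with dispersed spheres."""
    num = k_particle + 2 * k_matrix + 2 * vol_frac * (k_particle - k_matrix)
    den = k_particle + 2 * k_matrix - vol_frac * (k_particle - k_matrix)
    return k_matrix * num / den

k_uo2 = 3.5      # W/m-K, typical UO2 near operating temperature (illustrative)
k_reinf = 200.0  # W/m-K, hypothetical high-conductivity reinforcement phase
for v in (0.05, 0.10, 0.15):
    print(v, round(maxwell_k_eff(k_uo2, k_reinf, v), 2))
```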

  5. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  6. A formalism to generate probability distributions for performance-assessment modeling

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1990-01-01

    A formalism is presented for generating probability distributions of parameters used in performance-assessment modeling. The formalism is used when data are either sparse or nonexistent. The appropriate distribution is a function of the known or estimated constraints and is chosen to maximize a quantity known as Shannon's informational entropy. The formalism is applied to a parameter used in performance-assessment modeling. The functional form of the model that defines the parameter, data from the actual field site, and natural analog data are analyzed to estimate the constraints. A beta probability distribution of the example parameter is generated after finding four constraints. As an example of how the formalism is applied to the site characterization studies of Yucca Mountain, the distribution is generated for an input parameter in a performance-assessment model currently used to estimate compliance with disposal of high-level radioactive waste in geologic repositories, 10 CFR 60.113(a)(2), commonly known as the ground water travel time criterion. 8 refs., 2 figs
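
    A minimal numerical sketch of the maximum-entropy formalism described above: on a discretized parameter range, choose the probability mass function that maximizes Shannon's entropy subject to the known constraints. Here only normalization and an assumed mean are imposed; the study derived four constraints from the model, field data and natural analogs.

```python
# Sketch: discrete maximum-entropy distribution under constraints.
# The support and the mean constraint are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.01, 0.99, 99)     # discretized parameter support
target_mean = 0.3                   # example constraint from data/analogs

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))    # minimizing -H maximizes Shannon entropy

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                # normalization
    {"type": "eq", "fun": lambda p: (p * x).sum() - target_mean},  # mean constraint
]
p0 = np.full_like(x, 1.0 / x.size)
res = minimize(neg_entropy, p0, constraints=cons, bounds=[(0, 1)] * x.size)
p = res.x   # with more moment constraints this tightens toward e.g. a beta shape
print(p.sum(), (p * x).sum())
```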

  7. High-performance-vehicle technology. [fighter aircraft propulsion

    Science.gov (United States)

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  8. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code FAIR has been developed for analysing the performance of water cooled reactor fuel pins. It is capable of analysing high burnup fuels. This code has recently been used for analysing ten high burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of the code FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  9. Academic performance in high school as factor associated to academic performance in college

    Directory of Open Access Journals (Sweden)

    Mileidy Salcedo Barragán

    2008-12-01

    This study intends to find the relationship between academic performance in High School and College, focusing on Natural Sciences and Mathematics. It is a descriptive correlational study, and the variables were academic performance in High School, performance indicators and educational history. The correlations between variables were established with Spearman’s correlation coefficient. Results suggest that there is a positive relationship between academic performance in High School and Educational History, and a very weak relationship between performance in Science and Mathematics in High School and performance in College.

  10. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  11. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  12. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  13. Hydrological model performance and parameter estimation in the wavelet-domain

    Directory of Open Access Journals (Sweden)

    B. Schaefli

    2009-10-01

    This paper proposes a method for rainfall-runoff model calibration and performance analysis in the wavelet domain, by fitting the estimated wavelet power spectrum (a representation of the time-varying frequency content of a time series) of a simulated discharge series to that of the corresponding observed time series. As discussed in this paper, calibrating hydrological models so as to reproduce the time-varying frequency content of the observed signal can lead to different results than parameter estimation in the time domain. Therefore, wavelet-domain parameter estimation has the potential to give new insights into model performance and to reveal model structural deficiencies. We apply the proposed method to synthetic case studies and a real-world discharge modeling case study, and discuss how model diagnosis can benefit from an analysis in the wavelet domain. The results show that for the real-world case study of precipitation-runoff modeling for a high alpine catchment, the calibrated discharge simulation captures the dynamics of the observed time series better than the results obtained through calibration in the time domain. In addition, the wavelet-domain performance assessment of this case study highlights the frequencies that are not well reproduced by the model, which gives specific indications about how to improve the model structure.
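
    A minimal sketch of a wavelet-domain objective of the kind described above, assuming PyWavelets for the continuous wavelet transform; the discharge series, wavelet choice (Mexican hat) and scales are placeholders rather than the paper's setup.

```python
# Sketch: compare wavelet power of simulated vs. observed discharge and
# reduce the mismatch to a single objective value for a calibrator to minimize.
import numpy as np
import pywt

def wavelet_power(series, scales):
    coefs, _ = pywt.cwt(series, scales, "mexh")   # Mexican-hat CWT: (scales, time)
    return np.abs(coefs) ** 2

rng = np.random.default_rng(0)
t = np.arange(365)
q_obs = 5 + 2.0 * np.sin(2 * np.pi * t / 365) + 0.5 * rng.random(t.size)
q_sim = 5 + 1.8 * np.sin(2 * np.pi * t / 365 + 0.1) + 0.5 * rng.random(t.size)

scales = np.arange(1, 64)                  # scales spanning periods of interest
err = wavelet_power(q_sim, scales) - wavelet_power(q_obs, scales)
objective = np.sqrt(np.mean(err ** 2))     # RMSE in the wavelet domain
print(objective)
```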

  14. THE MODEL CHARACTERISTICS OF JUMP ACTIONS STRUCTURE OF HIGH PERFORMANCE FEMALE VOLLEYBALL PLAYERS

    Directory of Open Access Journals (Sweden)

    Stech M.

    2012-12-01

    The purpose of this study was to develop generalized and individual models of the jump actions of skilled female volleyball players. The main prerequisite for the development of the jump action models were the results of our earlier studies of the factor structure of jump actions of 10 sportswomen of the Polish volleyball team "Gedania" (Premier League) in the preparatory and competitive periods of the annual cycle of preparation. The athletes' age was 22.0 ± 2.9 years, their sports experience 8.1 ± 3.1 years, body height 181.9 ± 8.4 cm and body weight 72.8 ± 10.8 kg. Mathematical and statistical processing of the data (the calculation of M ± SD and of significant differences between the samples) was performed using the standard computer program STATISTICA 7.0. Based on the analysis of the factor structure of 20 jump actions of skilled women volleyball players, the 5 most informative indexes and their tentative values were determined and recommended for the formation of a generalized model of this structure. Comparison of individual models of jump actions of skilled women volleyball players with their generalized models in different periods of preparation can be used for the rational choice of means and methods for increasing the efficiency of the training process.

  15. The Effect of Covert Modeling on Communication Apprehension, Communication Confidence, and Performance.

    Science.gov (United States)

    Nimocks, Mittie J.; Bromley, Patricia L.; Parsons, Theron E.; Enright, Corinne S.; Gates, Elizabeth A.

    This study examined the effect of covert modeling on communication apprehension, public speaking anxiety, and communication competence. Students identified as highly communication apprehensive received covert modeling, a technique in which one first observes a model doing a behavior, then visualizes oneself performing the behavior and obtaining a…

  16. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    [Abstract not recovered. The record excerpt cites "A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.), followed by a fragment of the report's acronym list: HPC (High Performance Computing), IP/IPv4 (Internet Protocol, version 4), IPMC (Internet Protocol Multicast), LAN (Local Area Network), MCMD (Dr. Multicast), MPI.]

  17. Numerical research on the thermal performance of high altitude scientific balloons

    International Nuclear Information System (INIS)

    Dai, Qiumin; Xing, Daoming; Fang, Xiande; Zhao, Yingjie

    2017-01-01

    Highlights: • A model is presented to evaluate the IR radiation between translucent surfaces. • Comprehensive ascent and thermal models of balloons are established. • The effect of IR transmissivity on film temperature distribution is non-negligible. • Atmospheric IR radiation is the primary thermal factor for balloons at night. • Solar radiation is the primary thermal factor for balloons during the day. - Abstract: Internal infrared (IR) radiation is an important factor that affects the thermal performance of high altitude balloons. The internal IR radiation is commonly neglected or treated as the IR radiation between opaque gray bodies. In this paper, a mathematical model which considers the IR transmissivity of the film is proposed to estimate the internal IR radiation. Comprehensive ascent and thermal models for high altitude scientific balloons are established. Based on the models, the thermal characteristics of a NASA super pressure balloon are simulated. The effects of the film's IR properties on the thermal behavior of the balloon are discussed in detail. The results are helpful for the design and operation of high altitude scientific balloons.

  18. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  19. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Science.gov (United States)

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  20. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  1. High performance polymeric foams

    International Nuclear Information System (INIS)

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  2. LIAR -- A computer program for the modeling and simulation of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  3. Development of a code and models for high burnup fuel performance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, M; Kitajima, S [Central Research Inst. of Electric Power Industry, Tokyo (Japan)

    1997-08-01

    First the high burnup LWR fuel behavior is discussed and the models necessary for its analysis are reviewed. The aspects of behavior considered are the changes of power history due to the higher enrichment, the temperature feedback due to fission gas release and the resultant degradation of gap conductance, axial fission gas transport in the fuel free volume, fuel conductivity degradation due to fission products in solution, and modification of the fuel micro-structure. The models developed for these phenomena, the modifications in the code, and the benchmark results, mainly based on the Risoe fission gas project, are presented. Finally the rim effect, which is observed only around the fuel periphery, is discussed, focusing on fuel conductivity degradation and swelling due to porosity development. (author). 18 refs, 13 figs, 3 tabs.

  4. High energy model for irregular absorbing particles

    International Nuclear Information System (INIS)

    Chiappetta, Pierre.

    1979-05-01

    In the framework of a high energy formulation of relativistic quantum scattering a model is presented which describes the scattering functions and polarization of irregular absorbing particles, whose dimensions are greater than the incident wavelength. More precisely in the forward direction an amplitude parametrization of eikonal type is defined which generalizes the usual diffraction theory, and in the backward direction a reflective model is used including a shadow function. The model predictions are in good agreement with the scattering measurements off irregular compact and fluffy particles performed by Zerull, Giese and Weiss (1977)

  5. Responsive design high performance

    CERN Document Server

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  6. Striving for Excellence Sometimes Hinders High Achievers: Performance-Approach Goals Deplete Arithmetical Performance in Students with High Working Memory Capacity

    Science.gov (United States)

    Crouzevialle, Marie; Smeding, Annique; Butera, Fabrizio

    2015-01-01

    We tested whether the goal to attain normative superiority over other students, referred to as performance-approach goals, is particularly distractive for high-Working Memory Capacity (WMC) students—that is, those who are used to being high achievers. Indeed, WMC is positively related to high-order cognitive performance and academic success, a record of success that confers benefits on high-WMC as compared to low-WMC students. We tested whether such benefits may turn out to be a burden under performance-approach goal pursuit. Indeed, for high achievers, aiming to rise above others may represent an opportunity to reaffirm their positive status—a stake susceptible to trigger disruptive outcome concerns that interfere with task processing. Results revealed that with performance-approach goals—as compared to goals with no emphasis on social comparison—the higher the students’ WMC, the lower their performance at a complex arithmetic task (Experiment 1). Crucially, this pattern appeared to be driven by uncertainty regarding the chances to outclass others (Experiment 2). Moreover, an accessibility measure suggested the mediational role played by status-related concerns in the observed disruption of performance. We discuss why high-stake situations can paradoxically lead high-achievers to sub-optimally perform when high-order cognitive performance is at play. PMID:26407097

  7. Delay model and performance testing for FPGA carry chain TDC

    International Nuclear Information System (INIS)

    Kang Xiaowen; Liu Yaqiang; Cui Junjian Yang Zhangcan; Jin Yongjie

    2011-01-01

    Time-of-flight (TOF) information would improve the performance of PET (positron emission tomography), and TDC design is a key technique. A carry chain TDC delay model is proposed. By changing the significant delay parameters of the model, the resulting differences in TDC performance were compared, and a Time-to-Digital Converter (TDC) based on the carry chain method was finally realized in an FPGA EP2C20Q240C8N, with a 69 ps LSB and maximum error below 2 LSB. This result meets the TOF requirement. A coaxial cable measuring method for TDC testing, requiring no high-precision test equipment, is also proposed. (authors)
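
    A common companion to such delay modeling is code-density (statistical) calibration of the carry-chain bin widths, which the sketch below simulates: random hits populate bins in proportion to each bin's physical delay, so hit counts estimate per-bin width. The bin count and the roughly 69 ps mean LSB echo the figures above, but the data are synthetic.

```python
# Sketch: code-density calibration of a carry-chain TDC (simulated counts).
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_hits = 128, 1_000_000
true_widths = rng.uniform(0.5, 1.5, n_bins)            # non-uniform carry delays
hits = rng.choice(n_bins, n_hits, p=true_widths / true_widths.sum())
counts = np.bincount(hits, minlength=n_bins)

clock_period_ps = 128 * 69.0                           # assume ~69 ps mean LSB
bin_width_ps = counts / n_hits * clock_period_ps       # estimated bin widths

# timestamp for a hit in bin k: widths of bins 0..k-1 plus half of bin k
edges = np.concatenate(([0.0], np.cumsum(bin_width_ps)))
timestamp_ps = edges[:-1] + bin_width_ps / 2
print(round(bin_width_ps.mean(), 1))                   # ~69 ps
```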

  8. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  9. High-performance ceramics. Fabrication, structure, properties

    International Nuclear Information System (INIS)

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program "Ceramic High-performance Materials" pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of design appropriate to the material. The program "Ceramic High-performance Materials" has resulted in contributions to the understanding of fundamental interrelationships in terms of materials science, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  10. Structured Innovation of High-Performance Wave Energy Converter Technology: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Jochem W. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Laird, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-01-25

    Wave energy converter (WEC) technology development has not yet delivered the desired commercial maturity nor, more importantly, the required techno-economic performance. The reasons for this have been recognized, and fundamental requirements for successful WEC technology development have been identified. This paper describes a multi-year project pursued in collaboration by the National Renewable Energy Laboratory and Sandia National Laboratories to innovate and develop new WEC technology. It specifies the project strategy, shows how this differs from the state-of-the-art approach, and presents some early project results. Based on the specification of fundamental functional requirements of WEC technology, structured innovation and systemic problem solving methodologies are applied to invent and identify new WEC technology concepts. Using Technology Performance Levels (TPL) as an assessment metric of the techno-economic performance potential, high performance technology concepts are identified and selected for further development. System performance is numerically modelled and optimized, and key performance aspects are empirically validated. The project deliverables are WEC technology specifications of high techno-economic performance technologies of TPL 7 or higher at TRL 3, with some key technology challenges investigated at higher TRL. These wave energy converter technology specifications will be made available to industry for further, full development and commercialisation (TRL 4 - TRL 9).

  11. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  12. Practices and Processes of Leading High Performance Home Builders in the Upper Midwest

    Energy Technology Data Exchange (ETDEWEB)

    Von Thoma, E.; Ojczyk, C.

    2012-12-01

    The NorthernSTAR Building America Partnership team proposed this study to gain insight into the business, sales, and construction processes of successful high performance builders. The knowledge gained by understanding the high performance strategies used by individual builders, as well as the process each followed to move from traditional builder to high performance builder, will be beneficial in proposing more in-depth research to yield specific action items to assist the industry at large in transforming to high performance new home construction. This investigation identified the best practices of three successful high performance builders in the upper Midwest. In-depth field analysis of the performance levels of their homes, their business models, and their strategies for market acceptance was conducted. All three builders commonly seek ENERGY STAR certification on their homes and implement strategies that would allow them to meet the requirements of the Building America Builders Challenge program. Their desire for continuous improvement, willingness to seek outside assistance, and ambition to be leaders in their field are common themes. Problem solving to overcome challenges was accepted as part of doing business. It was concluded that crossing the gap from code-based building to high performance building was a natural evolution for these leading builders.

  13. A parallel calibration utility for WRF-Hydro on high performance computers

    Science.gov (United States)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates that are specific to WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
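
    One possible shape for driving such a parallel calibration is sketched below using BeoPEST, the parallel PEST executable. The executable name ('beopest'), control file name ('wrfhydro.pst'), port, and the master/agent '/H' syntax are assumptions to check against the local PEST installation; each agent directory must hold its own copy of the model setup.

```python
# Sketch: launching a BeoPEST master and agents for a WRF-Hydro calibration.
# All file names, the port, and the command-line syntax are assumptions.
import socket
import subprocess

PST = "wrfhydro.pst"
PORT = 4004
N_AGENTS = 8
host = socket.gethostname()

# master process: owns the optimization and hands parameter sets to agents
master = subprocess.Popen(["beopest", PST, "/H", f":{PORT}"])

# agent processes: each runs WRF-Hydro with the parameters it is given,
# from its own pre-populated working directory
agents = [
    subprocess.Popen(["beopest", PST, "/H", f"{host}:{PORT}"], cwd=f"agent_{i:02d}")
    for i in range(N_AGENTS)
]

for proc in agents + [master]:
    proc.wait()
```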

  14. A multilateral modelling of Youth Soccer Performance Index (YSPI)

    Science.gov (United States)

    Bisyri Husin Musawi Maliki, Ahmad; Razali Abdullah, Mohamad; Juahir, Hafizan; Abdullah, Farhana; Ain Shahirah Abdullah, Nurul; Muazu Musa, Rabiu; Musliha Mat-Rasid, Siti; Adnan, Aleesha; Azura Kosni, Norlaila; Muhamad, Wan Siti Amalina Wan; Afiqah Mohamad Nasir, Nur

    2018-04-01

    This study aims to identify the most dominant factors influencing the performance of soccer players and to predict group performance for soccer players. A total of 184 youth soccer players from a Malaysian sport school and six soccer academies served as respondents of the study. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were computed to identify the most dominant factors, reducing the initial 26 parameters using the recommended factor loading of >0.5. Soccer performance was then predicted with a regression model. CFA revealed that sit and reach, vertical jump, VO2max, age, weight, height, sitting height, calf circumference (cc), medial upper arm circumference (muac), maturation, bicep, triceps, subscapular, suprailiac, and 5M, 10M, and 20M speed were the most dominant factors. Further index analysis formed the Youth Soccer Performance Index (YSPI), categorizing players into three groups: high, moderate, and low. The regression model for this study was significant (p < 0.001) with R2 = 0.8222, meaning the model explains about 82% of the variance in the set of variables. The parameters contributing significantly to the prediction of the YSPI are discussed. In conclusion, the precision of the prediction models derives from integrating multilateral factors for predicting potential soccer players, which can hopefully help create competitive soccer games.
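
    A minimal sketch of the regression step, assuming scikit-learn; the column names and the players.csv file are hypothetical stand-ins for the study's measured parameters.

```python
# Sketch: multiple linear regression predicting a performance index, with R^2.
# Columns and CSV file are illustrative assumptions, not the study's data.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

df = pd.read_csv("players.csv")
X = df[["vertical_jump", "vo2max", "height", "weight", "speed_20m"]]
y = df["yspi"]

reg = LinearRegression().fit(X, y)
print(r2_score(y, reg.predict(X)))   # cf. R^2 = 0.8222 reported in the study
```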

  15. Brain inspired high performance electronics on flexible silicon

    KAUST Repository

    Sevilla, Galo T.

    2014-06-01

    The brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state-of-the-art silicon industry components [1], they can perform 10^16 operations per second while consuming less power than an electric light bulb. In order to perform the same amount of computation with today's most advanced computers, the output of an entire power station would be needed. In that sense, to obtain brain-like computation, ultra-fast devices with ultra-low power consumption will have to be integrated in extremely reduced areas, achievable only if the brain's folded structure is mimicked. Therefore, to allow brain-inspired computation, a flexible and transparent platform will be needed to achieve foldable structures and their integration on asymmetric surfaces. In this work, we show a new method to fabricate 3D and planar FET architectures in a flexible and semitransparent silicon fabric without compromising performance, while maintaining the cost/yield advantage offered by silicon-based electronics.

  16. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
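
    The qualitative behavior described above can be reproduced with a toy message-passing cost model in which computation shrinks as 1/p while communication overhead grows with the processor count; the constants below are illustrative, not the paper's fitted parameters.

```python
# Sketch: toy latency/bandwidth performance model for a message-passing solver.
def predicted_time(t_serial, p, latency=1e-4, bytes_per_msg=8e4, bandwidth=1e8):
    t_comp = t_serial / p                              # ideal division of work
    t_comm = p * latency + bytes_per_msg / bandwidth   # grows with processor count
    return t_comp + t_comm

t1 = 10.0  # serial solve time (s), illustrative
for p in (1, 4, 16, 64, 256):
    tp = predicted_time(t1, p)
    speedup = t1 / tp
    # efficiency = speedup / p deteriorates as p grows, as the abstract notes
    print(p, round(speedup, 1), round(speedup / p, 2))
```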

  17. Performance capabilities of EDM of high carbon high chromium steel with copper and brass electrodes

    Science.gov (United States)

    Surekha, B.; Swain, Sudiptha; Suleman, Abu Jafar; Choudhury, Suvan Dev

    2017-07-01

    The paper addresses the statistical modeling of the input-output relationships of electric discharge machining (EDM). In the present work, the peak current (I), pulse-on time (T) and gap voltage of the EDM process are chosen as control parameters to analyze the performance of the process. The output characteristics, namely radial overcut, electrode wear rate (EWR) and metal removal rate (MRR), are treated as the responses. A full factorial design (FFD) of experiments has been used to conduct the experiments, and linear regression models are developed for the different process characteristics. In the experiments, high carbon, high chromium steel is used as the workpiece material, and brass and copper are used as electrode materials. It is important to note that the experimental conditions are kept similar while machining with the different electrode materials. The data obtained from the experiments have been used to develop regression models for the three process parameters for the two electrode materials.

  18. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way, while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  19. Advanced transport systems analysis, modeling, and evaluation of performances

    CERN Document Server

    Janić, Milan

    2014-01-01

    This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban area(s), electric and fuel cell passenger cars, high speed tilting trains, High Speed Rail (HSR), Trans Rapid Maglev (TRM), Evacuated Tube Transport system (ETT), advanced commercial subsonic and Supersonic Transport Aircraft (STA), conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans...

  20. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    Energy Technology Data Exchange (ETDEWEB)

    No, Young Gyu; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2015-10-15

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as the Aux. feedwater pump, the high pressure safety injection (HPSI) pump, etc., using smart sensing models based on AI techniques. In this work, the smart sensing model is developed first to predict the performance of the Aux. feedwater pump by estimating its flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to the development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with simulation data from the MARS code for station blackout (SBO) events. In this study, the smart sensing model for predicting the performance of the Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm is a way to find a function that can well express a dependent variable in terms of independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of the Aux. feedwater pump.
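
    A minimal sketch of the GMDH idea in its usual self-organizing form: each candidate "neuron" is a quadratic polynomial of two inputs fitted by least squares, and the best candidates (ranked by held-out error) feed the next layer. The data here are synthetic stand-ins for plant signals such as pressures and flowrate.

```python
# Sketch: a tiny GMDH (group method of data handling) with quadratic neurons.
import numpy as np
from itertools import combinations

def design(a, b):
    # quadratic polynomial basis of an input pair
    return np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = design(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)      # least-squares fit
        pred_va = design(X_va[:, i], X_va[:, j]) @ coef
        rmse = np.sqrt(np.mean((pred_va - y_va) ** 2))       # held-out error
        candidates.append((rmse, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]                                  # survivors feed next layer
    out_tr = np.column_stack([design(X_tr[:, i], X_tr[:, j]) @ c for _, i, j, c in best])
    out_va = np.column_stack([design(X_va[:, i], X_va[:, j]) @ c for _, i, j, c in best])
    return out_tr, out_va, best[0][0]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                                 # e.g., plant signals
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.1 * rng.normal(size=200)  # e.g., flowrate
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

err = np.inf
for _ in range(3):                                            # grow layers while error drops
    X_tr, X_va, new_err = gmdh_layer(X_tr, y_tr, X_va, y_va)
    if new_err >= err:
        break
    err = new_err
print(err)
```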

  1. Monitoring the performance of Aux. Feedwater Pump using Smart Sensing Model

    International Nuclear Information System (INIS)

    No, Young Gyu; Seong, Poong Hyun

    2015-01-01

    Many artificial intelligence (AI) techniques equipped with learning systems have recently been proposed to monitor sensors and components in NPPs. Therefore, the objective of this study is the development of an integrity evaluation method for safety critical components such as the Aux. feedwater pump, the high pressure safety injection (HPSI) pump, etc., using smart sensing models based on AI techniques. In this work, the smart sensing model is developed first to predict the performance of the Aux. feedwater pump by estimating its flowrate using the group method of data handling (GMDH). If the performance prediction is achieved by this feasibility study, the smart sensing model will be applied to the development of the integrity evaluation method for safety critical components. Also, the proposed algorithm for the performance prediction is verified by comparison with simulation data from the MARS code for station blackout (SBO) events. In this study, the smart sensing model for predicting the performance of the Aux. feedwater pump has been developed. In order to develop the smart sensing model, the GMDH algorithm is employed. The GMDH algorithm is a way to find a function that can well express a dependent variable in terms of independent variables. This method uses a data structure similar to that of multiple regression models. The proposed GMDH model can accurately predict the performance of the Aux. feedwater pump.

  2. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties involved in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  3. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware.

  4. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented into software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm using NVIDIA CUDA technology and computing libraries were made, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed depending on the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
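
    A CPU-side sketch of the solve step described above, assuming SciPy's dense Cholesky routines; a GPU version would substitute CUDA libraries (cuSOLVER/cuSPARSE, named here only as the analogue). The SPD matrix is a synthetic tridiagonal stand-in for the 2D finite-element stiffness matrix.

```python
# Sketch: solving the FEM system K u = f via Cholesky factorization (CPU).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n = 2000
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model matrix
f = np.ones(n)                                          # source term

c, low = cho_factor(K)        # K = L L^T (dense; sparse codes use e.g. CHOLMOD)
u = cho_solve((c, low), f)    # forward/back substitution
print(np.allclose(K @ u, f))
```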

  5. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    International Nuclear Information System (INIS)

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is 14 MeV neutron irradiation facilities for the generation of the necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  6. High-Rate Performance of Muon Drift Tube Detectors

    CERN Document Server

    Schwegler, Philipp

    The Large Hadron Collider (LHC) at the European Centre for Particle Physics, CERN, collides protons with an unprecedentedly high centre-of-mass energy and luminosity. The collision products are recorded and analysed by four big experiments, one of which is the ATLAS detector. In parallel with the first LHC run from 2009 to 2012, which culminated in the discovery of the last missing particle of the Standard Model of particle physics, the Higgs boson, planning of upgrades of the LHC for higher instantaneous luminosities (HL-LHC) is already progressing. The high instantaneous luminosity of the LHC puts high demands on the detectors with respect to radiation hardness and rate capability, which are further increased with the luminosity upgrade. In this thesis, the limitations of the Muon Drift Tube (MDT) chambers of the ATLAS Muon Spectrometer at the high background counting rates of the LHC, and the performance of new small-diameter muon drift tube (sMDT) detectors at the even higher background rates of the HL-LHC, are studied.

  7. Work domain constraints for modelling surgical performance.

    Science.gov (United States)

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of adjustment of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed significant suitability to the model, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.
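
    A minimal sketch of the fitting step, assuming the progression is encoded as an ordered structure index over normalized operative time; the series below is synthetic, and the cubic degree is an arbitrary choice rather than the paper's polynomial order.

```python
# Sketch: score a procedure by how well its progression through anatomical
# structures fits a polynomial model (R^2 as the suitability indicator).
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 60)                          # normalized surgery time
progression = 10 * t ** 2 + rng.normal(0, 0.4, t.size)  # observed structure index

coeffs = np.polyfit(t, progression, deg=3)         # polynomial model of progression
fitted = np.polyval(coeffs, t)
ss_res = np.sum((progression - fitted) ** 2)
ss_tot = np.sum((progression - progression.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot                    # cf. coefficients around 0.9
print(round(r_squared, 3))
```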

  8. A high-resolution global flood hazard model

    Science.gov (United States)

    Sampson, Christopher C.; Smith, Andrew M.; Bates, Paul B.; Neal, Jeffrey C.; Alfieri, Lorenzo; Freer, Jim E.

    2015-09-01

    Floods are a natural hazard that affect communities worldwide, but to date the vast majority of flood hazard research and mapping has been undertaken by wealthy developed nations. As populations and economies have grown across the developing world, so too has demand from governments, businesses, and NGOs for modeled flood hazard data in these data-scarce regions. We identify six key challenges faced when developing a flood hazard model that can be applied globally and present a framework methodology that leverages recent cross-disciplinary advances to tackle each challenge. The model produces return period flood hazard maps at ˜90 m resolution for the whole terrestrial land surface between 56°S and 60°N, and results are validated against high-resolution government flood hazard data sets from the UK and Canada. The global model is shown to capture between two thirds and three quarters of the area determined to be at risk in the benchmark data without generating excessive false positive predictions. When aggregated to ˜1 km, mean absolute error in flooded fraction falls to ˜5%. The full complexity global model contains an automatically parameterized subgrid channel network, and comparison to both a simplified 2-D only variant and an independently developed pan-European model shows the explicit inclusion of channels to be a critical contributor to improved model performance. While careful processing of existing global terrain data sets enables reasonable model performance in urban areas, adoption of forthcoming next-generation global terrain data sets will offer the best prospect for a step-change improvement in model performance.

  9. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.

    2015-01-01

    ...block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models...

  10. High-performance HR practices, positive affect and employee outcomes

    OpenAIRE

    Mostafa, Ahmed

    2017-01-01

    Purpose – The purpose of this paper is to provide insight into the affective or emotional mechanisms that underlie the relationship between high-performance HR practices (HPHRP) and employee attitudes and behaviours. Drawing on affective events theory (AET), this paper examines a mediation model in which HPHRP influence positive affect which in turn affects job satisfaction and organizational citizenship behaviours (OCBs). Design/methodology/approach – Two-wave data was collected from a sampl...

  11. High Performance Object-Oriented Scientific Programming in Fortran 90

    Science.gov (United States)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  12. High-performance control system for a heavy-ion medical accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Lancaster, H.D.; Magyary, S.B.; Sah, R.C.

    1983-03-01

    A high performance control system is being designed as part of a heavy ion medical accelerator. The accelerator will be a synchrotron dedicated to clinical and other biomedical uses of heavy ions, and it will deliver fully stripped ions at energies up to 800 MeV/nucleon. A key element in the design of an accelerator which will operate in a hospital environment is to provide a high performance control system. This control system will provide accelerator modeling to facilitate changes in operating mode, provide automatic beam tuning to simplify accelerator operations, and provide diagnostics to enhance reliability. The control system being designed utilizes many microcomputers operating in parallel to collect and transmit data; complex numerical computations are performed by a powerful minicomputer. In order to provide the maximum operational flexibility, the Medical Accelerator control system will be capable of dealing with pulse-to-pulse changes in beam energy and ion species.

  13. High-performance control system for a heavy-ion medical accelerator

    International Nuclear Information System (INIS)

    Lancaster, H.D.; Magyary, S.B.; Sah, R.C.

    1983-03-01

    A high performance control system is being designed as part of a heavy ion medical accelerator. The accelerator will be a synchrotron dedicated to clinical and other biomedical uses of heavy ions, and it will deliver fully stripped ions at energies up to 800 MeV/nucleon. A key element in the design of an accelerator which will operate in a hospital environment is to provide a high performance control system. This control system will provide accelerator modeling to facilitate changes in operating mode, provide automatic beam tuning to simplify accelerator operations, and provide diagnostics to enhance reliability. The control system being designed utilizes many microcomputers operating in parallel to collect and transmit data; complex numerical computations are performed by a powerful minicomputer. In order to provide the maximum operational flexibility, the Medical Accelerator control system will be capable of dealing with pulse-to-pulse changes in beam energy and ion species

  14. Performance of different radiotherapy workload models

    International Nuclear Information System (INIS)

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models, with a mean |observed − predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would be useful for patient scheduling and for describing workloads and case complexity.

  15. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
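
    The benchmarking comparison described here (root mean square error and correlation between modeled and observed NEE) is easy to state precisely. A hedged illustration of the two statistics, not tied to PEcAn's actual API:

        import numpy as np

        def benchmark_stats(modeled, observed):
            """RMSE and Pearson correlation against an observed NEE series,
            skipping time steps with missing observations (NaN)."""
            m = np.asarray(modeled, dtype=float)
            o = np.asarray(observed, dtype=float)
            ok = ~np.isnan(m) & ~np.isnan(o)
            rmse = np.sqrt(np.mean((m[ok] - o[ok]) ** 2))
            r = np.corrcoef(m[ok], o[ok])[0, 1]
            return rmse, r

        # toy daily NEE series: amplitude bias inflates RMSE even when the
        # seasonal phase is roughly right
        t = np.linspace(0.0, 2.0, 730)                  # two years
        observed = -2.0 * np.sin(2 * np.pi * t) + np.random.normal(0, 0.5, t.size)
        modeled = -5.0 * np.sin(2 * np.pi * t)          # exaggerated seasonality
        print(benchmark_stats(modeled, observed))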

  16. The implementation of sea ice model on a regional high-resolution scale

    Science.gov (United States)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters have provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for the years 2010-2011.

  17. High pressure common rail injection system modeling and control.

    Science.gov (United States)

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the high pressure common rail injection system, which contains three sub-models: a high pressure pump sub-model, a common rail sub-model and an injector sub-model, is a relatively complicated nonlinear system. The mathematical model is validated with Matlab and a virtual detailed simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate the performance of the proposed controller, the ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
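
    The ESO-based iPI scheme is model-free: a linear extended state observer estimates the lumped (unknown) rail-pressure dynamics as a total disturbance, and the intelligent-PI law cancels that estimate. A generic first-order sketch of the idea follows, with an illustrative toy plant and gains; it is not the authors' implementation:

        import numpy as np

        # toy first-order plant: y' = f(y, t) + b*u, with f unknown to the controller
        def plant_rhs(y, u, t):
            return -0.8 * y + 0.2 * np.sin(3.0 * t) + 2.0 * u   # true b = 2.0

        dt, b0 = 1e-3, 2.0        # sample time, rough input-gain estimate
        l1, l2 = 200.0, 1.0e4     # ESO gains (both observer poles at s = -100)
        kp, ki = 30.0, 100.0      # iPI gains
        z1 = z2 = ie = y = 0.0    # observer states, error integral, plant output
        r = 1.0                   # setpoint (normalized target rail pressure)

        for k in range(20000):
            t = k * dt
            e = r - y
            ie += e * dt
            u = (kp * e + ki * ie - z2) / b0   # cancel estimated disturbance z2
            y += plant_rhs(y, u, t) * dt       # plant update (explicit Euler)
            err = y - z1                       # linear ESO: z1 -> y, z2 -> f
            z1 += (z2 + b0 * u + l1 * err) * dt
            z2 += l2 * err * dt

        print("output after 20 s:", y, "setpoint:", r)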

  18. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  19. Techniques for Modeling Human Performance in Synthetic Environments: A Supplementary Review

    National Research Council Canada - National Science Library

    Ritter, Frank E; Shadbolt, Nigel R; Elliman, David; Young, Richard M; Gobet, Fernand; Baxter, Gordon D

    2003-01-01

    ... architectures including hybrid architectures, and agent and Belief, Desires and Intentions (BDI) architectures. A list of projects with high payoff for modeling human performance in synthetic environments is provided as a conclusion.

  20. Stability at high performance in the MAST spherical tokamak

    International Nuclear Information System (INIS)

    Buttery, R.J.; Akers, R.; Arends, E.

    2003-01-01

    The development of reliable H-modes on MAST, together with advances in heating power and a range of powerful diagnostics, has provided a platform to enable MAST to address some of the most important issues of tokamak stability. In particular the high β potential of the ST is highlighted with stable operation at β_N ∼ 5-6, β_T ∼ 16% and β_p as high as 1.9, confirmed by a range of profile diagnostics. Calculations indicate that β_N levels are in the vicinity of no-wall stability limits. Studies have provided the first identification of the Neoclassical Tearing Mode (NTM) in the ST, using its behaviour to quantitatively validate predictions of NTM theory, previously only applied to conventional tokamaks. Experiments have demonstrated that sawteeth play a strong role in triggering NTMs - by avoiding large sawteeth much higher β_N can be, and has been, reached. Further studies have confirmed the NTM's significance, with large islands observed using the 300 point Thomson diagnostic, and locking of large n=1 modes frequently leading to disruptions. H-mode plasmas are also limited by ELMs, with confinement degraded as ELM frequency rises. However, unlike in the conventional tokamak, the ELMs in high performing regimes on MAST (H_IPB98(y,2) ∼ 1) appear to be type III in nature. Modelling identifies instability to peeling modes, consistent with a type III interpretation, and shows considerable scope to raise pressure gradients (despite n=∞ ballooning theory predictions of instability) before ballooning type modes (perhaps associated with type I ELMs) occur. Finally, sawteeth are shown not to remove the q=1 surface in the ST - other promising models are being explored. Thus research on MAST is not only demonstrating stable operation at high performance levels, and developing methods to control instabilities; it is also providing detailed tests of the stability physics and models applicable to conventional tokamaks, such as ITER. (author)

  1. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were achieved through the introduction of software crushers without the need to further increase the computational load and GPU resources. Finally, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation

  2. The eGo grid model: An open source approach towards a model of German high and extra-high voltage power grids

    Science.gov (United States)

    Mueller, Ulf Philipp; Wienholt, Lukas; Kleinhans, David; Cussmann, Ilka; Bunke, Wolf-Dieter; Pleßmann, Guido; Wendiggensen, Jochen

    2018-02-01

    There are several power grid modelling approaches suitable for simulations in the field of power grid planning. The restrictive policies of grid operators, regulators and research institutes concerning their original data and models have led to an increased interest in open source approaches to grid models based on open data. By including all voltage levels between 60 kV (high voltage) and 380 kV (extra high voltage), we dissolve the common distinction between transmission and distribution grid in energy system models and utilize a single, integrated model instead. An open data set, primarily for Germany, which can be used for non-linear, linear and linear-optimal power flow methods, was developed. This data set consists of an electrically parameterised grid topology as well as allocated generation and demand characteristics for present and future scenarios at high spatial and temporal resolution. The usability of the grid model was demonstrated by performing exemplary power flow optimizations. Based on a marginal-cost-driven power plant dispatch subject to grid restrictions, congested power lines were identified. Continuous validation of the model is necessary in order to reliably model storage and grid expansion in ongoing research.
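
    For the linear and linear-optimal methods mentioned, the core computation is a DC power flow: line susceptances and net bus injections determine voltage angles and hence line flows. A self-contained toy example (three buses with made-up values; the actual eGo model uses a full open-source toolchain):

        import numpy as np

        # 3-bus toy network: (from_bus, to_bus, susceptance in p.u.)
        lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]
        p = np.array([1.5, -0.5, -1.0])   # net injections (gen - load), sum to zero

        n = 3
        B = np.zeros((n, n))              # DC power-flow susceptance matrix
        for i, j, b in lines:
            B[i, i] += b; B[j, j] += b
            B[i, j] -= b; B[j, i] -= b

        # bus 0 is the slack: fix theta_0 = 0 and solve the reduced system
        theta = np.zeros(n)
        theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])

        # line flow f_ij = b_ij * (theta_i - theta_j); compare with thermal ratings
        for i, j, b in lines:
            print(f"flow {i}->{j}: {b * (theta[i] - theta[j]):+.3f} p.u.")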

  3. Data and analytics to inform energy retrofit of high performance buildings

    International Nuclear Information System (INIS)

    Hong, Tianzhen; Yang, Le; Hill, David; Feng, Wei

    2014-01-01

    Highlights: • High performance buildings can be retrofitted using measured data and analytics. • Data on energy use, system operating conditions and environmental conditions are needed. • An energy data model based on the ISO Standard 12655 is key for energy benchmarking. • Three types of analytics are used: energy profiling, benchmarking, and diagnostics. • The case study shows 20% of electricity can be saved by retrofit. - Abstract: Buildings consume more than one-third of the world’s primary energy. Reducing energy use in buildings with energy efficient technologies is feasible and also driven by energy policies such as energy benchmarking, disclosure, rating, and labeling in both the developed and developing countries. Current energy retrofits focus on the existing building stocks, especially older buildings, but the growing number of new high performance buildings built around the world raises the question of how these buildings perform and whether there are retrofit opportunities to further reduce their energy use. This is a new and unique problem for the building industry. Traditional energy audit or analysis methods are inadequate to look deep into the energy use of high performance buildings. This study aims to tackle this problem with a new holistic approach powered by building performance data and analytics. First, three types of measured data are introduced, including time-series energy use, building system operating conditions, and indoor and outdoor environmental parameters. An energy data model based on the ISO Standard 12655 is used to represent the energy use in buildings in a three-level hierarchy. Second, a suite of analytics is proposed to analyze energy use and to identify retrofit measures for high performance buildings. The data-driven analytics are based on monitored data at short time intervals, and cover three levels of analysis – energy profiling, benchmarking and diagnostics. Third, the analytics were applied to a high

  4. Methods of calculating the post-closure performance of high-level waste repositories

    Energy Technology Data Exchange (ETDEWEB)

    Ross, B. (ed.)

    1989-02-01

    This report is intended as an overview of post-closure performance assessment methods for high-level radioactive waste repositories and is designed to give the reader a broad sense of the state of the art of this technology. As described here, "the state of the art" includes only what has been reported in report, journal, and conference proceedings literature through August 1987. There is a very large literature on the performance of high-level waste repositories. In order to make a review of this breadth manageable, its scope must be carefully defined. The essential principle followed is that only methods of calculating the long-term performance of waste repositories are described. The report is organized to reflect, in a generalized way, the logical order of steps that would be taken in a typical performance assessment. Chapter 2 describes ways of identifying scenarios and estimating their probabilities. Chapter 3 presents models used to determine the physical and chemical environment of a repository, including models of heat transfer, radiation, geochemistry, rock mechanics, brine migration, radiation effects on chemistry, and coupled processes. The next two chapters address the performance of specific barriers to release of radioactivity. Chapter 4 treats engineered barriers, including containers, waste forms, backfills around waste packages, shaft and borehole seals, and repository design features. Chapter 5 discusses natural barriers, including ground water systems and stability of salt formations. The final chapters address topics of general applicability to performance assessment models. Methods of sensitivity and uncertainty analysis are described in Chapter 6, and natural analogues of repositories are treated in Chapter 7. 473 refs., 19 figs., 2 tabs.

  5. Methods of calculating the post-closure performance of high-level waste repositories

    International Nuclear Information System (INIS)

    Ross, B.

    1989-02-01

    This report is intended as an overview of post-closure performance assessment methods for high-level radioactive waste repositories and is designed to give the reader a broad sense of the state of the art of this technology. As described here, "the state of the art" includes only what has been reported in report, journal, and conference proceedings literature through August 1987. There is a very large literature on the performance of high-level waste repositories. In order to make a review of this breadth manageable, its scope must be carefully defined. The essential principle followed is that only methods of calculating the long-term performance of waste repositories are described. The report is organized to reflect, in a generalized way, the logical order of steps that would be taken in a typical performance assessment. Chapter 2 describes ways of identifying scenarios and estimating their probabilities. Chapter 3 presents models used to determine the physical and chemical environment of a repository, including models of heat transfer, radiation, geochemistry, rock mechanics, brine migration, radiation effects on chemistry, and coupled processes. The next two chapters address the performance of specific barriers to release of radioactivity. Chapter 4 treats engineered barriers, including containers, waste forms, backfills around waste packages, shaft and borehole seals, and repository design features. Chapter 5 discusses natural barriers, including ground water systems and stability of salt formations. The final chapters address topics of general applicability to performance assessment models. Methods of sensitivity and uncertainty analysis are described in Chapter 6, and natural analogues of repositories are treated in Chapter 7. 473 refs., 19 figs., 2 tabs.

  6. Ground Glass Pozzolan in Conventional, High, and Ultra-High Performance Concrete

    OpenAIRE

    Tagnit-Hamou Arezki; Zidol Ablam; Soliman Nancy; Deschamps Joris; Omran Ahmed

    2018-01-01

    Ground-glass pozzolan (G), obtained by grinding mixed-waste glass to the same fineness as cement, can act as a supplementary cementitious material (SCM), given that it is an amorphous and pozzolanic material. G showed promising performance in different concrete types such as conventional concrete (CC), high-performance concrete (HPC), and ultra-high performance concrete (UHPC). The current paper reports on the characteristics and performance of G in these concrete types. The use of G pro...

  7. Living high-training low: effect on erythropoiesis and aerobic performance in highly-trained swimmers

    DEFF Research Database (Denmark)

    Robach, P.; Schmitt, L.; Brugniaux, J.V.

    2006-01-01

    The "living high-training low" model (LHTL), i.e., training in normoxia but sleeping/living in hypoxia, is designed to improve the athlete's performance. However, LHTL efficacy still remains controversial and also little is known about the duration of its potential benefit. This study tested whether LHTL enhances aerobic performance in athletes, and if any positive effect may last for up to 2 weeks after the LHTL intervention. Eighteen swimmers trained for 13 days at 1,200 m while sleeping/living at 1,200 m in ambient air (control, n=9) or in hypoxic rooms (LHTL, n=9, 5 days at a simulated altitude of 2,500 m followed by 8 days at a simulated altitude of 3,000 m, 16 h day(-1)). Measures were done before, 1-2 days after (POST-1), and 2 weeks after the intervention (POST-15). Aerobic performance was assessed from two swimming trials, exploring VO2max and endurance performance (2,000-m time trial), respectively...

  8. Beyond Performance: A Motivational Experiences Model of Stereotype Threat

    Science.gov (United States)

    Thoman, Dustin B.; Smith, Jessi L.; Brown, Elizabeth R.; Chase, Justin; Lee, Joo Young K.

    2013-01-01

    The contributing role of stereotype threat (ST) to learning and performance decrements for stigmatized students in highly evaluative situations has been vastly documented and is now widely known by educators and policy makers. However, recent research illustrates that underrepresented and stigmatized students’ academic and career motivations are influenced by ST more broadly, particularly through influences on achievement orientations, sense of belonging, and intrinsic motivation. Such a focus moves conceptualizations of ST effects in education beyond the influence on a student’s performance, skill level, and feelings of self-efficacy per se to experiencing greater belonging uncertainty and lower interest in stereotyped tasks and domains. These negative experiences are associated with important outcomes such as decreased persistence and domain identification, even among students who are high in achievement motivation. In this vein, we present and review support for the Motivational Experience Model of ST, a self-regulatory model framework for integrating research on ST, achievement goals, sense of belonging, and intrinsic motivation to make predictions for how stigmatized students’ motivational experiences are maintained or disrupted, particularly over long periods of time. PMID:23894223

  9. Performance analysis of high-concentrated multi-junction solar cells in hot climate

    Science.gov (United States)

    Ghoneim, Adel A.; Kandil, Kandil M.; Alzanki, Talal H.; Alenezi, Mohammad R.

    2018-03-01

    Multi-junction concentrator solar cells are a promising technology as they can fulfill the increasing energy demand with renewable sources. Focusing sunlight upon the aperture of multi-junction photovoltaic (PV) cells can generate much greater power densities than conventional PV cells. Concentrated PV multi-junction solar cells therefore offer a promising way towards achieving minimum cost per kilowatt-hour. However, these cells have many aspects that must be addressed for them to be feasible for large-scale energy generation. In this work, a model is developed to analyze the impact of various atmospheric factors on concentrator PV performance. A single-diode equivalent circuit model is developed to examine multi-junction cell performance in hot weather conditions, considering the impacts of both temperature and concentration ratio. The impacts of spectral variations of irradiance on the annual performance of various high-concentrated photovoltaic (HCPV) panels are examined, using spectra simulated with the SMARTS model. Also, the diode shunt resistance, neglected in existing models, is considered in the present model. The present results are validated against measurements from published data to within 2% accuracy. The predictions show that the single-diode model considering the shunt resistance gives accurate and reliable results. Aerosol optical depth (AOD) and air mass are the most important atmospheric parameters, having a significant impact on HCPV cell performance. In addition, the electrical efficiency (η) increases with concentration up to a certain concentration ratio, after which it decreases. Finally, based on the model predictions, the present model can be properly adapted to examine HCPV cell performance over a broad range of operating conditions.
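
    The single-diode equivalent circuit referred to above is implicit in the current: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh, with the shunt term retained. A sketch of how such a model can be evaluated numerically, with illustrative parameter values rather than those fitted in the paper:

        import numpy as np
        from scipy.optimize import brentq

        def cell_current(V, Iph, I0=1e-9, Rs=0.02, Rsh=15.0, n=3.5, T=340.0):
            """Solve the implicit single-diode equation for I at voltage V.
            n lumps the multi-junction stack into one ideality factor (assumption)."""
            k, q = 1.380649e-23, 1.602176634e-19
            Vt = k * T / q
            f = lambda I: (Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0)
                           - (V + I * Rs) / Rsh - I)
            # f is strictly decreasing in I; bracket from V + I*Rs = 0 (f > 0)
            # down to I slightly above Iph (f < 0)
            return brentq(f, -V / Rs, Iph + 1.0)

        # I-V sweep at ~500 suns: photocurrent scales with concentration ratio
        Iph = 0.014 * 500                       # A, illustrative
        V = np.linspace(0.0, 2.3, 47)           # stay below open-circuit voltage
        I = np.array([cell_current(v, Iph) for v in V])
        P = V * I
        print("Pmax =", P.max(), "W at V =", V[P.argmax()], "V")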

  10. Strategic Culture Change: The Door to Achieving High Performance and Inclusion.

    Science.gov (United States)

    Miller, Frederick A.

    1998-01-01

    Presents diversity as a resource to create a high performing work culture that enables all employees to do their best work. Distinguishes between diversity and inclusion, describes a model for diagnosing an organization's culture, sets forth steps for implementing an organizational change, and discusses the human resource professional's role.…

  11. Development of low-cost high-performance multispectral camera system at Banpil

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, enabling deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high performance imaging system and their forecast cost structure is presented.

  12. Optimised performance of industrial high resolution computerised tomography

    International Nuclear Information System (INIS)

    Maangaard, M.

    2000-01-01

    The purpose of non-destructive evaluation (NDE) is to acquire knowledge of the investigated sample. Digital X-ray imaging techniques such as radiography or computerised tomography (CT) produce images of the interior of a sample. The obtained image quality determines the possibility of detecting sample-related features, e.g. details and flaws. This thesis presents a method of optimising the performance of industrial X-ray equipment for the imaging task at issue in order to obtain high-quality images. CT produces maps of the X-ray linear attenuation of the sample's interior. CT can produce two-dimensional cross-section images or three-dimensional images with volumetric information on the investigated sample. The image contrast and noise depend on both the investigated sample and the equipment and settings used (X-ray tube potential, X-ray filtration, exposure time, etc.). Hence, it is vital to find the optimal equipment settings in order to obtain images of high quality. To be able to mathematically optimise the image quality, it is necessary to have a model of the X-ray imaging system together with an appropriate measure of image quality. The optimisation is performed with a developed model for an X-ray image-intensifier-based radiography system. The model predicts the mean value and variance of the measured signal level in the collected radiographic images. The traditionally used measure of physical image quality is the signal-to-noise ratio (SNR). To calculate the signal-to-noise ratio, a well-defined detail (flaw) is required. It was found that maximising the SNR leads to ambiguities: the optimised settings found by maximising the SNR depended on the material in the detail. When CT is performed on irregularly shaped samples containing density and compositional variations, it is difficult to define which SNR to use for optimisation. This difficulty is solved by the measures of physical image quality proposed here, the ratios geometry

  13. Developing Performance Management in State Government: An Exploratory Model for Danish State Institutions

    DEFF Research Database (Denmark)

    Nielsen, Steen; Rikhardsson, Pall M.

    The question remains how and if accounting departments in central government can deal with these challenges. This exploratory study proposes and tests a model depicting different areas, elements and characteristics within government accounting departments and their association with a perceived performance management model. The findings are built on a questionnaire study of 45 high-level accounting officers in central governmental institutions. Our statistical model consists of five explored constructs: improvements, initiatives and reforms, incentives and contracts, the use of management accounting practices, and cost allocations, and their relations to performance management. Findings based on structural equation modelling and partial least squares regression (PLS) indicate a positive effect on the latent dependent variable, called performance management results. The models/theories explain a significant

  14. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32-record negative dataset of MiRTif to our testing set (MiRecords) ...

  15. Characterisation of current and future GNSS performance in urban canyons using a high quality 3-D urban model of Melbourne, Australia

    Science.gov (United States)

    Gang-jun, Liu; Kefei, Zhang; Falin, Wu; Liam, Densley; Retscher, Günther

    2009-03-01

    Global Navigation Satellite System (GNSS) is a critical space-borne geospatial infrastructure providing essential positioning supports to a range of location-sensitive applications. GNSS is currently dominated by the US Global Positioning System (GPS) constellation. The next generation GNSS is expected to offer more satellites, better positioning provision, and improved availability and continuity of navigation support. However, GNSS performance in 3-D urban environments is problematic because GNSS signals are either completely blocked or severely degraded by high-rising geographic features like buildings. The aim of this study is to gain an in-depth understanding of the changing spatial patterns of GNSS performance, measured by the number of visible satellites (NVS) and position dilution-of-precision (PDOP), in the urban canyons of Melbourne, Australia. The methodology used includes the following steps: (1) determination of the dynamic orbital positions of current and future GNSS satellites; (2) development of a 3-D urban model of high geometric quality for Melbourne Central Business District (CBD); (3) evaluation of GNSS performance for every specified location in the urban canyons; and (4) visualisation and characterisation of the dynamic spatial patterns of GNSS performance in the urban canyons. As expected, the study shows that the integration of the GPS and Galileo constellations results in higher availability and stronger geometry, leading to significant improvement of GNSS performance in urban canyons of Melbourne CBD. Some conclusions are drawn and further research currently undertaken is also outlined.
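
    NVS and PDOP follow directly from the satellite geometry that survives the 3-D building blockage test: each visible satellite contributes one row to the geometry matrix, and PDOP is the root of the spatial trace of the resulting cofactor matrix. A minimal sketch (unit vectors assumed already screened against the urban model):

        import numpy as np

        def pdop(unit_vectors):
            """unit_vectors: (N, 3) receiver-to-satellite unit vectors for the
            N visible satellites. Returns PDOP, or inf for degenerate geometry."""
            u = np.asarray(unit_vectors, dtype=float)
            if u.shape[0] < 4:
                return np.inf
            G = np.hstack([-u, np.ones((u.shape[0], 1))])  # rows: [-ux, -uy, -uz, 1]
            Q = np.linalg.inv(G.T @ G)                     # cofactor matrix
            return float(np.sqrt(np.trace(Q[:3, :3])))     # spatial components only

        # toy constellation: four satellites spread across the visible sky
        els = np.radians([80.0, 40.0, 35.0, 30.0])         # elevations
        azs = np.radians([0.0, 90.0, 200.0, 310.0])        # azimuths
        u = np.column_stack([np.cos(els) * np.sin(azs),
                             np.cos(els) * np.cos(azs),
                             np.sin(els)])
        print("NVS =", len(u), " PDOP =", pdop(u))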

  16. Local Electric Field Facilitates High-Performance Li-Ion Batteries.

    Science.gov (United States)

    Liu, Youwen; Zhou, Tengfei; Zheng, Yang; He, Zhihai; Xiao, Chong; Pang, Wei Kong; Tong, Wei; Zou, Youming; Pan, Bicai; Guo, Zaiping; Xie, Yi

    2017-08-22

    By scrutinizing the energy storage process in Li-ion batteries, tuning Li-ion migration behavior by atomic-level tailoring will unlock great potential for pursuing higher electrochemical performance. Vacancies, which can effectively modulate the electrical ordering on the nanoscale, even in tiny concentrations, provide tempting opportunities for manipulating Li-ion migratory behavior. Herein, taking CuGeO3 as a model, oxygen vacancies obtained by reducing the thickness dimension down to the atomic scale are introduced in this work. As Li-ion storage progresses, the imbalanced charge distribution emerging around the oxygen vacancies can induce a local built-in electric field, which accelerates the ions' migration rate by Coulomb forces and thus benefits high-rate performance. Furthermore, the thus-obtained CuGeO3 ultrathin nanosheets (CGOUNs)/graphene van der Waals heterojunctions are used as anodes in Li-ion batteries, which deliver a reversible specific capacity of 1295 mAh g⁻¹ at 100 mA g⁻¹, with improved rate capability and cycling performance compared to their bulk counterpart. Our findings build a clear connection between the atomic/defect/electronic structure and intrinsic properties for designing high-efficiency electrode materials.

  17. Predicting the Consequences of Workload Management Strategies with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Samma, Charneta

    2011-01-01

    Human performance modelers at the US Army Research Laboratory have developed an approach for establishing Soldier high workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next, they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the control and alternative design conditions to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool, IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools or conducting experiments or field tests can use the same approach.

  18. Understanding the Implementation of Knowledge Management in High-Performance Schools in Malaysia

    Directory of Open Access Journals (Sweden)

    Rahmad Sukor Ab. Samad

    2014-12-01

    Full Text Available This study intends to assess the implementation of policies in high-performance schools (HPS). One hundred fifty-two administrators in 52 HPS were selected using full sampling. Only two factors, school culture and school strategy, serve as contributors in the knowledge management model for high-performing schools in Malaysia. The correlation analysis indicated that all 10 factors, namely mission and vision, school strategy, school culture, intellectual capital, learning organization, leadership management, teamwork and learning community, knowledge sharing, new knowledge generation, and digital advancement, have significant relationships, at different levels, with the understanding of knowledge management.

  19. Models for Experimental High Density Housing

    Science.gov (United States)

    Bradecki, Tomasz; Swoboda, Julia; Nowak, Katarzyna; Dziechciarz, Klaudia

    2017-10-01

    The article presents the effects of research on models of high density housing. The authors present urban projects for experimental high density housing estates. The design was based on research performed on 38 examples of similar housing in Poland that have been built after 2003. Some of the case studies show extreme density, and that inspired the researchers to test individual virtual solutions that would answer the question: How far can we push the limits? The experimental housing projects show strengths and weaknesses of design driven only by such indexes as FAR (floor area ratio - housing density) and DPH (dwellings per hectare). Although such projects are implemented, the authors believe that there are reasons for limits, since high index values may be in contradiction to the optimum character of the housing environment. Virtual models on virtual plots presented by the authors were oriented toward maximising the DPH index and the DAI (dwellings area index), which is very often the main driver for developers. The authors also raise the question of the sustainability of such solutions. The research was carried out in the URBAN model research group (Gliwice, Poland) that consists of academic researchers and architecture students. The models reflect architectural and urban regulations that are valid in Poland. Conclusions might be helpful for urban planners, urban designers, developers, architects and architecture students.

  20. Using Performance Assessment Model in Physics Laboratory to Increase Students’ Critical Thinking Disposition

    Science.gov (United States)

    Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.

    2017-09-01

    A Performance Assessment Model (PAM) has been developed to represent physics concepts, divided into five experiments: 1) acceleration due to gravity; 2) Hooke's law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of PAM in the physics laboratory to increasing students' Critical Thinking Disposition (CTD) at senior high school. The subjects of the study were 32 students in the 11th grade of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD, and were analyzed quantitatively using the N-gain value. The study concluded that the performance assessment model effectively increases the N-gain, at the medium category, meaning that students' critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.
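
    The N-gain referred to here is Hake's normalized gain, g = (post - pre) / (max - pre), with 0.3 ≤ g < 0.7 conventionally labelled "medium". A small sketch of the computation, with hypothetical scores:

        import numpy as np

        def n_gain(pre, post, max_score=100.0):
            """Hake's normalized gain per student: (post - pre) / (max - pre)."""
            pre = np.asarray(pre, dtype=float)
            post = np.asarray(post, dtype=float)
            return (post - pre) / (max_score - pre)

        def category(g):
            return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

        pre = np.array([40, 55, 35, 60])      # hypothetical pretest scores
        post = np.array([70, 80, 65, 85])     # hypothetical posttest scores
        g = n_gain(pre, post).mean()
        print(f"mean N-gain = {g:.2f} ({category(g)})")   # lands in "medium"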

  1. Code structure for U-Mo fuel performance analysis in high performance research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Gwan Yoon; Cho, Tae Won; Lee, Chul Min; Sohn, Dong Seong [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Kyu Hong; Park, Jong Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    A performance analysis model applicable to research reactor fuel is being developed with available models describing the fuel performance phenomena observed in in-pile tests. We established the calculation algorithm and scheme to best predict fuel performance using a radio-thermo-mechanically coupled system that considers fuel swelling, interaction layer growth, pore formation in the fuel meat, creep fuel deformation, mass relocation, etc. In this paper, we present the general structure of the performance analysis code for typical research reactor fuel and advanced features such as a model to predict fuel failure induced by a combination of breakaway swelling and pore growth in the fuel meat. A thermo-mechanical code dedicated to the modeling of U-Mo dispersion fuel plates is under development in Korea to satisfy the demand for advanced performance analysis and safety assessment of the plates. The major physical phenomena during irradiation are considered in the code, such as interaction layer formation by fuel-matrix interdiffusion, fission-induced swelling of fuel particles, mass relocation by fission-induced stress, and pore formation at the interface between the reaction product and the Al matrix.

  2. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  3. Advanced high performance solid wall blanket concepts

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advances in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is 14 MeV neutron irradiation facilities for the generation of the necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  4. High-performance OPCPA laser system

    International Nuclear Information System (INIS)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  5. High-performance OPCPA laser system

    Energy Technology Data Exchange (ETDEWEB)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  6. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available major data collections of more than 10 PB from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  7. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    Science.gov (United States)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optically controlled field effect transistor (JL-OCFET) and its comprehensive theoretical model are proposed to achieve high optical performance and a low cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design exhibits an outstanding capability to be an alternative to the IM-OCFET due to the high performance and the weak signal detection benefit offered by this design. Moreover, the developed analytical expressions are exploited to formulate the objective functions to optimize the device performance using a Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal-to-noise ratio, low power consumption, high sensitivity, high I_ON/I_OFF ratio and high detectivity compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low cost and ultrasensitive photodetectors for high-performance and low cost inter-chip data communication applications.
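
    The genetic-algorithm step described above amounts to evolving candidate parameter vectors against an objective built from the analytical device model. A compact, generic GA sketch with a placeholder objective and hypothetical parameter bounds (not the paper's actual figures of merit):

        import numpy as np

        rng = np.random.default_rng(1)
        lo = np.array([0.1, 1e16, 10.0])   # hypothetical bounds: length (um),
        hi = np.array([2.0, 1e19, 200.0])  # doping (cm-3), thickness (nm)

        def fitness(x):
            L, Nd, t = x                   # placeholder for e.g. responsivity
            return np.log10(Nd) / (1.0 + L) - 0.01 * t

        pop = lo + rng.random((40, 3)) * (hi - lo)
        for gen in range(100):
            f = np.array([fitness(x) for x in pop])
            # tournament selection: better of two random individuals survives
            idx = rng.integers(0, len(pop), (len(pop), 2))
            parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
            # uniform crossover between paired parents, then Gaussian mutation
            mask = rng.random(pop.shape) < 0.5
            children = np.where(mask, parents, parents[::-1])
            children += rng.normal(0.0, 0.02, pop.shape) * (hi - lo)
            pop = np.clip(children, lo, hi)

        best = pop[np.argmax([fitness(x) for x in pop])]
        print("best parameter vector:", best)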

  8. Job stress models, depressive disorders and work performance of engineers in microelectronics industry.

    Science.gov (United States)

    Chen, Sung-Wei; Wang, Po-Chuan; Hsin, Ping-Lung; Oates, Anthony; Sun, I-Wen; Liu, Shen-Ing

    2011-01-01

    Microelectronic engineers are considered valuable human capital contributing significantly toward economic development, but they may encounter stressful work conditions in the context of a globalized industry. The study aims at identifying risk factors for depressive disorders, primarily based on the job stress models, the Demand-Control-Support and Effort-Reward Imbalance models, and at evaluating whether depressive disorders impair work performance in microelectronics engineers in Taiwan. The case-control study was conducted among 678 microelectronics engineers: 452 controls and 226 cases with depressive disorders, which were defined by a score of 17 or more on the Beck Depression Inventory and a psychiatrist's diagnosis. The self-administered questionnaires included the Job Content Questionnaire, the Effort-Reward Imbalance Questionnaire, demography, psychosocial factors, health behaviors and work performance. Hierarchical logistic regression was applied to identify risk factors of depressive disorders. Multivariate linear regressions were used to determine factors affecting work performance. By hierarchical logistic regression, the risk factors of depressive disorders are high demands, low work social support, a high effort/reward ratio and a low frequency of physical exercise. Combining the two job stress models may have better predictive power for depressive disorders than adopting either model alone. Three multivariate linear regressions provide similar results indicating that depressive disorders are associated with impaired work performance in terms of absence, role limitation and social functioning limitation. The results may provide insight into the applicability of job stress models in a globalized high-tech industry largely concentrated in non-Western countries, and into the design of workplace preventive strategies for depressive disorders in the Asian electronics engineering population.

  9. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
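
    Of the building blocks summarized here, the look-up table is the simplest to make concrete: quantized OCs index directly into stored report probabilities. A schematic example with hypothetical OC axes and numbers:

        from itertools import product

        # quantized operating conditions (hypothetical axes and bins)
        ranges, clutter, obscuration = ["near", "far"], ["low", "high"], ["none", "partial"]

        # P(correct report | OC) -- illustrative values, e.g. estimated from trials
        table = {
            ("near", "low", "none"): 0.95, ("near", "low", "partial"): 0.80,
            ("near", "high", "none"): 0.85, ("near", "high", "partial"): 0.60,
            ("far", "low", "none"): 0.75, ("far", "low", "partial"): 0.55,
            ("far", "high", "none"): 0.60, ("far", "high", "partial"): 0.35,
        }

        def apm(oc):
            """Map an explicit quantized OC to a report distribution."""
            p = table[oc]
            return {"correct": p, "incorrect": 1.0 - p}

        for oc in product(ranges, clutter, obscuration):
            print(oc, apm(oc))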

  10. Building and Running the Yucca Mountain Total System Performance Model in a Quality Environment

    International Nuclear Information System (INIS)

    D.A. Kalinich; K.P. Lee; J.A. McNeish

    2005-01-01

    A Total System Performance Assessment (TSPA) model has been developed to support the Safety Analysis Report (SAR) for the Yucca Mountain High-Level Waste Repository. The TSPA model forecasts repository performance over a 20,000-year simulation period. It has a high degree of complexity due to the complexity of its underlying process and abstraction models. This is reflected in the size of the model (a 27,000-element GoldSim file), its use of dynamic-linked libraries (14 DLLs), the number and size of its input files (659 files totaling 4.7 GB), and the number of model input parameters (2541 input database entries). TSPA model development and subsequent simulations with the final version of the model were performed to a set of Quality Assurance (QA) procedures. Due to the complexity of the model, comments on previous TSPAs, and the number of analysts involved (22 analysts in seven cities across four time zones), additional controls for the entire life-cycle of the TSPA model, including management, physical, model change, and input controls, were developed and documented. These controls did not replace the QA procedures; rather, they provided guidance for implementing the requirements of the QA procedures with the specific intent of ensuring that the model development process and the simulations performed with the final version of the model had sufficient checking, traceability, and transparency. Management controls were developed to ensure that only management-approved changes were implemented into the TSPA model and that only management-approved model runs were performed. Physical controls were developed to track the use of prototype software and preliminary input files, and to ensure that only qualified software and inputs were used in the final version of the TSPA model. In addition, a system was developed to name, file, and track development versions of the TSPA model as well as simulations performed with the final version of the model.

  11. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  12. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    Full text: A fusion hybrid, or a low-power tokamak reactor with a small fusion power output, is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small fusion power output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical Mixed Bohm/gyro-Bohm (B/gB) model, whereas the pedestal temperature model is based on magnetic and flow shear stabilization pedestal width scaling (Δ ∝ ρs²). The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization approach is carried out by varying parameters such as the plasma current and the auxiliary heating power, which results in some improvement of plasma performance.

  13. Academic motivation, self-concept, engagement, and performance in high school: key processes from a longitudinal perspective.

    Science.gov (United States)

    Green, Jasmine; Liem, Gregory Arief D; Martin, Andrew J; Colmar, Susan; Marsh, Herbert W; McInerney, Dennis

    2012-10-01

    The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation and self-concept positively predicted attitudes toward school; (b) attitudes toward school positively predicted class participation and homework completion and negatively predicted absenteeism; and (c) class participation and homework completion positively predicted test performance whilst absenteeism negatively predicted test performance. Taken together, these findings provide support for the relevance of the self-system model and, particularly, the importance of examining the dynamic relationships amongst engagement factors of the model. The study highlights implications for educational and psychological theory, measurement, and intervention. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  14. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  16. Team Development for High Performance Management.

    Science.gov (United States)

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  17. High performance as-grown and annealed high band gap tunnel junctions: Te behavior at the interface

    Energy Technology Data Exchange (ETDEWEB)

    Bedair, S. M., E-mail: bedair@ncsu.edu; Harmon, Jeffrey L.; Carlin, C. Zachary; Hashem Sayed, Islam E.; Colter, P. C. [Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina 27695 (United States)

    2016-05-16

    The performance of n⁺-InGaP(Te)/p⁺-AlGaAs(C) high band gap tunnel junctions (TJs) is critical for achieving high efficiency in multijunction photovoltaics. Several limitations of as-grown and annealed TJs can be attributed to the Te doping of InGaP and its behavior at the junction interface. Te atoms in InGaP tend to attach at step edges, resulting in a Te memory effect. In this work, we use the peak tunneling current (J_pk) in this TJ as a diagnostic tool to study the behavior of the Te dopant at the TJ interface. Additionally, we used our understanding of Te behavior at the interface, guided by device modeling, to modify the Te source shut-off procedure and the growth rate. These modifications led to a record performance for both the as-grown (2000 A/cm²) and annealed (1000 A/cm²) high band gap tunnel junctions.

  18. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    Science.gov (United States)

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 9th- and 10th-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…

  19. Early age stress-crack opening relationships for high performance concrete

    DEFF Research Database (Denmark)

    Østergaard, Lennart; Lange, David A.; Stang, Henrik

    2004-01-01

    Stress–crack opening relationships for concrete at early age have been determined for two high performance concrete mixes with water to cementitious materials ratios of 0.307 and 0.48. The wedge splitting test setup was used experimentally, and the cracked nonlinear hinge model based on the fictitious crack model was applied for the interpretation of the results. A newly developed inverse analysis algorithm was utilized for the extraction of the stress–crack opening relationships. Experiments were conducted at 8, 10, 13, 17, 22, 28, 48, 168 h (7 days) and 672 h (28 days). At the same ages…

  20. Modeling High Frequency Data with Long Memory and Structural Change: A-HYEGARCH Model

    Directory of Open Access Journals (Sweden)

    Yanlin Shi

    2018-03-01

    In this paper, we propose an Adaptive Hyperbolic EGARCH (A-HYEGARCH) model to estimate the long memory of high frequency time series with potential structural breaks. Based on the original HYGARCH model, we use the logarithm transformation to ensure the positivity of conditional variance. Structural change is further allowed for via a flexible time-dependent intercept in the conditional variance equation. To demonstrate its effectiveness, we perform a range of Monte Carlo studies considering various data generating processes with and without structural changes. Empirical testing of the A-HYEGARCH model is also conducted using high frequency returns of the S&P 500, FTSE 100, ASX 200 and Nikkei 225. Our simulation and empirical evidence demonstrate that the proposed A-HYEGARCH model outperforms various competing specifications and can effectively control for structural breaks. Therefore, our model may provide more reliable estimates of long memory and could be a widely useful tool for modelling financial volatility in other contexts.
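
    As a minimal illustration of the mechanism the abstract describes (a log-variance recursion kept positive by construction, with a time-dependent intercept absorbing structural change), the sketch below implements an EGARCH(1,1)-style filter. It is a simplified stand-in: the hyperbolic long-memory component of A-HYEGARCH is omitted, and the linear intercept drift and all parameter values are illustrative assumptions.

        import numpy as np

        def egarch_filter(r, omega0, omega1, alpha, beta, gamma):
            # Log-variance recursion; exponentiating at the end guarantees
            # positive conditional variances, mirroring the paper's log
            # transformation. omega(t) = omega0 + omega1*t/T is a crude
            # stand-in for the adaptive structural-change intercept.
            T = len(r)
            log_s2 = np.empty(T)
            log_s2[0] = np.log(np.var(r))
            e_abs = np.sqrt(2.0 / np.pi)  # E|z| for standard normal z
            for t in range(1, T):
                z = r[t - 1] / np.exp(0.5 * log_s2[t - 1])
                omega_t = omega0 + omega1 * t / T
                log_s2[t] = (omega_t + beta * log_s2[t - 1]
                             + alpha * (abs(z) - e_abs) + gamma * z)
            return np.exp(log_s2)

        # e.g. filter a simulated return series:
        sigma2 = egarch_filter(np.random.standard_normal(1000),
                               omega0=-0.1, omega1=0.05, alpha=0.1,
                               beta=0.95, gamma=-0.05)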

  1. Rotary engine performance limits predicted by a zero-dimensional model

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

    A parametric study was performed to determine the performance limits of a rotary combustion engine. The study shows how much increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine; performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engine performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980's. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer program since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single-rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  2. Advancing Replicable Solutions for High-Performance Homes in the Southeast

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, S. G. [Partnership for Home Innovation, Upper Marlboro, MD (United States). Southface Energy Inst.; Sweet, M. L. [Partnership for Home Innovation, Upper Marlboro, MD (United States). Southface Energy Inst.; Francisco, A. [Partnership for Home Innovation, Upper Marlboro, MD (United States). Southface Energy Inst.

    2016-03-01

    The work presented in this report advances the goals of the U.S. Department of Energy Building America program by improving the energy performance of affordable and market-rate housing. Southface Energy Institute (Southface), part of the U.S. Department of Energy Building America research team Partnership for Home Innovation, worked with owners and builders with various market constraints and ultimate goals for three projects in three climate zones (CZs): Savannah Gardens in Savannah, Georgia (CZ 2); JMC Patrick Square in Clemson, South Carolina (CZ 3); and LaFayette in LaFayette, Georgia (CZ 4). This report documents the design process, computational energy modeling, construction, envelope performance metrics, long-term monitoring results, and successes and failures of the design and execution of these high-performance homes.

  3. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  4. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany, a forum to discuss the latest advancements in parallel tools.

  5. Performance prediction of a proton exchange membrane fuel cell using the ANFIS model

    Energy Technology Data Exchange (ETDEWEB)

    Vural, Yasemin; Ingham, Derek B.; Pourkashanian, Mohamed [Centre for Computational Fluid Dynamics, University of Leeds, Houldsworth Building, LS2 9JT Leeds (United Kingdom)

    2009-11-15

    In this study, the performance (current-voltage curve) prediction of a Proton Exchange Membrane Fuel Cell (PEMFC) is performed for different operational conditions using an Adaptive Neuro-Fuzzy Inference System (ANFIS). First, ANFIS is trained with a set of input and output data. The trained model is then tested with an independent set of experimental data. The trained and tested model is then used to predict the performance curve of the PEMFC under various operational conditions. The model shows very good agreement with the experimental data and this indicates that ANFIS is capable of predicting fuel cell performance (in terms of cell voltage) with a high accuracy in an easy, rapid and cost effective way for the case presented. Finally, the capabilities and the limitations of the model for the application in fuel cells have been discussed. (author)
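
    The train/test/predict workflow the abstract describes can be sketched as follows. Since a drop-in open-source ANFIS implementation cannot be assumed, a generic neural regressor stands in for the neuro-fuzzy model and the data arrays are placeholders; only the workflow, not the model class, matches the paper.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        # Placeholder operating conditions (e.g. temperature, pressure,
        # humidity, current density) and measured cell voltages.
        X = np.random.rand(200, 4)
        y = np.random.rand(200)

        # Train on one set of data, then test on an independent set.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        surrogate = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0).fit(X_tr, y_tr)
        print("held-out R^2:", surrogate.score(X_te, y_te))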

  6. Coal-fired high performance power generating system

    Energy Technology Data Exchange (ETDEWEB)

    1992-07-01

    The goals of the program are to develop, by the year 2000, a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency > 47%; NOx, SOx, and particulate emissions < 25% of NSPS levels; cost of electricity 10% lower; coal providing > 65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high nitrogen coals a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  7. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

    Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary, the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach to repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision.

  8. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (1) propose a novel approach for coupling mesoscale and macroscale models, (2) devise efficient numerical methods for simulating the coupled system, and (3) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  9. High Performance Walls in Hot-Dry Climates

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, Marc [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Springer, David [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); Dakin, Bill [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States); German, Alea [Alliance for Residential Building Innovation (ARBI), Davis, CA (United States)

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goals in improving wall thermal performance are increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, and achieving insulation installation criteria that meet ENERGY STAR's thermal bypass checklist. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California in field demonstrations of high performance wall systems. Builders were given incentives and design support in exchange for providing site access for construction observation, cost information, and builder survey feedback. Information from the project was designed to feed into the 2016 Title 24 process, but also to serve as an initial mechanism for engaging builders in more high performance construction strategies. This Building America project utilized information collected in the California project.

  10. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workloads prediction and scheduling, we propose a fast-converging, computationally-efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify the social relationships by using a small portion of incoming jobs and (2) can efficiently track group evolution over time. © 2011 Springer-Verlag.
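
    The power-law claim can be checked with a simple diagnostic: fit a line to the log-log complementary CDF of per-user "follower" degrees in the social graph. The sketch below assumes a degree list has already been extracted from the workload logs; a rigorous analysis would use a maximum-likelihood estimator rather than least squares.

        import numpy as np

        def powerlaw_exponent(degrees):
            # Slope of log CCDF vs log degree; for p(k) ~ k^(-a) the
            # CCDF scales as k^(-(a-1)), so a = 1 - slope.
            degrees = np.sort(np.asarray(degrees, dtype=float))
            ccdf = 1.0 - np.arange(len(degrees)) / len(degrees)
            mask = degrees > 0
            slope, _ = np.polyfit(np.log(degrees[mask]),
                                  np.log(ccdf[mask]), 1)
            return 1.0 - slope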

  11. Flexible nanoscale high-performance FinFETs

    KAUST Repository

    Sevilla, Galo T.

    2014-10-28

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are increasingly desired. At the moment, the FinFET is the most advanced transistor architecture used in state-of-the-art microprocessors. Therefore, we show a soft-etch based substrate thinning process to transform silicon-on-insulator (SOI) based nanoscale FinFETs into flexible FinFETs, and then conduct comprehensive electrical characterization under various bending conditions to understand their electrical performance. Our study shows that the back-etch based substrate thinning process is gentler than the traditional abrasive back-grinding process; it can attain ultra-flexibility, and the electrical characteristics of the flexible nanoscale FinFET show no performance degradation compared to its rigid bulk counterpart, indicating its readiness for flexible high-performance electronics.

  12. Calibrating mechanistic-empirical pavement performance models with an expert matrix

    Energy Technology Data Exchange (ETDEWEB)

    Tighe, S.; AlAssar, R.; Haas, R. [Waterloo Univ., ON (Canada). Dept. of Civil Engineering; Zhiwei, H. [Stantec Consulting Ltd., Cambridge, ON (Canada)

    2001-07-01

    Proper management of pavement infrastructure requires pavement performance modelling. For the past 20 years, the Ontario Ministry of Transportation has used the Ontario Pavement Analysis of Costs (OPAC) system for pavement design. Pavement needs, however, have changed substantially during that time. To address this need, a new research contract is underway to enhance the model and verify the predictions, particularly at extreme points such as low and high traffic volume pavement design. This initiative included a complete evaluation of the existing OPAC pavement design method, the construction of a new set of pavement performance prediction models, and the development of a flexible pavement design procedure that incorporates reliability analysis. The design was also expanded to include rigid pavement designs and a modification of the existing life cycle cost analysis procedure, which includes both agency cost and road user cost. Performance prediction and life-cycle costs were developed based on several factors, including material properties, traffic loads and climate; construction and maintenance schedules were also considered. The methodology for the calibration and validation of a mechanistic-empirical flexible pavement performance model is described. Mechanistic-empirical design methods combine theory-based design, such as calculated stresses, strains or deflections, with empirical methods, where a measured response is associated with thickness and pavement performance. Elastic layer analysis was used to determine pavement response and the most effective design in terms of cumulative Equivalent Single Axle Loads (ESALs), subgrade type and layer thickness. The new mechanistic-empirical model separates the environment and traffic effects on performance. This makes it possible to quantify regional differences between Southern and Northern Ontario. In addition, roughness can be calculated in terms of the International Roughness Index or Riding Comfort Index.

  13. The Role of Performance Management in Creating and Maintaining a High-Performance Organization

    Directory of Open Access Journals (Sweden)

    André A. de Waal

    2015-04-01

    There is still a good deal of confusion in the literature about how the use of a performance management system affects overall organizational performance. Some researchers find that performance management enhances both the financial and non-financial results of an organization, while others do not find any positive effects or, at most, ambiguous effects. An important step toward getting more clarity in this relationship is to investigate the role performance management plays in creating and maintaining a high-performance organization (HPO). The purpose of this study is to integrate performance management analysis (PMA) and high-performance organization (HPO). A questionnaire combining questions on PMA dimensions and HPO factors was administered to two European-based multinational firms. Based on 468 valid questionnaires, a correlation analysis was performed on the PMA dimensions and the HPO factors in order to test the impact of performance management on the factors of high organizational performance. The results show strong and significant correlations between all the PMA dimensions and all the HPO factors, indicating that a performance management system that fosters performance-driven behavior in the organization is of critical importance to strengthening overall financial and non-financial performance.

  14. IMITATION MODEL OF A HIGH-SPEED INDUCTION MOTOR WITH FREQUENCY CONTROL

    Directory of Open Access Journals (Sweden)

    V. E. Pliugin

    2017-12-01

    Full Text Available Purpose. To develop the imitation model of the frequency converter controlled high-speed induction motor with a squirrel-cage rotor in order to determine reasons causes electric motor vibrations and noises in starting modes. Methodology. We have applied the mathematical simulation of electromagnetic field in transient mode and imported obtained field model as an independent object in frequency converter circuit. We have correlated the simulated result with the experimental data obtained by means of the PID regulator factors. Results. We have made the simulation model of the high-speed induction motor with a squirrel-cage rotor speed control in AnsysRMxprt, Ansys Maxwell and Ansys Simplorer, approximated to their physical prototype. We have made models modifications allows to provide high-performance computing (HPC in dedicated server and computer cluster to reduce the simulation time. We have obtained motor characteristics in starting and rated modes. This allows to make recommendations on determination of high-speed electric motor optimal deign, having minimum indexes of vibrations and noises. Originality. For the first time, we have carried out the integrated research of induction motor using simultaneously simulation models both in Ansys Maxwell (2D field model and in Ansys Simplorer (transient circuit model with the control low realization for the motor soft start. For the first time the correlation between stator and rotor slots, allows to obtain minimal vibrations and noises, was defined. Practical value. We have tested manufactured high-speed motor based on the performed calculation. The experimental studies have confirmed the adequacy of the model, which allows designing such motors for new high-speed construction, and upgrade the existing ones.

  15. Development of high-performance transparent conducting oxides and their impact on the performance of CdS/CdTe solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Coutts, T.J.; Wu, X.; Sheldon, P.; Rose, D.H. [National Renewable Energy Lab., Golden, CO (United States)

    1998-09-01

    This paper begins with a review of the modeled performance of transparent conducting oxides (TCOs) as a function of their free-carrier concentration, mobility, and film thickness. It is shown that it is vital to make a film with high mobility to minimize the width and height of the free-carrier absorption band, and to optimize the optical properties. The free-carrier concentration must be kept sufficiently small that the absorption band does not extend into that part of the spectrum to which the solar cell responds. Despite this consideration, a high electrical conductivity is essential to minimize series resistance losses. Hence, a high mobility is vital for these materials. The fabrication of thin-films of cadmium stannate is then discussed, and their performance is compared with that of tin oxide, both optically and as these materials influence the performance of CdTe solar cells.
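
    The coupling between carrier concentration and free-carrier absorption follows from the Drude model: the absorption band sets in near the plasma wavelength, which moves toward the solar spectrum as the carrier concentration rises. A sketch, with effective mass and background permittivity set to assumed typical TCO values:

        import math

        E = 1.602176634e-19      # elementary charge, C
        EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
        ME = 9.1093837015e-31    # electron rest mass, kg
        C = 2.99792458e8         # speed of light, m/s

        def plasma_wavelength_um(n_cm3, m_eff=0.3, eps_inf=4.0):
            # Drude plasma wavelength; m_eff and eps_inf are assumptions.
            n = n_cm3 * 1e6  # cm^-3 -> m^-3
            wp = math.sqrt(n * E**2 / (EPS0 * eps_inf * m_eff * ME))
            return 2.0 * math.pi * C / wp * 1e6

        # ~5e20 cm^-3 puts the absorption edge near 1.6 um; keeping n
        # lower (with high mobility preserving conductivity) keeps the
        # band clear of the cell's spectral response.
        print(plasma_wavelength_um(5e20))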

  16. Development of new high-performance stainless steels

    International Nuclear Information System (INIS)

    Park, Yong Soo

    2002-01-01

    This paper focuses on high-performance stainless steels and their development status. The effect of nitrogen addition on super-stainless steel is discussed. Research activities at Yonsei University on austenitic and martensitic high-performance stainless steels and on next-generation duplex stainless steels are introduced.

  17. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 2 project will enhance Loci-STREAM which is a high performance, high fidelity simulation tool already being...

  18. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 1 project will enhance Loci-STREAM which is a high performance, high fidelity simulation tool already being...

  19. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  20. High Burnup Fuel Performance and Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend of nuclear fuel development is to develop high-burnup, high-performance nuclear fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and its superior prediction of fuel performance was proven through the IAEA CRP FUMEX-II program, the INFRA code can be utilized for commercial purposes in industry. The INFRA code was provided to domestic universities and relevant institutes, where it has been utilized usefully, and it has been used as a reference code in industry for the development of an intrinsic fuel rod design code.

  1. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University… concretes, workability, ductility, and confinement problems…

  2. Aging analysis of high performance FinFET flip-flop under Dynamic NBTI simulation configuration

    Science.gov (United States)

    Zainudin, M. F.; Hussin, H.; Halim, A. K.; Karim, J.

    2018-03-01

    A mechanism known as Negative-Bias Temperature Instability (NBTI) degrades the main electrical parameters of a circuit, especially its performance. So far, circuit designs have focused only on high performance without considering circuit reliability and robustness. In this paper, the main performance metrics of a high-performance FinFET flip-flop, such as delay time and power, were studied in the presence of NBTI degradation. The aging analysis was verified using a 16 nm High Performance Predictive Technology Model (PTM) and different commands available in Synopsys HSPICE. The results show that the circuit under longer dynamic NBTI stress exhibits the largest increase in gate delay and decrease in average power, from a fresh simulation through the aged stress time, under nominal conditions. In addition, circuit performance under varied stress conditions, such as temperature and negative gate stress bias, was also studied.

  3. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  4. Edge density profiles in high-performance JET plasmas

    International Nuclear Information System (INIS)

    Summers, D.D.R.; Viaccoz, B.; Vince, J.

    1997-01-01

    Detailed electron density profiles of the scrape-off layer in high-performance JET plasmas (plasma current I_p; neutral beam heating power P_NBI ≈ 17 MW) have been measured by means of a lithium beam diagnostic system featuring high spatial resolution [Kadota (1978)]. Measurements were taken over a period of several seconds, allowing examination of the evolution of the edge profile at a location upstream from the divertor target. The data clearly show the effects of the H-mode transition: an increase in density near the plasma separatrix and a reduction in density scrape-off length. The profiles obtained under various plasma conditions are compared firstly with data from other diagnostics, located elsewhere in the vessel, and also with the predictions of an 'onion-skin' model (DIVIMP), which used, as initial parameters, data from an array of probes located in the divertor target. (orig.)

  6. Promising high monetary rewards for future task performance increases intermediate task performance.

    Science.gov (United States)

    Zedelius, Claire M; Veling, Harm; Bijleveld, Erik; Aarts, Henk

    2012-01-01

    In everyday life contexts and work settings, monetary rewards are often contingent on future performance. Based on research showing that the anticipation of rewards causes improved task performance through enhanced task preparation, the present study tested the hypothesis that the promise of monetary rewards for future performance would not only increase future performance, but also performance on an unrewarded intermediate task. Participants performed an auditory Simon task in which they responded to two consecutive tones. While participants could earn high vs. low monetary rewards for fast responses to every second tone, their responses to the first tone were not rewarded. Moreover, we compared performance under conditions in which reward information could prompt strategic performance adjustments (i.e., when reward information was presented for a relatively long duration) to conditions preventing strategic performance adjustments (i.e., when reward information was presented very briefly). Results showed that high (vs. low) rewards sped up both rewarded and intermediate, unrewarded responses, and the effect was independent of the duration of reward presentation. Moreover, long presentation led to a speed-accuracy trade-off for both rewarded and unrewarded tones, whereas short presentation sped up responses to rewarded and unrewarded tones without this trade-off. These results suggest that high rewards for future performance boost intermediate performance due to enhanced task preparation, and they do so regardless of whether people respond to rewards in a strategic or non-strategic manner.

  7. Modeling Hot-Spot Contributions in Shocked High Explosives at the Mesoscale

    Energy Technology Data Exchange (ETDEWEB)

    Harrier, Danielle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-12

    When looking at the performance of high explosives, the defects within the explosive become very important. Plastic-bonded explosives, or PBXs, contain voids of air and binder between the particles of explosive material that aid in the ignition of the explosive. These voids collapse under high-pressure shock conditions, which leads to the formation of hot spots: localized high-temperature, high-pressure regions that cause significant changes in the way the explosive material detonates. Hot spots were previously overlooked in modeling, but scientists are now recognizing their importance, and new modeling systems that can accurately model hot spots are underway.

  8. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  9. High-level PC-based laser system modeling

    Science.gov (United States)

    Taylor, Michael S.

    1991-05-01

    Since the inception of the Strategic Defense Initiative (SDI) there have been a multitude of comparison studies done in an attempt to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became more and more apparent that what the systems analyst needed was not only a fast but also a cost-effective way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic systems models for laser systems. This procedure points out all of the major issues that should be addressed in the design and development of such a model. Issues addressed include defining the problem to be modeled, defining a strategy for development, and finally, effective use of the model once developed. Being a general procedure, it will allow a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to perhaps optimize a design.

  10. Comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones.

    Science.gov (United States)

    Carnes, Stephanie; O'Brien, Stacey; Szewczak, Angelica; Tremeau-Cayel, Lauriane; Rowe, Walter F; McCord, Bruce; Lurie, Ira S

    2017-09-01

    A comparison of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography for the separation of synthetic cathinones has been conducted. Nine different mixtures of bath salts were analyzed in this study. The three different chromatographic techniques were examined using a general set of controlled synthetic cathinones as well as a variety of other synthetic cathinones that exist as positional isomers. Overall 35 different synthetic cathinones were analyzed. A variety of column types and chromatographic modes were examined for developing each separation. For the ultra high performance supercritical fluid chromatography separations, analyses were performed using a series of Torus and Trefoil columns with either ammonium formate or ammonium hydroxide as additives, and methanol, ethanol or isopropanol organic solvents as modifiers. Ultra high performance liquid chromatographic separations were performed in both reversed phase and hydrophilic interaction chromatographic modes using SPP C18 and SPP HILIC columns. Gas chromatography separations were performed using an Elite-5MS capillary column. The orthogonality of ultra high performance supercritical fluid chromatography, ultra high performance liquid chromatography, and gas chromatography was examined using principal component analysis. For the best overall separation of synthetic cathinones, the use of ultra high performance supercritical fluid chromatography in combination with gas chromatography is recommended. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
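
    The orthogonality assessment via principal component analysis can be sketched as follows; the retention matrix here is a random placeholder for the measured, normalized retention data of the 35 cathinones under the three techniques.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        retention = rng.random((35, 3))  # rows: analytes; cols: techniques

        pca = PCA(n_components=2)
        scores = pca.fit_transform(retention)
        print(pca.explained_variance_ratio_)
        # Loadings pointing in clearly different directions indicate that
        # the techniques separate analytes on different grounds, i.e.
        # complementary (orthogonal) selectivity.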

  11. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  12. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop™ thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas™ TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 °C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  13. Acoustic Performance of Novel Fan Noise Reduction Technologies for a High Bypass Model Turbofan at Simulated Flights Conditions

    Science.gov (United States)

    Elliott, David M.; Woodward, Richard P.; Podboy, Gary G.

    2010-01-01

    Two novel fan noise reduction technologies, over the rotor acoustic treatment and soft stator vane technologies, were tested in an ultra-high bypass ratio turbofan model in the NASA Glenn Research Center's 9- by 15-Foot Low-Speed Wind Tunnel. The performance of these technologies was compared to that of the baseline fan configuration, which did not have these technologies. Sideline acoustic data and hot film flow data were acquired and are used to determine the effectiveness of the various treatments. The material used for the over the rotor treatment was foam metal and two different types were used. The soft stator vanes had several internal cavities tuned to target certain frequencies. In order to accommodate the cavities it was necessary to use a cut-on stator to demonstrate the soft vane concept.

  14. Wall modeling for the simulation of highly non-isothermal unsteady flows

    International Nuclear Information System (INIS)

    Devesa, A.

    2006-12-01

    Nuclear industry flows are most often characterized by high Reynolds numbers, density variations (at low Mach numbers) and highly unsteady behaviour (low to moderate frequencies). High Reynolds numbers make direct numerical simulation (DNS) unaffordable, so simulations must either be performed by solving averaged equations (RANS) or by resolving only the large eddies (LES), both using a wall model. A first investigation of this thesis dealt with the derivation and testing of two variable-density wall models: an algebraic law (CWM) and a zonal approach dedicated to LES (TBLE-ρ). These models were validated in quasi-isothermal cases before being used in academic and industrial non-isothermal flows, with satisfactory results. Then, a numerical experiment on pulsed passive scalars was performed by DNS, where two forcing conditions were considered: oscillations imposed in the outer flow, and oscillations coming from the wall. Several frequencies and amplitudes of oscillation were taken into account in order to gain insight into unsteady effects in the boundary layer and to create a database for validating wall models in such contexts. The temporal behaviour of two wall models (algebraic and zonal) was studied, showing that a zonal model produces better results when used in the simulation of unsteady flows. (author)

  15. Critical Factors Explaining the Leadership Performance of High-Performing Principals

    Science.gov (United States)

    Hutton, Disraeli M.

    2018-01-01

    The study explored critical factors that explain leadership performance of high-performing principals and examined the relationship between these factors based on the ratings of school constituents in the public school system. The principal component analysis with the use of Varimax Rotation revealed that four components explain 51.1% of the…

  16. High Performance Walls in Hot-Dry Climates

    Energy Technology Data Exchange (ETDEWEB)

    Hoeschele, Marc [National Renewable Energy Lab. (NREL), Golden, CO (United States); Springer, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dakin, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); German, Alea [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  17. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
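
    The Erlangian queuing analysis of a single service stage can be illustrated with the standard Erlang C result for an M/M/c queue; the arrival rate, service rate and number of service desks below are illustrative, not values from the paper.

        import math

        def erlang_c(c, lam, mu):
            # Probability an arriving request must wait, and mean waiting
            # time, for c service desks with Poisson arrivals (rate lam)
            # and exponential service (rate mu per desk).
            a = lam / mu                 # offered load in Erlangs
            rho = a / c
            if rho >= 1.0:
                raise ValueError("unstable: utilization must be < 1")
            s = sum(a**k / math.factorial(k) for k in range(c))
            tail = a**c / (math.factorial(c) * (1.0 - rho))
            p_wait = tail / (s + tail)
            w_q = p_wait / (c * mu - lam)
            return p_wait, w_q

        # e.g. 4 desks, 30 req/s arriving, 10 req/s served per desk:
        print(erlang_c(c=4, lam=30.0, mu=10.0))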

  19. Whole School Improvement and Restructuring as Prevention and Promotion: Lessons from STEP and the Project on High Performance Learning Communities.

    Science.gov (United States)

    Felner, Robert D.; Favazza, Antoinette; Shim, Minsuk; Brand, Stephen; Gu, Kenneth; Noonan, Nancy

    2001-01-01

    Describes the School Transitional Environment Project and its successor, the Project on High Performance Learning Communities, which have contributed to building a model for school improvement called High Performance Learning Communities. The model seeks to build the principles of prevention into whole-school change. Presents findings from…

  20. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.
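
    In that spirit, the simplest such equation is the M/M/1 mean response time R = 1/(mu - lambda); the toy numbers below show how response time grows nonlinearly with utilization.

        def mm1_response_time(lam, mu):
            # Mean response time of an M/M/1 queue: R = 1 / (mu - lam).
            assert lam < mu, "unstable when arrival rate >= service rate"
            return 1.0 / (mu - lam)

        print(mm1_response_time(45.0, 100.0))  # ~0.018 at 45% utilization
        print(mm1_response_time(90.0, 100.0))  # 0.100 at 90% utilization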

  1. High-performance liquid chromatography of oligoguanylates at high pH

    Science.gov (United States)

    Stribling, R.; Deamer, D. (Principal Investigator)

    1991-01-01

    Because of the stable self-structures formed by oligomers of guanosine, standard high-performance liquid chromatography techniques for oligonucleotide fractionation are not applicable. Previously, oligoguanylate separations have been carried out at pH 12 using RPC-5 as the packing material. While RPC-5 provides excellent separations, there are several limitations, including the lack of a commercially available source. This report describes a new anion-exchange high-performance liquid chromatography method using HEMA-IEC BIO Q, which successfully separates different forms of the guanosine monomer as well as longer oligoguanylates. The reproducibility and stability at high pH suggests a versatile role for this material.

  2. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy through evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirms the robustness of the proposed strategy.
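
    The conventional clipped-optimal baseline that the hybrid MPC strategy is compared against can be sketched as a secondary controller: the actively designed force is commanded only when the damper can actually dissipate energy, i.e. when the force opposes the relative velocity across the device. The sign convention and saturation limit here are assumptions.

        def clipped_optimal_force(f_desired, rel_velocity, f_max):
            # Pass the active design force through only in the
            # dissipative quadrants; otherwise command zero.
            if f_desired * rel_velocity < 0.0:
                return max(-f_max, min(f_max, f_desired))
            return 0.0

        # A desired -3 kN against a +0.1 m/s relative velocity is
        # dissipative, so it passes through (saturated at 2 kN):
        print(clipped_optimal_force(-3000.0, 0.1, 2000.0))  # -2000.0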

  3. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
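
    A minimal sketch of the expert-model idea in Python, using the hmmlearn library; the library choice, model sizes, and placeholder kinematic data are assumptions, not the authors' setup.

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)
        # placeholder 6-D instrument kinematics for ten expert trials
        expert_trials = [rng.standard_normal((200, 6)) for _ in range(10)]
        X = np.vstack(expert_trials)
        lengths = [len(t) for t in expert_trials]

        # train the "expert model" on concatenated expert sequences
        expert_model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        expert_model.fit(X, lengths)

        # score a trainee trajectory by normalized log-likelihood under the expert model
        trainee = rng.standard_normal((180, 6))
        score = expert_model.score(trainee) / len(trainee)
        print(f"skill score (higher = closer to expert): {score:.2f}")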

  4. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  5. Modeling High-Dimensional Multichannel Brain Signals

    KAUST Repository

    Hu, Lechuan

    2017-12-12

    Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties in analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method, which combines regularization (to control for sparsity) and least squares estimation (to reduce bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC), which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
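
    A rough sketch of the hybrid LASSO + LSE idea for a single VAR equation; the exact estimation procedure in the thesis may differ, and scikit-learn is an assumed stand-in.

        import numpy as np
        from sklearn.linear_model import Lasso, LinearRegression

        def lassle_fit(Y_lagged, y_target, alpha=0.1):
            """Y_lagged: (T, p*channels) lagged design matrix; y_target: (T,) one channel.
            LASSO selects a sparse support; an unpenalized refit reduces shrinkage bias."""
            sel = Lasso(alpha=alpha, fit_intercept=False).fit(Y_lagged, y_target)
            support = np.flatnonzero(sel.coef_)
            beta = np.zeros(Y_lagged.shape[1])
            if support.size:
                ols = LinearRegression(fit_intercept=False).fit(Y_lagged[:, support], y_target)
                beta[support] = ols.coef_
            return beta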

  6. Toward an ultra-high resolution community climate system model for the BlueGene platform

    International Nuclear Information System (INIS)

    Dennis, John M; Jacob, Robert; Vertenstein, Mariana; Craig, Tony; Loy, Raymond

    2007-01-01

    Global climate models need to simulate several small, regional-scale processes which affect the global circulation in order to accurately simulate the climate. This is particularly important in the ocean, where small-scale features such as oceanic eddies are currently represented with ad hoc parameterizations. There is also a need for higher resolution to provide climate predictions at small, regional scales. New high-performance computing platforms such as the IBM BlueGene can provide the necessary computational power to perform ultra-high resolution climate model integrations. We have begun to investigate the scaling of the individual components of the Community Climate System Model to prepare it for integrations on BlueGene and similar platforms. Our investigations show that it is possible to successfully utilize O(32K) processors. We describe the scalability of five models: the Parallel Ocean Program (POP), the Community Ice CodE (CICE), the Community Land Model (CLM), and the new CCSM sequential coupler (CPL7), which are components of the next generation Community Climate System Model (CCSM); as well as the High-Order Method Modeling Environment (HOMME), which is a dynamical core currently being evaluated within the Community Atmospheric Model. For our studies we concentrate on 1/10° resolution for the CICE, POP, and CLM models and 1/4° resolution for HOMME. The ability to simulate high resolutions on the massively parallel petascale systems that will dominate high-performance computing for the foreseeable future is essential to the advancement of climate science.

  7. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up about a 5 × 10⁻⁴ fraction of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  8. Thermodynamic analysis on optimum performance of scramjet engine at high Mach numbers

    International Nuclear Information System (INIS)

    Zhang, Duo; Yang, Shengbo; Zhang, Silong; Qin, Jiang; Bao, Wen

    2015-01-01

    In order to predict the maximum performance of a scramjet engine at flight conditions with high freestream Mach numbers, a thermodynamic model of the Brayton cycle was utilized to analyze the effects of inlet pressure ratio, fuel equivalence ratio, and the upper limit of gas temperature on the specific thrust and the fuel impulse of the scramjet, considering the characteristics of non-isentropic compression in the inlet. The results show that both the inlet efficiency and the temperature limit in the combustor have remarkable effects on the overall engine performance. Unlike ideal Brayton cycles, which assume isentropic compression without an upper limit of gas temperature, both the maximum specific thrust and the maximum fuel impulse of a scramjet present non-monotonic trends against the fuel equivalence ratio in this study. Considering the empirical design efficiencies of the inlet, there is a wide range of fuel equivalence ratios in which the fuel impulse remains at high values. Moreover, the maximum specific thrust can also be achieved with a fuel equivalence ratio near this range. Therefore, it is possible to achieve an overall high performance in a scramjet at high Mach numbers. - Highlights: • Thermodynamic analysis with the Brayton cycle of overall scramjet performance. • The compression loss in the inlet was considered in predicting scram-mode operation. • Non-monotonic trends of engine performance against fuel equivalence ratio.
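
    The paper models non-isentropic inlet compression; the ideal-cycle limit below is a simpler, standard calculation showing how the combustor temperature cap bounds specific thrust and fuel impulse. All numbers are illustrative, not taken from the paper.

        import math

        def ideal_cycle(M0, T0=220.0, Tt4=2400.0, gamma=1.4, cp=1004.5, hpr=43e6, g0=9.81):
            a0 = math.sqrt(gamma * 287.0 * T0)                # freestream speed of sound [m/s]
            u0 = M0 * a0
            Tt0 = T0 * (1.0 + 0.5 * (gamma - 1.0) * M0 ** 2)  # freestream total temperature
            assert Tt4 > Tt0, "temperature cap forbids heat addition at this Mach number"
            ue = u0 * math.sqrt(Tt4 / Tt0)                    # ideal expansion to ambient pressure
            f = cp * (Tt4 - Tt0) / hpr                        # fuel-air ratio from energy balance
            specific_thrust = (1.0 + f) * ue - u0             # N per (kg/s) of air
            fuel_impulse = specific_thrust / (f * g0)         # fuel-based specific impulse [s]
            return specific_thrust, fuel_impulse

        print(ideal_cycle(M0=6.0))   # roughly (300 N*s/kg, 2200 s) with these inputs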

  9. Determination of performance characteristics of robotic manipulator's permanent magnet synchronous motor by learning its FEM model

    International Nuclear Information System (INIS)

    Bharadvaj, Bimmi; Saini, Surendra Singh; Swaroop, Teja Tumapala; Sarkar, Ushnish; Ray, Debashish Datta

    2016-01-01

    Permanent Magnet Synchronous Motors (PMSM) are widely used as actuators because of their high torque density, high efficiency, and reliability. A robotic manipulator designed for a specific task generally requires actuators with very high intermittent torque and speed for operation in limited space. Hence, accurate performance characteristics of the PMSM must be known beforehand under these conditions, as exceeding them may damage the motor. Therefore an advanced mathematical model of the PMSM is required for its control synthesis and performance analysis over a wide operating range. The existing mathematical models are developed considering an ideal motor, without including the geometrical deviations that occur during the manufacturing process of the motor or its components. These manufacturing tolerances affect torque ripple, operating current range, etc., thereby affecting motor performance. In this work, the magnetically non-linear dynamic model is further exploited to refine the FE model using a proposed algorithm to iteratively compensate for the experimentally observed deviations due to manufacturing. (author)

  10. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    Science.gov (United States)

    2015-09-01

    individual fragments using the hash-based method. In general, fragments appear in order and relatively close to each other in the file. ... A data product derived from the data model is shown in Fig. 5, a Google Earth Keyhole Markup Language (KML) file. This product includes aggregate ...

  11. Multiscale modeling and characterization for performance and safety of lithium-ion batteries

    International Nuclear Information System (INIS)

    Pannala, S.; Turner, J. A.; Allu, S.; Elwasif, W. R.; Kalnaus, S.; Simunovic, S.; Kumar, A.; Billings, J. J.; Wang, H.; Nanda, J.

    2015-01-01

    Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. Gaining an understanding of the role of these processes, as well as development of predictive capabilities for design of better performing batteries, requires synergy between theory, modeling, and simulation, and fundamental experimental work to support the models. This paper presents an overview of the work performed by the authors across both experimental and computational efforts. In this paper, we describe a new, open source computational environment for battery simulations with an initial focus on lithium-ion systems but designed to support a variety of model types and formulations. This system has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. This paper also provides an overview of the experimental techniques to obtain crucial validation data to benchmark the simulations at various scales for performance as well as abuse. We detail some initial validation using characterization experiments such as infrared and neutron imaging and micro-Raman mapping. In addition, we identify opportunities for future integration of theory, modeling, and experiments.

  12. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling data movements on high-bandwidth networks to accommodate the ever increasing data volume for large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  13. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling data movements on high-bandwidth networks to accommodate the ever increasing data volume for large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with a traditional approach such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
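
    A sketch of the STL + ARIMA pipeline both records describe, using statsmodels; the seasonal period, ARIMA orders, and synthetic utilization series are assumptions, not values from the reports.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL
        from statsmodels.tsa.arima.model import ARIMA

        # synthetic hourly path-utilization series with a daily cycle
        idx = pd.date_range("2014-01-01", periods=24 * 21, freq="h")
        util = pd.Series(50 + 20 * np.sin(2 * np.pi * idx.hour / 24)
                         + np.random.default_rng(1).normal(0, 3, len(idx)), index=idx)

        stl = STL(util, period=24).fit()
        adjusted = util - stl.seasonal            # seasonally adjusted series
        fit = ARIMA(adjusted, order=(1, 1, 1)).fit()

        horizon = 24                              # forecast one day ahead, re-adding seasonality
        forecast = fit.forecast(horizon) + stl.seasonal[-24:].to_numpy()
        print(forecast.head())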

  14. Development of high-performance concrete having high resistance to chloride penetration

    International Nuclear Information System (INIS)

    Oh, Byung Hwan; Cha, Soo Won; Jang, Bong Seok; Jang, Seung Yup

    2002-01-01

    The resistance to chloride penetration is one of the simplest measures to determine the durability of concrete, e.g., resistance to freezing and thawing, corrosion of steel in concrete, and other chemical attacks. Thus, high-performance concrete may be defined as concrete having high resistance to chloride penetration as well as high strength. The purpose of this paper is to investigate the resistance to chloride penetration of different types of concrete and to develop high-performance concrete that has very high resistance to chloride penetration and, thus, can guarantee high durability. A large number of concrete specimens have been tested by the rapid chloride permeability test method as designated in AASHTO T 277 and ASTM C 1202. The major test variables include water-to-binder ratios, type of cement, type and amount of mineral admixtures (silica fume, fly ash and blast-furnace slag), maximum size of aggregates, and air-entrainment. Test results show that concrete containing an optimal amount of silica fume shows very high resistance to chloride penetration, and the high-performance concrete developed in this study can be efficiently employed to enhance the durability of concrete structures in severe environments such as nuclear power plants, water-retaining structures, and other offshore structures.

  15. Performance modeling of Beamlet

    International Nuclear Information System (INIS)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-01-01

    Detailed modeling of beam propagation in Beamlet has been performed to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions.

  16. Modeling the effect of spacers and biofouling on forward osmosis performance

    KAUST Repository

    Mosqueira Santillán, María José

    2014-11-01

    Currently, the most utilized desalination technology is reverse osmosis (RO), where a membrane is used as a physical barrier to separate the salts from the seawater, using high hydraulic pressure as the driving force. A major problem in RO systems is biofouling, caused by severe growth of bacterial biofilms. Both the need for an external energy input and biofouling impose a high cost on RO operation. Forward osmosis (FO) is an alternative membrane process that uses an osmotic pressure difference as the driving force. FO uses a concentrated draw solution to generate high osmotic pressure, which extracts water across a semipermeable membrane from a feed solution. One of the main advantages of FO is the limited amount of external energy required to extract water from the feed solution. The objective of this research is the assessment of the impact of spacers, separating the membrane sheets, and biofouling on FO system performance. Such studies allow the optimization of membrane devices and operational conditions. For this, a two-dimensional numerical model for FO systems was developed using computational fluid dynamics (CFD). This model allowed the evaluation of the impact of (i) spacers, (ii) biofilm, and (iii) the combined impact of spacers and biofilm on the performance of FO systems. The results obtained showed that the presence of spacers improved the performance of FO systems. The cavity spacer configuration gave the highest water flux across the membrane in clean systems, whereas for biofouled systems, the submerged configuration showed better performance. In the absence of spacers, the thickness or amount of biofilm is inversely proportional to the water flux. Furthermore, membrane surface coverage by the biofilm is more important than the amount of biofilm in terms of the impact on performance. The numerical model can be adapted with other parameters (e.g. membrane and spacer thickness, feed and draw solution, solution concentration, etc.) to

  17. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR, and free standing clad, as in LWR. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project ``Light water reactor fuel rod modelling code evaluation`` and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs.

  18. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR, and free standing clad, as in LWR. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project ''Light water reactor fuel rod modelling code evaluation'' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs

  19. Performance of the first short model 150 mm aperture Nb$_3$Sn Quadrupole MQXFS for the High-Luminosity LHC upgrade

    CERN Document Server

    Chlachidze, G; Anerella, M; Bossert, R; Cavanna, E; Cheng, D; Dietderich, D; DiMarco, J; Felice, H; Ferracin, P; Ghosh, A; Grosclaude, P; Guinchard, M; Hafalia, A R; Holik, E; Izquierdo Bermudez, S; Krave, S; Marchevsky, M; Nobrega, F; Orris, D; Pan, H; Perez, J C; Prestemon, S; Ravaioli, E; Sabbi, G L; Salmi, T; Schmalzle, J; Stoynev, S; Strauss, T; Sylvester, C; Tartaglia, M; Todesco, E; Vallone, G; Velev, G; Wanderer, P; Wang, X; Yu, M

    2017-01-01

    The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb$_{3}$Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large aperture Nb$_{3}$Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets, for an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb$_{3}$Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built, with coils fabricated by both LARP and CERN. The magnet performance was tested at Fermilab’s vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, ramp rate and temperature dependence studies.

  20. Performance of the first short model 150 mm aperture Nb$_3$Sn Quadrupole MQXFS for the High- Luminosity LHC upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Chlachidze, G.; et al.

    2016-08-30

    The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb3Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large aperture Nb3Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets, for an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb3Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built, with coils fabricated by both LARP and CERN. The magnet performance was tested at Fermilab’s vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, ramp rate and temperature dependence studies.

  1. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns

    International Nuclear Information System (INIS)

    Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo

    2013-01-01

    Highlights: ► A novel model was developed to estimate volumetric nitrogen conversion rates. ► The packing patterns of the granules in the Anammox reactor are investigated. ► The simple cubic packing pattern was simulated in the high-rate Anammox UASB reactor. ► Operational strategies concerning sludge concentration were proposed based on the modeling. -- Abstract: A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed as the equivalent simple cubic pattern in the high-rate UASB reactor with a packing density of 50–55%, which not only accommodated a high concentration of sludge inside the reactor, but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well.

  2. Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Chong-Jian, E-mail: chjtangzju@yahoo.com.cn [Department of Environmental Engineering, School of Metallurgical Science and Engineering, Central South University, Changsha 410083 (China); National Engineering Research Center for Control and Treatment of Heavy Metal Pollution, Changsha 410083 (China); He, Rui; Zheng, Ping [Department of Environmental Engineering, Zhejiang University, Zijingang Campus, Hangzhou 310058 (China); Chai, Li-Yuan; Min, Xiao-Bo [Department of Environmental Engineering, School of Metallurgical Science and Engineering, Central South University, Changsha 410083 (China); National Engineering Research Center for Control and Treatment of Heavy Metal Pollution, Changsha 410083 (China)

    2013-04-15

    Highlights: ► A novel model was developed to estimate volumetric nitrogen conversion rates. ► The packing patterns of the granules in the Anammox reactor are investigated. ► The simple cubic packing pattern was simulated in the high-rate Anammox UASB reactor. ► Operational strategies concerning sludge concentration were proposed based on the modeling. -- Abstract: A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed as the equivalent simple cubic pattern in the high-rate UASB reactor with a packing density of 50–55%, which not only accommodated a high concentration of sludge inside the reactor, but also provided large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well.
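
    A toy calculation of the core relationship described in both records, with invented parameter values (the paper's calibrated model is not reproduced): the granule packing fraction caps the attainable sludge concentration, which in turn caps the volumetric conversion rate.

        import math

        def volumetric_rate(packing_density, granule_vss=80.0, specific_activity=0.4):
            """packing_density: granule volume fraction in the reactor;
            granule_vss: gVSS per litre of granule volume (assumed);
            specific_activity: g N per g VSS per day (assumed)."""
            sludge_conc = packing_density * granule_vss   # reactor-average gVSS/L
            return sludge_conc * specific_activity        # g N/(L*d) == kg N/(m3*d)

        simple_cubic = math.pi / 6   # ~0.524, consistent with the reported 50-55% packing
        print(f"packing {simple_cubic:.2f} -> {volumetric_rate(simple_cubic):.1f} kg N/(m3*d)")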

  3. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States); Conway, Steve [IDC Research Inc., Framingham, MA (United States); Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  4. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States); Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States); Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  5. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and the design of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to sensitivity studies evaluating previous hydrological performance assessment analyses to variation in the material properties, conceptual models, and ventilation models, and the implications of this sensitivity on previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  6. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
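
    A minimal grid-based sketch of the Bayesian-calibration idea, not the Kennedy-O'Hagan implementation; the efficiency curve, measurements, and noise level below are invented for illustration.

        import numpy as np

        def plant_model(theta, load):
            """Hypothetical efficiency curve: base efficiency theta, degraded at part load."""
            return theta * (1.0 - 0.15 * (1.0 - load) ** 2)

        loads = np.array([0.5, 0.7, 0.9, 1.0])
        measured = np.array([0.545, 0.570, 0.585, 0.590])   # invented example measurements
        sigma = 0.005                                       # assumed measurement noise (std)

        thetas = np.linspace(0.55, 0.65, 501)               # uniform prior over a plausible range
        log_post = np.array([-0.5 * np.sum((measured - plant_model(t, loads)) ** 2) / sigma ** 2
                             for t in thetas])
        post = np.exp(log_post - log_post.max())
        dtheta = thetas[1] - thetas[0]
        post /= post.sum() * dtheta                         # normalize to a density
        print("posterior mean of theta:", (thetas * post).sum() * dtheta)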

  7. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad

    2015-07-25

    Leveraging optimization techniques (e.g., register blocking and double buffering) introduced in the context of KBLAS, a Level 2 BLAS high performance library on GPUs, the authors implement dense matrix-vector multiplications within a sparse-block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models of reacting flows or petroleum reservoirs, is the solution at each implicit time step of large, sparse spatially structured or unstructured linear systems. The standard method is a preconditioned Krylov solver. The Sparse Matrix-Vector multiplication (SpMV) is, in turn, one of the most time-consuming operations in such solvers. Because there is no data reuse of the elements of the matrix within a single SpMV, kernel performance is limited by the speed at which data can be transferred from memory to registers, making the bus bandwidth the major bottleneck. On the other hand, in case of a multi-species model, the resulting Jacobian has a dense block structure. For contemporary petroleum reservoir simulations, the block size typically ranges from three to a few dozen among different models, and still larger blocks are relevant within adaptively model-refined regions of the domain, though generally the size of the blocks, related to the number of conserved species, is constant over large regions within a given model. This structure can be exploited beyond the convenience of a block compressed row data format, because it offers opportunities to hide the data motion with useful computations. The new SpMV kernel outperforms existing state-of-the-art implementations on single and multi-GPUs using matrices with dense block structure representative of porous media applications with both structured and unstructured multi-component grids.
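
    The GPU kernel itself is not reproduced here, but the block compressed sparse row (BSR) layout it exploits can be illustrated on the CPU with SciPy; the block size and sparsity pattern are placeholders.

        import numpy as np
        from scipy.sparse import random as sparse_random, bsr_matrix

        b = 4     # block size ~ number of conserved species per grid cell
        nb = 64   # number of block rows/columns
        pattern = sparse_random(nb, nb, density=0.05, format="csr", random_state=0)

        # expand each nonzero of the pattern into a dense b x b block
        blocks = np.random.default_rng(0).standard_normal((pattern.nnz, b, b))
        A = bsr_matrix((blocks, pattern.indices, pattern.indptr), shape=(nb * b, nb * b))

        x = np.ones(nb * b)
        y = A @ x   # SpMV; the BSR layout lets implementations reuse x within each block
        print(y.shape, A.blocksize)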

  8. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    Science.gov (United States)

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  9. Highlighting High Performance: Blackstone Valley Regional Vocational Technical High School; Upton, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-10-01

    This brochure describes the key high-performance building features of the Blackstone Valley High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar energy, building envelope, heating and cooling systems, and water conservation. Energy cost savings are also discussed.

  10. Ground Glass Pozzolan in Conventional, High, and Ultra-High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Tagnit-Hamou Arezki

    2018-01-01

    Full Text Available Ground-glass pozzolan (G), obtained by grinding mixed-waste glass to the same fineness as cement, can act as a supplementary cementitious material (SCM), given that it is an amorphous, pozzolanic material. G has shown promising performance in different concrete types, such as conventional concrete (CC), high-performance concrete (HPC), and ultra-high performance concrete (UHPC). The current paper reports on the characteristics and performance of G in these concrete types. The use of G provides several advantages (technological, economical, and environmental): it reduces the production cost of concrete and decreases the carbon footprint of traditional concrete structures. The rheology of fresh concrete can be improved by the replacement of cement with non-absorptive glass particles. Strength and rigidity improvements in concrete containing G arise because the glass particles act as inclusions of very high strength and elastic modulus, which strengthen the overall hardened matrix.

  11. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill this gap, the COD agent-based simulation model is here presented and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of resting on the authoritativeness of some scholar or on episodic empirical findings. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and points to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.
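
    A toy Monte Carlo, not the COD model itself, that reproduces the parallel-sequential-reciprocal ordering of difficulty under a simple assumption about how individual successes aggregate in each mode.

        import numpy as np

        rng = np.random.default_rng(42)

        def workgroup_success(mode, n_agents=5, p=0.9, trials=100_000):
            ok = rng.random((trials, n_agents)) < p
            if mode == "parallel":       # independent tasks: group output is the average
                return ok.mean()
            if mode == "sequential":     # each step feeds the next: all must succeed
                return ok.all(axis=1).mean()
            if mode == "reciprocal":     # mutual adjustment: two full passes must succeed
                ok2 = rng.random((trials, n_agents)) < p
                return (ok & ok2).all(axis=1).mean()

        for mode in ("parallel", "sequential", "reciprocal"):
            print(mode, round(workgroup_success(mode), 3))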

  12. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  13. Mood states and motor performance: a study with high performance volleyball athletes

    Directory of Open Access Journals (Sweden)

    Lenamar Fiorese Vieira

    2008-07-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2008v10n1p62 The objective of this research was to investigate the relationship between the sporting performance and mood states of high performance volleyball athletes. Twenty-three adult athletes of both sexes were assessed. The measurement instrument adopted was the POMS questionnaire. Data collection was carried out individually during the state championships. Dada were analyzed using descriptive statistics; the Friedman test for analysis of variance and the Mann-Whitney test for differences between means. The results demonstrated that both teams exhibited the mood state profi le corresponding to the “iceberg” profile. In the male team, vigor remained constant throughout all phases of the competition, while in the female team this element was unstable. The male team’s fatigue began low, during the training phase, with rates that rose as the competition progressed, with statistically significant differences between the fi rst and last matches the team played. In the female team, the confusion factor, which was at a high level during training, reduced progressively throughout the competition, with a difference that was signifi cant to p ≤ 0.05. With relation to performance and mood profi le, the female team exhibited statistically significant differences between the mean vigor and fatigue factors of high and low performance athletes. It is therefore concluded that the mood state profi le is a factor that impacts on the motor performance of these high performance teams.

  14. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  15. High performance leadership in unusually challenging educational circumstances

    Directory of Open Access Journals (Sweden)

    Andy Hargreaves

    2015-04-01

    Full Text Available This paper draws on findings from the results of a study of leadership in high performing organizations in three sectors. Organizations were sampled and included on the basis of high performance in relation to no performance, past performance, performance among similar peers and performance in the face of limited resources or challenging circumstances. The paper concentrates on leadership in four schools that met the sample criteria. It draws connections to explanations of the high performance of Estonia on the OECD PISA tests of educational achievement. The article argues that leadership in these four schools that performed above expectations comprised more than a set of competencies. Instead, leadership took the form of a narrative or quest that pursued an inspiring dream with relentless determination; took improvement pathways that were more innovative than comparable peers; built collaboration and community, including with competing schools; and connected short-term success to long-term sustainability.

  16. Identify High-Quality Protein Structural Models by Enhanced K-Means.

    Science.gov (United States)

    Wu, Hongjie; Li, Haiou; Jiang, Min; Chen, Cheng; Lv, Qiang; Wu, Chuang

    2017-01-01

    Background. One critical issue in protein three-dimensional structure prediction using either ab initio or comparative modeling involves identification of high-quality protein structural models from generated decoys. Currently, clustering algorithms are widely used to identify near-native models; however, their performance is dependent upon different conformational decoys, and, for some algorithms, the accuracy declines when the decoy population increases. Results. Here, we proposed two enhanced K-means clustering algorithms capable of robustly identifying high-quality protein structural models. The first one employs the clustering algorithm SPICKER to determine the initial centroids for basic K-means clustering (SK-means), whereas the other employs squared distance to optimize the initial centroids (K-means++). Our results showed that SK-means and K-means++ were more robust as compared with SPICKER alone, detecting 33 (59%) and 42 (75%) of 56 targets, respectively, with template modeling scores better than or equal to those of SPICKER. Conclusions. We observed that the classic K-means algorithm showed a similar performance to that of SPICKER, which is a widely used algorithm for protein-structure identification. Both SK-means and K-means++ demonstrated substantial improvements relative to results from SPICKER and classical K-means.
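
    A sketch of the K-means++ step using scikit-learn; SPICKER-based seeding is not shown, and the decoy feature vectors are random placeholders rather than real structural features.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        decoy_features = rng.standard_normal((500, 32))   # 500 decoys, 32-D features

        km = KMeans(n_clusters=5, init="k-means++", n_init=10, random_state=0)
        labels = km.fit_predict(decoy_features)

        # take the densest cluster as the near-native ensemble and pick its medoid-like member
        largest = np.bincount(labels).argmax()
        members = decoy_features[labels == largest]
        best = np.argmin(np.linalg.norm(members - km.cluster_centers_[largest], axis=1))
        print("representative decoy (index within largest cluster):", best)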

  17. Development and Performance of the Modularized, High-performance Computing and Hybrid-architecture Capable GEOS-Chem Chemical Transport Model

    Science.gov (United States)

    Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.

    2014-12-01

    The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.

  18. High Performance Human Resource Practices, Identification with Organizational Values and Goals, and Service-Oriented Organizational Citizenship Behavior: A Review of Literature and Proposed Model

    Directory of Open Access Journals (Sweden)

    Nasurdin Aizzat Mohd.

    2015-01-01

    Full Text Available Increasing competition within the hospitality industry has heightened the importance of service quality as a key business differentiation strategy. Proactive involvement of employees is a vital component of the service delivery, which, in turn, enhances customer satisfaction and loyalty. Hence, hospitality organizations, particularly hotels, need to encourage their employees to perform voluntary behaviors that go “beyond their call of duty”. These behaviors are referred to as service-oriented organizational citizenship behaviors (hereafter labeled as SO-OCBs). A review of the literature indicates that an organization’s human resource management (henceforth labeled as HRM) practices are instrumental in establishing the tone of the employee-employer relationship, which subsequently affects employees’ display of discretionary functional service-related behaviors. Specifically, high-performance HRM practices can nurture a relational employment relationship, leading to internalization of organizational values and goals. This, in turn, would induce employees to engage in greater SO-OCBs. However, conceptual and empirical work explaining the mechanism by which high-performance HRM practices relate to SO-OCBs remains scarce. Therefore, this paper aims to construct a model linking a set of high-performance HRM practices (selective hiring, communication, appraisal, and reward) and SO-OCBs. Identification with organizational values and goals is posited as a mediator in the proposed relationship. A discussion of the literature to support the proposed framework is furnished.

  19. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  20. High performance flexible CMOS SOI FinFETs

    KAUST Repository

    Fahad, Hossain M.

    2014-06-01

    We demonstrate the first ever CMOS compatible soft etch back based high performance flexible CMOS SOI FinFETs. The move from planar to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node. This has been possible due to the reduction in off-state leakage and reduced short channel effects on account of the superior electrostatic charge control of multiple gates. At the same time, flexible electronics is an exciting expansion opportunity for next generation electronics. However, a fully integrated low-cost system will need to maintain ultra-large-scale-integration density, high performance and reliability - same as today's traditional electronics. Up until recently, this field has been mainly dominated by very weak performance organic electronics enabled by low temperature processes, conducive to low melting point plastics. Now, however, we show the world's highest performing flexible version of 3D FinFET CMOS using a state-of-the-art CMOS compatible fabrication technique for high performance ultra-mobile consumer applications with stylish design. © 2014 IEEE.

  1. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that a promising direction for developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.
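
    As a hedged illustration of the Markov chain approach tested in the paper, the sketch below propagates a pavement condition distribution through an assumed one-year transition matrix. The five condition states and every transition probability are invented for illustration; a real application would calibrate them from survey data such as the interstate data set used here.

```python
import numpy as np

# Hypothetical 5-state pavement condition model (state 1 = excellent ... 5 = failed).
# The transition probabilities are illustrative only, not calibrated values from
# the paper; each row gives one-year transition probabilities, and deterioration
# is one-directional (a section never improves without repair).
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # all sections start as "excellent"

for year in range(1, 11):
    state = state @ P  # propagate the condition distribution one year forward
    print(f"year {year:2d}: " + " ".join(f"{p:.3f}" for p in state))
```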

  2. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
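
    The core of a wind power performance model of this kind is mapping an hourly wind-speed series through a turbine power curve. The Python sketch below shows that idea only; the power-curve points and wind speeds are fabricated, and SAM's actual algorithms additionally handle hub-height scaling, air-density corrections, and wake losses, which this sketch omits.

```python
import numpy as np

# Hypothetical power curve for a single turbine: wind speed (m/s) -> output (kW).
# Values are illustrative, not a real turbine specification.
curve_speeds = np.array([3.0, 5.0, 8.0, 11.0, 13.0, 25.0])   # cut-in ... cut-out
curve_power  = np.array([0.0, 150.0, 900.0, 1900.0, 2000.0, 2000.0])

def turbine_output(wind_speed_m_s):
    """Interpolate the power curve; zero output below cut-in or above cut-out."""
    power = np.interp(wind_speed_m_s, curve_speeds, curve_power)
    out_of_range = (wind_speed_m_s < curve_speeds[0]) | (wind_speed_m_s > curve_speeds[-1])
    return np.where(out_of_range, 0.0, power)

# Fabricated hourly wind resource for one day (m/s).
hourly_wind = np.array([2.5, 4.0, 6.0, 7.5, 9.0, 12.0, 14.0, 10.0,
                        8.0, 6.5, 5.0, 3.5, 3.0, 4.5, 7.0, 11.0,
                        13.5, 12.5, 9.5, 7.0, 5.5, 4.0, 3.2, 2.8])

hourly_kw = turbine_output(hourly_wind)
print(f"daily energy: {hourly_kw.sum():.0f} kWh")
```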

  3. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  4. Toward an ultra-high resolution community climate system model for the BlueGene platform

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, John M [Computer Science Section, National Center for Atmospheric Research, Boulder, CO (United States); Jacob, Robert [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (United States); Vertenstein, Mariana [Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO (United States); Craig, Tony [Climate and Global Dynamics Division, National Center for Atmospheric Research, Boulder, CO (United States); Loy, Raymond [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL (United States)

    2007-07-15

    Global climate models need to simulate several small, regional-scale processes which affect the global circulation in order to accurately simulate the climate. This is particularly important in the ocean, where small scale features such as oceanic eddies are currently represented with ad hoc parameterizations. There is also a need for higher resolution to provide climate predictions at small, regional scales. New high-performance computing platforms such as the IBM BlueGene can provide the necessary computational power to perform ultra-high resolution climate model integrations. We have begun to investigate the scaling of the individual components of the Community Climate System Model to prepare it for integrations on BlueGene and similar platforms. Our investigations show that it is possible to successfully utilize O(32K) processors. We describe the scalability of five models: the Parallel Ocean Program (POP), the Community Ice CodE (CICE), the Community Land Model (CLM), and the new CCSM sequential coupler (CPL7), which are components of the next generation Community Climate System Model (CCSM); as well as the High-Order Method Modeling Environment (HOMME), which is a dynamical core currently being evaluated within the Community Atmospheric Model. For our studies we concentrate on 1/10° resolution for the CICE, POP, and CLM models and 1/4° resolution for HOMME. The ability to simulate high resolutions on the massively parallel petascale systems that will dominate high-performance computing for the foreseeable future is essential to the advancement of climate science.

  5. Multiphysics modelling and experimental validation of high concentration photovoltaic modules

    International Nuclear Information System (INIS)

    Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.

    2017-01-01

    Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.
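
    As a rough illustration of the single diode electrical model mentioned above, the sketch below solves the implicit single-diode equation for the current at each voltage and locates the maximum power point. All parameter values are invented placeholders, not the monomodule's characterized values, and the spectral and thermal couplings of the full multiphysics procedure are omitted.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative single-diode parameters (placeholders, not the paper's values):
I_ph = 5.0     # photogenerated current (A)
I_0  = 1e-9    # diode saturation current (A)
n_Vt = 0.05    # ideality factor times thermal voltage (V)
R_s  = 0.01    # series resistance (ohm)
R_sh = 100.0   # shunt resistance (ohm)

def current_at(v):
    """Solve the implicit single-diode equation for the current at voltage v."""
    f = lambda i: (I_ph - I_0 * (np.exp((v + i * R_s) / n_Vt) - 1.0)
                   - (v + i * R_s) / R_sh - i)
    return brentq(f, -20.0, I_ph + 1.0)   # f is monotonically decreasing in i

# Sweep the I-V curve and report the maximum power point.
voltages = np.linspace(0.0, 1.10, 220)
powers = np.array([v * current_at(v) for v in voltages])
v_mpp = voltages[powers.argmax()]
print(f"maximum power point: {powers.max():.2f} W at {v_mpp:.3f} V")
```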

  6. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
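
    A minimal flavour of such a fluid model is sketched below: N synchronized TCP-like flows sharing one bottleneck, integrated with explicit Euler steps, where a full buffer triggers a synchronized multiplicative decrease. The constants are illustrative, and the model is far simpler than the paper's configurable loss synchronization module.

```python
import numpy as np

# Minimal fluid model of N synchronized TCP flows sharing one bottleneck link.
# All constants are invented for illustration.
N, C, RTT = 10, 1250.0, 0.1        # flows, link capacity (pkt/s), base RTT (s)
B = 100.0                          # bottleneck buffer size (packets)
dt, T = 0.001, 30.0                # Euler time step and horizon (s)

w = np.full(N, 1.0)                # per-flow congestion windows (packets)
q = 0.0                            # queue length (packets)
util = []

for step in range(int(T / dt)):
    rtt_eff = RTT + q / C          # queueing delay inflates the round-trip time
    rate = w.sum() / rtt_eff       # aggregate sending rate (pkt/s)
    q = min(max(q + (rate - C) * dt, 0.0), B)
    if q >= B:                     # buffer overflow: a synchronized loss event
        w *= 0.5                   # multiplicative decrease for every flow
    else:
        w += dt / rtt_eff          # additive increase: ~1 packet per RTT
    util.append(min(rate, C) / C)

print(f"mean link utilization: {np.mean(util):.3f}")
```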

  7. Gender consequences of a national performance-based funding model

    DEFF Research Database (Denmark)

    Nielsen, Mathias Wullum

    2017-01-01

    This article investigates the extent to which the Danish Bibliometric Research Indicator (BRI) reflects the performance of men and women differently. The model is based on a differentiated counting of peer-reviewed publications, awarding three and eight points for contributions to ‘well-regarded’ and highly selective journals and book publishers, and 1 and 5 points for equivalent scientific contributions via ‘normal level’ channels. On the basis of bibliometric data, the study shows that the BRI considerably widens the existing gender gap in researcher performance, since men on average receive more ... The model also privileges collaborative research, which disadvantages women due to gender differences in collaborative network relations.

  8. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  9. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    particle size. In conclusion, it was shown that all three software packages are useful to guide formulation development. However, as a consequence of the high fraction of inaccurate predictions (prediction error >2-fold) and the clear trend toward decreased accuracy with decreased predicted fabs observed with Simcyp, the results indicate that GI-Sim and GastroPlus perform better than Simcyp in predicting the intestinal absorption of the incompletely absorbed drugs when a higher degree of accuracy is needed. In addition, this study suggests that modeling and simulation research groups should perform systematic model evaluations using their own input data to maximize confidence in model performance and output.

  10. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    Science.gov (United States)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown, using a suite of complementary software tools that alert operators to anomalies and failures in real time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  11. Research on mechanical and sensoric set-up for high strain rate testing of high performance fibers

    Science.gov (United States)

    Unger, R.; Schegner, P.; Nocke, A.; Cherif, C.

    2017-10-01

    Within this research project, the tensile behavior of high performance fibers, such as carbon fibers, is investigated under high velocity loads. This contribution focuses on the clamp set-up of two testing machines. Based on a kinematic model, weight-optimized clamps are designed and evaluated. Analysis of the complex dynamic behavior of conventional high velocity testing machines shows that the impact typically exhibits an elastic characteristic. This leads to barely predictable breaking speeds and fails at higher speeds, when the acceleration force exceeds the material specifications. Therefore, a plastic impact behavior has to be achieved, even at lower testing speeds. This type of impact behavior at lower speeds can be realized by means of some minor test set-up adaptations.

  12. HiGIS: An Open Framework for High Performance Geographic Information System

    Directory of Open Access Journals (Sweden)

    XIONG, W.

    2015-08-01

    Full Text Available The big data era exposes many challenges to geospatial data management, geocomputation and cartography, and the geographic information systems (GIS) community is no exception. Technologies and facilities of high performance computing (HPC) are becoming more and more accessible to researchers, while mobile computing, ubiquitous computing, and cloud computing are emerging; however, traditional GIS needs to be improved to take advantage of all these evolutions. We proposed and implemented a GIS married with high performance computing, which is called HiGIS. The goal of HiGIS is to promote the performance of geocomputation by leveraging the power of HPC, and to build an open framework for geospatial data storing, processing, displaying and sharing. In this paper the architecture, data model and modules of the HiGIS system are introduced. A geocomputation scheduling engine based on communicating sequential processes was designed for spatial analysis and processing. A parallel I/O strategy using file views was proposed to improve the performance of geospatial raster data access. In order to support web-based online mapping, an interactive cartographic script was provided to represent a map. A house-locating demonstration was used to show the characteristics of HiGIS. Parallel and concurrency performance experiments show the feasibility of this system.

  13. Repository Integration Program: RIP performance assessment and strategy evaluation model theory manual and user's guide

    International Nuclear Information System (INIS)

    1995-11-01

    This report describes the theory and capabilities of RIP (Repository Integration Program). RIP is a powerful and flexible computational tool for carrying out probabilistic integrated total system performance assessments for geologic repositories. The primary purpose of RIP is to provide a management tool for guiding system design and site characterization. In addition, the performance assessment model (and the process of eliciting model input) can act as a mechanism for integrating the large amount of available information into a meaningful whole (in a sense, allowing one to keep the "big picture" and the ultimate aims of the project clearly in focus). Such an integration is useful both for project managers and project scientists. RIP is based on a "top down" approach to performance assessment that concentrates on the integration of the entire system, and utilizes relatively high-level descriptive models and parameters. The key point in the application of such a "top down" approach is that the simplified models and associated high-level parameters must incorporate an accurate representation of their uncertainty. RIP is designed in a very flexible manner such that details can be readily added to various components of the model without modifying the computer code. Uncertainty is also handled in a very flexible manner, and both parameter and model (process) uncertainty can be explicitly considered. Uncertainty is propagated through the integrated PA model using an enhanced Monte Carlo method. RIP must rely heavily on subjective assessment (expert opinion) for much of its input. The process of eliciting the high-level input parameters required for RIP is critical to its successful application. As a result, in order for any project to successfully apply a tool such as RIP, an enormous amount of communication and cooperation must exist between the data collectors, the process modelers, and the performance assessment modelers.
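
    The Monte Carlo propagation idea can be illustrated in miniature: sample the high-level parameters from their elicited distributions and push each realization through the integrated model. The sketch below does this for a deliberately toy release model; the distributions, parameters, and equation are invented and are not RIP's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo realizations

# High-level, uncertain input parameters expressed as probability distributions.
# Both the distributions and the release model below are fabricated for
# illustration; they are not RIP's actual parameters or equations.
leach_rate = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n)   # 1/yr
travel_time = rng.triangular(1e3, 5e3, 2e4, size=n)                # yr
inventory = 1.0e6                                                  # arbitrary units
half_life = 2.1e5                                                  # yr

# Toy "top down" performance measure: inventory released per year at the
# boundary, after leaching and decay during groundwater travel.
decay = np.exp(-np.log(2) * travel_time / half_life)
release = inventory * leach_rate * decay

print(f"median release : {np.median(release):.3g}")
print(f"95th percentile: {np.percentile(release, 95):.3g}")
```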

  14. Assessing the detectability of antioxidants in two-dimensional high-performance liquid chromatography.

    Science.gov (United States)

    Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G

    2015-05-01

    This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods--the areas reported by the control software and by fitting the data with a Gaussian model; these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatography control software displayed superior limits of detection, on the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
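
    A hedged sketch of the Gaussian peak-area method is shown below: fit a Gaussian to a noisy synthetic peak with scipy's curve_fit and compare the fitted area against direct numerical integration. The peak parameters and noise level are fabricated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian peak parameterized directly by its area, so the fitted "area"
# coefficient can be read off without further integration.
def gaussian(t, area, t_r, sigma):
    return area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(t - t_r) ** 2 / (2 * sigma ** 2))

t = np.linspace(0, 10, 500)                       # retention time axis (s)
true_area, true_tr, true_sigma = 12.0, 4.2, 0.35  # invented peak parameters
signal = gaussian(t, true_area, true_tr, true_sigma)
signal += np.random.default_rng(1).normal(0.0, 0.05, t.size)  # detector noise

popt, _ = curve_fit(gaussian, t, signal, p0=[10.0, 4.0, 0.5])
trap_area = np.sum((signal[1:] + signal[:-1]) / 2 * np.diff(t))  # trapezoid rule
print(f"fitted Gaussian area: {popt[0]:.2f} (true {true_area})")
print(f"direct integration  : {trap_area:.2f}")
```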

  15. Action Regulation Introducing Stress Management Techniques and High Performance in Soccer

    Directory of Open Access Journals (Sweden)

    Saha Soumendra

    2015-01-01

    Full Text Available Fifty-two high-performing soccer players from a South-East Asian contingent were selected by three expert soccer instructors on the basis of their consistently high performance and their scores on psychomotor and psychobiological parameters. All of these players underwent pre-intervention analyses of skin conductance (SC) orienting reflex indices (phasic components of electrodermal activity) as well as sympathovagal activity based on HRV indices, assessed simultaneously while the players were engaged in psychomotor reaction ability tasks. Structural equation modeling was used to identify the path regressions related to performance excellence, which suggested incoherence between the predictors. Short-term intensive self-regulation and action-regulation training modules were developed to foster ideomotor orientation in the players, and these proved effective in modifying the intrinsic psychobiological mechanisms leading towards excellence in performance in the high-performing soccer players. The players were randomly categorised into four groups: a no-intervention control group (N = 13); experimental group I (N = 13), who received action-regulation training; experimental group II (N = 13), who received electromyography (EMG) biofeedback training; and experimental group III (N = 13), who received combined action-regulation and EMG biofeedback training (15 min/day, 3 days per week, for 12 weeks). Repeated-measures ANOVA and multiple linear and polynomial regression analyses, along with predictive structural analyses, were performed to identify relationships between the psychobiological processes and the cognitive-affective and affective-motivational aspects of sports behaviour revealed by projective analyses of emotionality. These models were aptly able to explain the efficacy of the action-regulation intervention techniques in inducing the cognitive

  16. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    International Nuclear Information System (INIS)

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρ_θ,s, or 5-10 times the ion gyroradius ρ_s, over the radial region 0.2 < r/a < 0.8. To determine whether Δr scales as ρ_θ,s or as 5-10 times ρ_s, an experiment was performed which modified ρ_θ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρ_θ,s, which was similar to low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length in high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas.

  17. 10th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2017-01-01

    This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany – a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role in numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.

  18. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM), from the buried nuclear device to the seismic sensor, and thereby to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM, including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of a UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNEs in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  19. Development of a high performance liquid chromatography method ...

    African Journals Online (AJOL)

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  20. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    Science.gov (United States)

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  1. Many-junction photovoltaic device performance under non-uniform high-concentration illumination

    Science.gov (United States)

    Valdivia, Christopher E.; Wilkins, Matthew M.; Chahal, Sanmeet S.; Proulx, Francine; Provost, Philippe-Olivier; Masson, Denis P.; Fafard, Simon; Hinzer, Karin

    2017-09-01

    A parameterized 3D distributed circuit model was developed to calculate the performance of III-V solar cells and photonic power converters (PPC) with a variable number of epitaxial vertically-stacked pn junctions. PPC devices are designed with many pn junctions to realize higher voltages and to operate under non-uniform illumination profiles from a laser or LED. Performance impacts of non-uniform illumination were greatly reduced with increasing number of junctions, with simulations comparing PPC devices with 3 to 20 junctions. Experimental results using Azastra Opto's 12- and 20-junction PPC illuminated by an 845 nm diode laser show high performance even with a small gap between the PPC and optical fiber output, until the local tunnel junction limit is reached.

  2. Integrated cost estimation methodology to support high-performance building design

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Prasad; Greden, Lara; Eijadi, David; McDougall, Tom [The Weidt Group, Minnetonka (United States); Cole, Ray [Axiom Engineers, Monterey (United States)

    2007-07-01

    Design teams evaluating the performance of energy conservation measures (ECMs) calculate energy savings rigorously with established modelling protocols, accounting for the interaction between various measures. However, incremental cost calculations do not have a similar rigor. Often there is no recognition of cost reductions with integrated design, nor is there assessment of cost interactions amongst measures. This lack of rigor feeds the notion that high-performance buildings cost more, creating a barrier for design teams pursuing aggressive high-performance outcomes. This study proposes an alternative integrated methodology to arrive at a lower perceived incremental cost for improved energy performance. The methodology is based on the use of energy simulations as means towards integrated design and cost estimation. Various points along the spectrum of integration are identified and characterized by the amount of design effort invested, the scheduling of effort, and relative energy performance of the resultant design. It includes a study of the interactions between building system parameters as they relate to capital costs. Several cost interactions amongst energy measures are found to be significant. The value of this approach is demonstrated with alternatives in a case study that shows the differences between perceived costs for energy measures along various points on the integration spectrum. These alternatives show design tradeoffs and identify how decisions would have been different with a standard costing approach. Areas of further research to make the methodology more robust are identified. Policy measures to encourage the integrated approach and reduce the barriers towards improved energy performance are discussed.

  3. Design and experimentally measure a high performance metamaterial filter

    Science.gov (United States)

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    Metamaterial filters are a promising class of optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that the perfect impedance matching condition between the metamaterial filter and free space leads to the transmission band. Measured results show that the proposed metamaterial filter achieves high transmission performance for both TM and TE polarization directions. Moreover, a high transmission rate is maintained even when the incident angle reaches 45°. Further measurements show that the transmission band can be expanded by optimizing structural parameters, and the central frequency of the transmission band can likewise be tuned. The physical mechanism behind the central frequency shift is explained by establishing an equivalent resonant circuit model.
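
    For a feel of the equivalent resonant circuit view, the snippet below evaluates the standard LC resonance formula f0 = 1/(2π√(LC)); the inductance and capacitance values are placeholders, not values extracted from the measured filter geometry.

```python
import numpy as np

# Resonant frequency of the equivalent LC circuit, f0 = 1 / (2*pi*sqrt(L*C)).
# Values are placeholders, not fitted to the paper's filter.
L = 2.0e-12   # equivalent inductance (H)
C = 1.5e-15   # equivalent capacitance (F)
f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
print(f"resonant frequency: {f0 / 1e12:.2f} THz")
```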

  4. Electromagnetic Modelling of MMIC CPWs for High Frequency Applications

    Science.gov (United States)

    Sinulingga, E. P.; Kyabaggu, P. B. K.; Rezazadeh, A. A.

    2018-02-01

    The theoretical electrical characteristics of components can be obtained through modelling using computer-aided design (CAD) simulation tools. If the simulation model provides the expected characteristics, the fabrication process of the Monolithic Microwave Integrated Circuit (MMIC) can be performed for experimental verification purposes, and improvements can be suggested before mass fabrication takes place. This research concentrates on the development of MMIC technology by providing accurate predictions of the characteristics of MMIC components using an improved electromagnetic (EM) modelling technique. The knowledge acquired from the modelling and characterisation process in this work can be adopted by circuit designers for various high frequency applications.

  5. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Directory of Open Access Journals (Sweden)

    Anisimov Vladimir

    2018-01-01

    Full Text Available In the paper the models and the algorithm for the optimal plan formation for the organization of the material and logistical processes of the high-rise construction project and their financial support are developed. The model is based on the representation of the optimization procedure in the form of a non-linear problem of discrete programming, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for the high-rise construction project.

  6. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Science.gov (United States)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In the paper the models and the algorithm for the optimal plan formation for the organization of the material and logistical processes of the high-rise construction project and their financial support are developed. The model is based on the representation of the optimization procedure in the form of a non-linear problem of discrete programming, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for the high-rise construction project.

  7. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO, created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The logical data model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information in small intervals of time. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful, and it is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also to the reliability associated with it. (author)

  8. Constructing and Validating High-Performance MIEC-SVM Models in Virtual Screening for Kinases: A Better Way for Actives Discovery.

    Science.gov (United States)

    Sun, Huiyong; Pan, Peichen; Tian, Sheng; Xu, Lei; Kong, Xiaotian; Li, Youyong; Dan Li; Hou, Tingjun

    2016-04-22

    The MIEC-SVM approach, which combines molecular interaction energy components (MIEC) derived from free energy decomposition with support vector machines (SVM), has been found effective in capturing the energetic patterns of protein-peptide recognition. However, the performance of this approach in identifying small molecule inhibitors of drug targets has not been well assessed and validated by experiments. Therefore, by combining different model construction protocols, the issues related to developing the best MIEC-SVM models were first explored for three kinase targets (ABL, ALK, and BRAF). For the investigated targets, the optimized MIEC-SVM models performed much better on the tested datasets than the models based on the default SVM parameters and Autodock. The proposed strategy was then utilized to screen the Specs database for potential inhibitors of the ALK kinase. The experimental results showed that the optimized MIEC-SVM model, which identified 7 actives with IC50 < 10 μM from 50 purchased compounds (a hit rate of 14%, with 4 in the nM range), performed much better than Autodock (3 actives with IC50 < 10 μM from 50 purchased compounds, a hit rate of 6%, with 2 in the nM range), suggesting that the proposed strategy is a powerful tool for structure-based virtual screening.
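
    A minimal stand-in for the SVM stage of MIEC-SVM is sketched below using scikit-learn: an RBF-kernel classifier trained on synthetic "interaction energy" feature vectors and scored by cross-validated AUC. The features, class balance, and fixed hyperparameters are all fabricated; the actual protocol derives MIECs from free energy decomposition and tunes the SVM parameters.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for MIEC feature vectors: one row per docked compound,
# one column per residue-ligand interaction energy term (values fabricated).
n_actives, n_decoys, n_terms = 60, 240, 40
X = np.vstack([
    rng.normal(-0.8, 1.0, size=(n_actives, n_terms)),   # actives: lower energies
    rng.normal(0.0, 1.0, size=(n_decoys, n_terms)),     # decoys
])
y = np.r_[np.ones(n_actives), np.zeros(n_decoys)]

# RBF-kernel SVM with feature scaling; C and gamma would be tuned by grid
# search in a real protocol rather than fixed as here.
model = make_pipeline(StandardScaler(),
                      SVC(C=10.0, gamma="scale", class_weight="balanced"))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```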

  9. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab

  10. Performance of high level waste forms and engineered barriers under repository conditions

    International Nuclear Information System (INIS)

    1991-02-01

    The IAEA initiated in 1977 a co-ordinated research programme on the "Evaluation of Solidified High-Level Waste Forms" which was terminated in 1983. As there was a continuing need for international collaboration in research on solidified high-level waste forms and spent fuel, the IAEA initiated a new programme in 1984. The new programme, besides including spent fuel and SYNROC, also placed greater emphasis on the effect of the engineered barriers of future repositories on the properties of the waste form. These engineered barriers included containers, overpacks, buffer and backfill materials etc. as components of the "near-field" of the repository. The Co-ordinated Research Programme on the Performance of High-Level Waste Forms and Engineered Barriers Under Repository Conditions had the objectives of promoting the exchange of information on the experience gained by different Member States in experimental performance data and technical model evaluation of solidified high level waste forms, components of the waste package and the complete waste management system under conditions relevant to final repository disposal. The programme includes studies on both irradiated spent fuel and glass and ceramic forms as the final solidified waste forms. The following topics were discussed: leaching of vitrified high-level wastes, modelling of glass behaviour in clay, salt and granite repositories, environmental impacts of radionuclide release, SYNROC use for high-level waste solidification, leachate-rock interactions, spent fuel disposal in deep geologic repositories and radionuclide release mechanisms from various fuel types, radiolysis and selective leaching correlated with matrix alteration. Refs, figs and tabs

  11. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  12. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  13. Radionuclide release rates from spent fuel for performance assessment modeling

    International Nuclear Information System (INIS)

    Curtis, D.B.

    1994-01-01

    In a scenario of aqueous transport from a high-level radioactive waste repository, the concentration of radionuclides in water in contact with the waste constitutes the source term for transport models, and as such represents a fundamental component of all performance assessment models. Many laboratory experiments have been done to characterize release rates and understand processes influencing radionuclide release rates from irradiated nuclear fuel. Natural analogues of these waste forms have been studied to obtain information regarding the long-term stability of potential waste forms in complex natural systems. This information from diverse sources must be brought together to develop and defend methods used to define source terms for performance assessment models. In this manuscript examples of measures of radionuclide release rates from spent nuclear fuel or analogues of nuclear fuel are presented. Each example represents a very different approach to obtaining a numerical measure and each has its limitations. There is no way to obtain an unambiguous measure of this or any parameter used in performance assessment codes for evaluating the effects of processes operative over many millennia. The examples are intended to suggest by example that in the absence of the ability to evaluate accuracy and precision, consistency of a broadly based set of data can be used as circumstantial evidence to defend the choice of parameters used in performance assessments

  14. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
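
    A toy version of the coupled performance-reliability idea can be sketched as follows: draw inverter failure times from an exponential distribution, accumulate downtime over the project life, and summarize availability across many realizations. The MTBF, repair time, and realization count are invented and are not PV-RPM inputs; the real model also couples these outages to the performance simulation.

```python
import numpy as np

rng = np.random.default_rng(7)

n_realizations, years = 2000, 25
hours = years * 8760
mtbf_h = 60_000.0          # mean time between inverter failures (h), invented
repair_h = 14 * 24.0       # repair duration (h), invented

lost_fraction = np.empty(n_realizations)
for k in range(n_realizations):
    t, downtime = 0.0, 0.0
    while True:
        t += rng.exponential(mtbf_h)          # run until the next failure
        if t >= hours:
            break
        downtime += min(repair_h, hours - t)  # clip repairs at end of life
        t += repair_h                         # unit is down during the repair
    lost_fraction[k] = downtime / hours

print(f"mean availability: {1.0 - lost_fraction.mean():.4f}")
print(f"90% interval of lost-energy fraction: "
      f"[{np.percentile(lost_fraction, 5):.4f}, {np.percentile(lost_fraction, 95):.4f}]")
```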

  15. The realistic performance achievable with mycobacterial automated culture systems in high and low prevalence settings

    Directory of Open Access Journals (Sweden)

    Klatser Paul R

    2010-04-01

    Full Text Available Abstract Background Diagnostic tests are generally used in situations with a similar pre-test probability of disease to where they were developed. When these tests are applied in situations with very different pre-test probabilities of disease, it is informative to model the likely implications of known characteristics of test performance in the new situation. This is the case for automated Mycobacterium tuberculosis (MTB) liquid culture systems for tuberculosis case detection, which were developed and are widely used in low burden settings but are only beginning to be applied on a large scale in high burden settings. Methods Here we model the performance of MTB liquid culture systems in high and low tuberculosis (TB) prevalence settings using detailed published data, concentrating on the likely frequency of cross-contamination events. Results Our model predicts that as the TB prevalence in the suspect population increases there is an exponential increase in the risk of MTB cross-contamination events expected in otherwise negative samples, even with equivalent technical performance of the laboratories. Quality control and strict cross-contamination measures become increasingly critical as the burden of MTB infection among TB suspects increases. Even under optimal conditions the realistically achievable specificity of these systems in high burden settings will likely be significantly below that obtained in low TB burden laboratories. Conclusions Liquid culture systems can play a valuable role in TB case detection in laboratories in high burden settings, but laboratory workers, policy makers and clinicians should be aware of the increased risks, independent of laboratory proficiency, of cross-contamination events in high burden settings.
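
    The qualitative argument can be reproduced with a few lines of probability arithmetic: if each positive sample in a batch carries a small chance of contaminating a negative one, the effective specificity falls as prevalence rises. The sketch below uses invented contamination and batch parameters, not the paper's published data.

```python
import numpy as np

# Why cross-contamination hurts more in high-burden settings: assume each
# positive sample processed in the same batch has a small, independent chance
# of contaminating a given negative sample. All numbers are illustrative.
p_contam_per_positive = 0.002   # chance one positive neighbour contaminates a sample
batch_size = 20                 # samples processed together
sensitivity = 0.95              # assumed constant across settings

for prevalence in (0.02, 0.10, 0.30, 0.50):
    positives_in_batch = prevalence * (batch_size - 1)
    # probability an otherwise-negative sample yields a false-positive culture
    p_false_pos = 1.0 - (1.0 - p_contam_per_positive) ** positives_in_batch
    specificity = 1.0 - p_false_pos
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    print(f"prevalence {prevalence:4.0%}: specificity {specificity:.4f}, PPV {ppv:.3f}")
```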

  16. Development of structural technology for a high performance spacer grid

    International Nuclear Information System (INIS)

    Song, Kee Nam; Kim, H. K.; Kang, H. S.

    2003-03-01

    A spacer grid in an LWR fuel assembly is a key structural component that supports the fuel rods and enhances the heat transfer from the fuel rod to the coolant. The main research items in this work are the development of inherent, high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 14 different spacer grid candidates were invented and filed for domestic and US patents, and six of the patented candidates were chosen for further study. 2. Two kinds of spacer grids were finally selected for the advanced LWR fuel after detailed performance tests of the candidates and of commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities were set up and the relevant test technologies established. 4. Mechanical/structural analysis models and technology for spacer grid performance were developed, and the analysis results were compared with the test results to enhance the reliability of the models

  17. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce the basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMCs) and continuous time Markov chains (CTMCs). We also discuss how to find the steady state distributions of these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique, limiting probab
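
    As a small worked example of the steady-state computation the book covers, the snippet below solves pi = pi P for a made-up three-state DTMC by replacing one balance equation with the normalization constraint.

```python
import numpy as np

# Steady-state distribution of a DTMC: solve pi = pi @ P with sum(pi) = 1 by
# replacing one (redundant) balance equation with the normalization constraint.
# The chain below is an invented 3-state example.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])  # balance eqs + normalization
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("steady state:", pi)   # pi @ P equals pi up to round-off
print("check      :", pi @ P)
```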

  18. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high

  19. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  20. The application of DEA model in enterprise environmental performance auditing

    Science.gov (United States)

    Li, F.; Zhu, L. Y.; Zhang, J. D.; Liu, C. Y.; Qu, Z. G.; Xiao, M. S.

    2017-01-01

    As a part of society, enterprises have an inescapable responsibility for environmental protection and governance. This article discusses the feasibility and necessity of enterprise environmental performance auditing and uses a DEA model to calculate the environmental performance of Haier as an example. Most of the reference data are selected and sorted from Haier's environmental reports published in 2008, 2009, 2011 and 2015, with some data taken from published articles and fieldwork. All results are calculated with the DEAP software and have high credibility. The analysis results of this article can give corporate managers an idea of how to use environmental performance auditing to adjust their corporate environmental investment quotas and change their companies' environmental strategies.
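
    A hedged sketch of the DEA computation is given below: the input-oriented CCR model in multiplier form, solved as a linear program with scipy. The five decision making units and their input/output data are fabricated and are not Haier's reported figures.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA (multiplier form) via linear programming. Inputs
# (e.g., energy use, emissions treated as inputs) and outputs per decision
# making unit (DMU) are fabricated for illustration.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs

def ccr_efficiency(o):
    """Efficiency of DMU o: max u.y_o s.t. v.x_o = 1 and u.y_j - v.x_j <= 0."""
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.r_[-Y[o], np.zeros(n_in)]            # linprog minimizes, so negate u.y_o
    A_ub = np.hstack([Y, -X])                   # u.y_j - v.x_j <= 0 for every DMU j
    b_ub = np.zeros(len(X))
    A_eq = np.r_[np.zeros(n_out), X[o]][None]   # v.x_o = 1 (normalization)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency {ccr_efficiency(o):.3f}")
```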

  1. International Conference on Modern Mathematical Methods and High Performance Computing in Science and Technology

    CERN Document Server

    Srivastava, HM; Venturino, Ezio; Resch, Michael; Gupta, Vijay

    2016-01-01

    The book discusses important results in modern mathematical models and high performance computing, such as applied operations research, simulation of operations, statistical modeling and applications, invisibility regions and regular meta-materials, unmanned vehicles, modern radar techniques/SAR imaging, satellite remote sensing, coding, and robotic systems. Furthermore, it is valuable as a reference work and as a basis for further study and research. All contributing authors are respected academicians, scientists and researchers from around the globe. All the papers were presented at the international conference on Modern Mathematical Methods and High Performance Computing in Science & Technology (M3HPCST 2015), held at Raj Kumar Goel Institute of Technology, Ghaziabad, India, from 27–29 December 2015, and peer-reviewed by international experts. The conference provided an exceptional platform for leading researchers, academicians, developers, engineers and technocrats from a broad range of disciplines ...

  2. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  3. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    Science.gov (United States)

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  4. High-performance coupled poro-hydro-mechanical models to resolve fluid escape pipes

    Science.gov (United States)

    Räss, Ludovic; Makhnenko, Roman; Podladchikov, Yury

    2017-04-01

    Field observations and laboratory experiments exhibit inelastic deformation features arising in many coupled settings relevant to geo-applications. These irreversible deformations and their specific patterns suggest a rather ductile or brittle mechanism, such as viscous creep or micro-cracking, taking place on both geological (long) and human (short) timescales. In order to understand the underlying mechanisms responsible for these deformation features, there is a current need to accurately resolve the non-linearities inherent to strongly coupled physical processes. Among the large variety of modelling tools and software packages available in the community today, very few are capable of efficiently solving coupled systems with high accuracy in both space and time while running efficiently on modern hardware. Here, we propose a robust framework to solve coupled multi-physics hydro-mechanical processes at very high spatial and temporal resolution in both two and three dimensions. Our software relies on the finite-difference method, and a pseudo-transient scheme is used to converge to the implicit solution of the system of poro-visco-elasto-plastic equations at each physical time step. The rheology, including viscosity estimates for major reservoir rock types, is inferred from novel lab experiments and confirms the ease of flow of sedimentary rocks. Our results propose a physical mechanism responsible for the generation of high-permeability pathways in fluid-saturated porous media and predict their propagation at rates observable on operational timescales. Finally, our software scales linearly on more than 5000 GPUs.
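
    The pseudo-transient idea is to add an artificial "pseudo-time" derivative to each equation and iterate an explicit update until the residual vanishes, at which point the implicit solution of the physical time step is recovered. Below is a minimal sketch for a single nonlinear diffusion equation, standing in for the far richer coupled poro-visco-elasto-plastic system of the paper; practical implementations add damping/acceleration to converge much faster.

```python
# Pseudo-transient sketch: one implicit (backward Euler) step of
# dH/dt = d/dx(H^n dH/dx), converged by iterating in pseudo-time.
import numpy as np

nx, lx = 128, 10.0
dx = lx / nx
x = np.linspace(dx / 2, lx - dx / 2, nx)
H = 1.0 + np.exp(-(x - lx / 2) ** 2)      # initial condition
H_old = H.copy()
dt, npow, tol = 0.1, 3.0, 1e-8            # physical step, nonlinearity, tolerance

for it in range(200_000):
    D = 0.5 * (H[1:] ** npow + H[:-1] ** npow)       # face-centred diffusivity
    q = -D * np.diff(H) / dx                          # fluxes at interior faces
    divq = np.diff(q, prepend=0.0, append=0.0) / dx   # zero-flux boundaries
    R = -(H - H_old) / dt - divq                      # residual of implicit step
    dtau = dx**2 / (2.0 * H.max() ** npow + dx**2 / dt)  # stable pseudo-step
    H += dtau * R                                     # explicit pseudo-time update
    if np.max(np.abs(R)) < tol:
        print(f"converged in {it + 1} pseudo-time iterations")
        break
```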

  5. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.
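
    A performance model for FMM typically sums analytic work estimates per phase over the tree levels, with the near-field (P2P) and far-field (M2L) phases dominating. The toy sketch below illustrates that style of accounting; all machine and algorithm parameters are assumptions for illustration, not the thesis's fitted model.

```python
# Toy analytic flop model for a uniform-octree FMM (sketch, not the thesis model).
import math

def fmm_flops(N, q=64, p=10):
    """N particles, q particles per leaf cell, multipole expansion order p."""
    leaves = max(1, N // q)
    levels = max(1, round(math.log(leaves, 8)))  # depth of a uniform octree
    terms = p * p                                # O(p^2) expansion coefficients
    p2p = leaves * 27 * q * q * 20               # 27 near neighbours, ~20 flops/pair
    m2l, cells = 0, 1
    for _ in range(levels):
        cells *= 8
        m2l += cells * 189 * terms * terms       # <= 189 well-separated interactants
    return p2p, m2l

peak = 1e12                                      # assumed 1 Tflop/s effective per node
for N in (10**6, 10**8, 10**9):
    p2p, m2l = fmm_flops(N)
    print(f"N={N:.0e}: P2P~{p2p:.2e}, M2L~{m2l:.2e}, time/node~{(p2p + m2l) / peak:.1f} s")
```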

  6. Generalized Characterization Methodology for Performance Modelling of Lithium-Ion Batteries

    DEFF Research Database (Denmark)

    Stroe, Daniel Loan; Swierczynski, Maciej Jozef; Stroe, Ana-Irina

    2016-01-01

    Lithium-ion (Li-ion) batteries are complex energy storage devices with their performance behavior highly dependent on the operating conditions (i.e., temperature, load current, and state-of-charge (SOC)). Thus, in order to evaluate their techno-economic viability for a certain application, detailed information about Li-ion battery performance behavior becomes necessary. This paper proposes a comprehensive seven-step methodology for laboratory characterization of Li-ion batteries, in which the battery’s performance parameters (i.e., capacity, open-circuit voltage (OCV), and impedance) are determined and their dependence on the operating conditions is obtained. Furthermore, this paper proposes a novel hybrid procedure for parameterizing the batteries’ equivalent electrical circuit (EEC), which is used to emulate the batteries’ dynamic behavior. Based on this novel parameterization procedure, the performance model...
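
    The equivalent electrical circuit referred to here is commonly a Thevenin-type model: an OCV source in series with an ohmic resistance and one or more RC pairs. Below is a minimal sketch of a single-RC version with made-up parameter values; in a real model, R0, R1, C1 and OCV are functions of SOC and temperature, which is exactly what the characterization methodology determines.

```python
# Minimal 1-RC Thevenin equivalent-circuit sketch of a Li-ion cell.
# Parameters are illustrative, not values from the paper's procedure.
import numpy as np

def simulate_cell(i_load, dt, capacity_ah=2.5, soc0=0.9,
                  r0=0.015, r1=0.020, c1=2000.0):
    soc, v_rc = soc0, 0.0
    v_out = []
    for i in i_load:                       # positive current = discharge
        soc -= i * dt / (capacity_ah * 3600.0)
        ocv = 3.0 + 1.2 * soc              # crude linear OCV(SOC) stand-in
        # RC branch: dv/dt = -v/(R1*C1) + i/C1 (exact exponential update)
        a = np.exp(-dt / (r1 * c1))
        v_rc = a * v_rc + r1 * (1.0 - a) * i
        v_out.append(ocv - r0 * i - v_rc)
    return np.array(v_out)

dt = 1.0                                   # 1 s steps
pulse = np.concatenate([np.zeros(60), 5.0 * np.ones(300), np.zeros(240)])
v = simulate_cell(pulse, dt)
print(f"rest {v[0]:.3f} V, under load {v[300]:.3f} V, recovery {v[-1]:.3f} V")
```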

  7. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
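
    Thermal-model verification of this kind compares the code's computed temperatures against an independent reference. The sketch below mirrors that idea on a simplified problem: a control-volume solution of steady radial conduction in a fuel pellet checked against the closed-form parabolic profile. Parameter values are illustrative, not FRAPCON input.

```python
# Verification sketch: finite-difference radial conduction vs. analytic solution.
import numpy as np

k, qvol, R, Ts = 3.0, 3e8, 0.005, 600.0   # W/(m K), W/m^3, m, K
n = 50
dr = R / n
rc = (np.arange(n) + 0.5) * dr            # cell centres
rf = np.arange(n + 1) * dr                # cell faces (rf[0]=0: symmetry axis)

A = np.zeros((n, n))
b = -qvol * rc * dr                       # heat generation per unit cell
for i in range(n):
    if i > 0:                             # conduction through inner face
        w = k * rf[i] / dr
        A[i, i - 1] += w; A[i, i] -= w
    if i < n - 1:                         # conduction through outer face
        e = k * rf[i + 1] / dr
        A[i, i + 1] += e; A[i, i] -= e
    else:                                 # surface cell: T = Ts at r = R
        e = k * rf[n] / (dr / 2)
        A[i, i] -= e; b[i] -= e * Ts

T = np.linalg.solve(A, b)
T_exact = Ts + qvol * (R**2 - rc**2) / (4 * k)
print(f"max |T_fd - T_exact| = {np.max(np.abs(T - T_exact)):.3f} K")
```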

  8. Governance among Malaysian high performing companies

    Directory of Open Access Journals (Sweden)

    Asri Marsidi

    2016-07-01

    Well-performing companies have always been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as able to influence corporate performance, owing to the likelihood of meeting the variety of needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high-performing companies in Malaysia.

  9. Higs-instrument: design and demonstration of a high performance gas concentration imager

    Science.gov (United States)

    Verlaan, A. L.; Klop, W. A.; Visser, H.; van Brug, H.; Human, J.

    2017-09-01

    Climate change and environmental conditions are high on the political agenda of international governments. Laws and regulations are being set up all around the world to improve air quality and to reduce environmental impact. The growth of a number of trace gases, including CO2, methane and NOx, is of particular interest due to their environmental impact. The regulations are based on both models and measurements of the trends of those trace gases over the years. Now that the regulations are in place, enforcement, and therewith measurement, becomes more and more important. Instruments enabling high spectral and spatial resolution as well as highly accurate measurements of trace gases are required to deliver the necessary inputs. Nowadays those measurements are usually performed by space-based spectrometers. The requirement for high spectral resolution and measurement accuracy significantly increases the size of the instruments; as a result, the instrument and satellite become very expensive to develop and to launch. Specialized instruments with a small volume and the required performance offer significant advantages in both cost and performance. Huib's Innovative Gas Sensor (HIGS, named after its inventor Huib Visser), currently being developed at TNO, is an instrument that achieves exactly that. Designed to measure only a single gas concentration, as opposed to deriving it from a spectrum, it achieves high performance within a small design volume. The instrument enables instantaneous imaging of the distribution of the selected gas. An instrument demonstrator has been developed for NO2 detection. Laboratory measurements proved the measurement technique to be successful, and an on-sky measurement campaign is in preparation. This paper addresses both the instrument design and the demonstrated performance.

  10. Performance-Based Task Assessment of Higher-Order Proficiencies in Redesigned STEM High Schools

    Science.gov (United States)

    Ernst, Jeremy V.; Glennie, Elizabeth; Li, Songze

    2017-01-01

    This study explored student abilities in applying conceptual knowledge when presented with structured performance tasks. Specifically, the study gauged proficiency in higher-order applications of students enrolled in earth and environmental science or biology. The student sample was drawn from a Redesigned STEM high school model where a tested…

  11. Wave and Wind Model Performance Metrics Tools

    Science.gov (United States)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to assuring predictability of ocean surface wave and surf conditions in support of global Naval operations. Constant verification and validation of model performance are equally essential to assure the progress of model development and to maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. As observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials such as the planned experiment, the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of altimeter wind and wave data accumulated over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided detailed performance of the wind and wave models by using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index summarizing model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with netCDF standards applicable to all model output, makes possible the fusion of these data and direct model verification. Procedures were also developed for the accumulation of match-ups of modelled and observed parameters to form a database.
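
    The summary statistics named here (slope, correlation, scatter index) are simple functions of the matched model-observation pairs. A short sketch, with made-up match-up arrays standing in for real altimeter/buoy collocations:

```python
# Common model-vs-observation verification statistics for wave/wind match-ups.
import numpy as np

obs = np.array([1.2, 2.5, 3.1, 0.8, 1.9, 2.7, 3.5, 1.4])   # observed Hs (m)
mod = np.array([1.1, 2.8, 2.9, 1.0, 2.1, 2.5, 3.8, 1.3])   # modelled Hs (m)

bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))
# Scatter index: centred (bias-removed) RMSE normalized by the observed mean.
si = np.sqrt(np.mean(((mod - mod.mean()) - (obs - obs.mean())) ** 2)) / obs.mean()
corr = np.corrcoef(mod, obs)[0, 1]
slope = np.sum(mod * obs) / np.sum(obs ** 2)   # regression through the origin

print(f"bias={bias:+.3f} m  rmse={rmse:.3f} m  SI={si:.3f}  r={corr:.3f}  slope={slope:.3f}")
```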

  12. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using a genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information, of 44 electronics and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
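
    The core of such a model is an SVR whose hyperparameters (C, epsilon, gamma) are tuned by a genetic algorithm against validation error. A compact sketch of that loop, with synthetic data in place of the paper's financial and patent indicators and a deliberately tiny GA:

```python
# SVR with a minimal genetic algorithm tuning (C, epsilon, gamma) — a sketch
# of the approach; synthetic data replaces the financial/patent indicators.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                        # 6 made-up indicators
y = X[:, 0] * 2 - X[:, 1] ** 2 + 0.1 * rng.normal(size=300)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(genes):
    C, eps, gamma = np.exp(genes)                    # genes live in log-space
    model = SVR(C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    return -mean_squared_error(y_va, model.predict(X_va))

pop = rng.uniform(-3, 3, size=(20, 3))               # log C, log eps, log gamma
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the best half
    parents = elite[rng.integers(0, 10, size=(20, 2))]
    pop = parents.mean(axis=1) + rng.normal(0, 0.3, size=(20, 3))  # crossover+mutation

best = pop[np.argmax([fitness(g) for g in pop])]
print("best (C, epsilon, gamma):", np.exp(best))
```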

  13. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    Science.gov (United States)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to the underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance, while reducing the code changes required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is constrained only by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open-source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes.
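
    The in-processor "kernels" of a pseudospectral code are typically FFT-based: differentiation becomes multiplication by ik in spectral space, with global transposes needed only to bring each transform direction into local memory. A single-node sketch of that core kernel (the distributed transposes an SpF-like framework would manage are not shown):

```python
# Core pseudospectral kernel: FFT, multiply by ik, inverse FFT.
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(3 * x) * np.cos(x)

k = np.fft.fftfreq(n, d=L / (2 * np.pi * n))     # wavenumbers 0, 1, ..., -1
du = np.fft.ifft(1j * k * np.fft.fft(u)).real    # spectral derivative

du_exact = 3 * np.cos(3 * x) * np.cos(x) - np.sin(3 * x) * np.sin(x)
print(f"max error = {np.max(np.abs(du - du_exact)):.2e}")   # ~machine precision
```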

  14. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  15. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G; Roedhammer, P; Wildner, H [eds.]

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  16. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models have become more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in the case of runoff but not for soil moisture. Furthermore, the most sophisticated model, PREVAH, shows an added value compared to the HBV model only in the case of soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for the prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  17. High-Speed, High-Performance DQPSK Optical Links with Reduced Complexity VDFE Equalizers

    Directory of Open Access Journals (Sweden)

    Maki Nanou

    2017-02-01

    Optical transmission technologies optimized for optical network segments sensitive to power consumption and cost combine modulation formats with direct detection technologies. Specifically, non-return-to-zero differential quaternary phase shift keying (NRZ-DQPSK) in deployed fiber plants, combined with high-performance, low-complexity electronic equalizers to compensate residual impairments at the receiver end, can prove a viable solution for high-performance, high-capacity optical links. Joint processing of the constructive and the destructive signals at the single-ended DQPSK receiver provides improved performance compared to the balanced configuration, however at the expense of higher hardware requirements, a fact that may not be neglected, especially in the case of high-speed optical links. To overcome this bottleneck, the use of partially joint constructive/destructive DQPSK equalization is investigated in this paper. Symbol-by-symbol equalization is performed by means of Volterra decision-feedback-type equalizers, driven by a reduced subset of signals selected from the constructive and the destructive ports of the optical detectors. The proposed approach offers a low-complexity alternative for electronic equalization without sacrificing much of the performance compared to the fully-deployed counterpart. The efficiency of the proposed equalizers is demonstrated by means of computer simulation in a typical optical transmission scenario.
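
    A decision-feedback equalizer of this family updates feedforward taps on received samples and feedback taps on past symbol decisions; the Volterra variant adds nonlinear product terms to the feedforward input. Below is a minimal linear LMS-trained DFE sketch on a toy binary channel; the full Volterra kernels, DQPSK detection, and the constructive/destructive port selection of the paper are omitted.

```python
# Minimal LMS decision-feedback equalizer on a toy ISI channel (sketch).
import numpy as np

rng = np.random.default_rng(1)
sym = rng.choice([-1.0, 1.0], size=5000)
chan = np.array([1.0, 0.45, 0.2])                 # toy ISI channel
rx = np.convolve(sym, chan)[: len(sym)] + 0.05 * rng.normal(size=len(sym))

nf, nb, mu = 5, 3, 0.01                           # FF taps, FB taps, LMS step
wf, wb = np.zeros(nf), np.zeros(nb)
fbuf, dbuf = np.zeros(nf), np.zeros(nb)
errs = 0
for n in range(len(sym)):
    fbuf = np.roll(fbuf, 1); fbuf[0] = rx[n]      # newest received sample
    y = wf @ fbuf - wb @ dbuf                     # equalizer output
    d = 1.0 if y >= 0 else -1.0                   # symbol decision
    e = sym[n] - y                                # training with known symbols
    wf += mu * e * fbuf                           # LMS feedforward update
    wb -= mu * e * dbuf                           # LMS feedback update
    dbuf = np.roll(dbuf, 1); dbuf[0] = d
    if n > 1000 and d != sym[n]:
        errs += 1
print(f"decision errors after convergence: {errs} / {len(sym) - 1001}")
```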

  18. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development, and particularly the verification, of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die, and laboratory experiments performed at Ohio State.

  19. Design of JMTR high-performance fuel element

    International Nuclear Information System (INIS)

    Sakurai, Fumio; Shimakawa, Satoshi; Komori, Yoshihiro; Tsuchihashi, Keiichiro; Kaminaga, Fumito

    1999-01-01

    For test and research reactors, conversion of the core to low-enriched uranium fuel is required from the viewpoint of non-proliferation of nuclear weapon material. Improvements in core performance are also required in order to respond to recent advanced utilization needs. To meet both requirements, a high-performance fuel element of high uranium density, with Cd wires as burnable absorbers, was adopted for the JMTR core conversion to low-enriched uranium fuel. Examination of the suitability of few-group constants generated by a conventional transport-theory calculation with an isotropic scattering approximation, when used in a few-group diffusion-theory core calculation for the design of the JMTR high-performance fuel element, made it clear that the depletion of the Cd wires could not be predicted accurately using group constants generated by the conventional method. Therefore, a new method of generating few-group constants that takes the incident neutron spectrum at the Cd wire into consideration was developed. As a result, a high-performance fuel element best suited to the JMTR was designed successfully, allowing the operating duration without refueling to be extended to almost twice as long and offering an irradiation field with constant neutron flux. (author)
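
    Generating few-group constants amounts to flux-weighted condensation of fine-group cross sections; the improvement described here is to use the actual incident spectrum at the Cd wire as the weight rather than a generic one. A toy sketch of that condensation step, with illustrative numbers rather than JMTR data:

```python
# Flux-weighted group condensation: sigma_G = sum(sigma_i*phi_i)/sum(phi_i).
# The spectra and cross sections below are illustrative, not JMTR data.
import numpy as np

sigma_fine = np.array([1.2, 2.0, 5.5, 30.0, 90.0])   # fine-group sigma_a (barn)
phi_generic = np.array([1.0, 0.8, 0.5, 0.3, 0.2])    # generic weighting spectrum
phi_cd_wire = np.array([1.0, 0.9, 0.6, 0.1, 0.02])   # spectrum near the Cd wire
                                                     # (depressed at thermal energies)
groups = [slice(0, 2), slice(2, 5)]                  # collapse 5 -> 2 groups

def collapse(sigma, phi):
    return np.array([np.sum(sigma[g] * phi[g]) / np.sum(phi[g]) for g in groups])

print("2-group sigma, generic spectrum :", collapse(sigma_fine, phi_generic))
print("2-group sigma, Cd-wire spectrum :", collapse(sigma_fine, phi_cd_wire))
```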

  20. Performance prediction for a magnetostrictive actuator using a simplified model

    Science.gov (United States)

    Yoo, Jin-Hyeong; Jones, Nicholas J.

    2018-03-01

    Iron-gallium alloys (Galfenol) are promising transducer materials that combine high magnetostriction, desirable mechanical properties, high permeability, and a wide operational temperature range. Most of all, the material is capable of operating under tensile stress and is relatively resistant to shock. These materials are generally characterized using a solid, cylindrically-shaped specimen under controlled compressive stress and magnetization conditions. Because the magnetostriction strongly depends on both the applied stress and the magnetization, the characterization of the material is usually conducted under controlled conditions, so that each parameter is varied independently of the other. However, in a real application the applied stress and magnetization will not remain constant during operation. Even though the controlled characterization measurement gives insight into standard material properties, the use of this data in an application, while possible, is not straightforward. This study presents an engineering modeling methodology for magnetostrictive materials based on a piezoelectric-type governing equation. The model suggests phenomenological, nonlinear, three-dimensional functions for the strain and magnetic flux density responses as functions of applied stress and magnetic field. Load-line performance as a function of maximum magnetic field input was simulated based on the model. To verify the modeling performance, a polycrystalline magnetostrictive rod (Fe-Ga alloy, Galfenol) was characterized under compressive loads using a dead-weight test setup, with strain gages on the rod and a magnetic field driving coil around the sample. The magnetic flux density through the Galfenol rod was measured with a sensing coil; the compressive loads were measured using a load cell at the bottom of the Galfenol rod. The experimental results are compared with simulation results using the suggested model, showing good agreement.
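
    The piezoelectric-analogue ("piezomagnetic") governing equations couple strain S and flux density B to stress and field through a compliance, a permeability, and a coupling term; nonlinearity can be folded in phenomenologically, e.g. by letting the normalized magnetization saturate. A sketch with illustrative coefficients, not the paper's fitted Galfenol values:

```python
# Phenomenological magnetostrictive constitutive sketch (piezomagnetic form):
#   strain S = s_H * sigma + lambda_s * m^2,   B = mu0 * H + J_s * m,
# where m = tanh(H / H_scale) is a saturating normalized magnetization.
# All coefficients are illustrative, not fitted material data.
import numpy as np

s_H = 1.0 / 60e9           # compliance at constant H (1/Pa), ~60 GPa modulus
lam_s = 250e-6             # saturation magnetostriction
J_s = 1.5                  # saturation polarization (T)
mu0 = 4e-7 * np.pi
H0, sig0 = 6e3, 80e6       # field and stress scales (assumed)

def response(sigma, H):
    """Strain and flux density for stress sigma (Pa, compression < 0), field H (A/m)."""
    # Compression raises the field needed to rotate moments (stress anisotropy).
    h_scale = H0 * (1.0 + abs(min(sigma, 0.0)) / sig0)
    m = np.tanh(H / h_scale)               # normalized magnetization
    strain = s_H * sigma + lam_s * m**2    # magnetostriction scales as m^2
    B = mu0 * H + J_s * m
    return strain, B

# Load-line style sweep: one drive field at several compressive preloads.
for sigma in (-20e6, -50e6, -80e6):
    eps, B = response(sigma, 10e3)
    print(f"sigma={sigma/1e6:+.0f} MPa: strain={eps*1e6:+.1f} ppm, B={B:.2f} T")
```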