WorldWideScience

Sample records for high model performance

  1. Utilities for high performance dispersion model PHYSIC

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed as part of the study on a high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  2. High-performance phase-field modeling

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex, nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
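
    For context, the conserved model named above can be stated compactly. In a standard form of the Cahn-Hilliard equation (not spelled out in the abstract), the order parameter \phi evolves so that a free energy decreases monotonically:

      F(\phi) = \int_\Omega \Big( f(\phi) + \tfrac{\epsilon^2}{2} |\nabla \phi|^2 \Big) \, d\Omega, \qquad \frac{\partial \phi}{\partial t} = \nabla \cdot \big( M \nabla \mu \big), \quad \mu = f'(\phi) - \epsilon^2 \nabla^2 \phi,

    with \frac{dF}{dt} = -\int_\Omega M |\nabla \mu|^2 \, d\Omega \le 0. The fourth-order spatial operator is what makes the high-continuity basis functions of Isogeometric Analysis attractive, and the discrete analogue of this dissipation inequality is the property the proposed schemes guarantee.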

  3. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    …and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  4. Fracture modelling of a high performance armour steel

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high-performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulations the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments at both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, ongoing work will further characterise the fracture behaviour of Armox 500T.
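
    The abstract does not name the fracture criterion used; as an illustration of the kind of triaxiality- and rate-dependent model such notched-specimen data are typically fitted to, a Johnson-Cook-type failure strain (temperature term omitted) reads

      \varepsilon_f = \big[ D_1 + D_2 \exp(D_3 \sigma^*) \big] \big[ 1 + D_4 \ln \dot{\varepsilon}^* \big],

    where \sigma^* is the stress triaxiality (mean stress over equivalent stress), \dot{\varepsilon}^* a normalized strain rate, and D_1 to D_4 are calibrated constants; the reported finding that multi-axial loading dominates over deformation rate would correspond to a small D_4.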

  5. A performance model for the communication in fast multipole methods on high-performance computing platforms

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    …model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization…

  6. Modeling and design of a high-performance hybrid actuator

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that is able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts and dips. For model validation, measurements of the free dynamic response per unit voltage and the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model; the results reveal good agreement between model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic…

  7. Behavioral Model of High Performance Camera for NIF Optics Inspection

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that models the behavior of the high performance Spectral Instruments 1000 series charge-coupled device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time, thus increasing the total number of experiments NIF can conduct during its 30-year lifetime. Development of these rapid processes necessitates extensive offline software automation, especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows many instances to run concurrently, letting multiple developers test their software at the same time. Thus it is beneficial to construct separate software that exactly mimics the behavior and response of the real SI-1000 camera.

  8. Simulation model of a twin-tail, high performance airplane

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six-degree-of-freedom rigid-body equations, an engine model, sensors, and first-order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner; elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
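
    As a sketch of the table look-up scheme described (illustrative only; the breakpoints and coefficient data below are hypothetical placeholders, not the actual F/A-18 database), a coefficient table indexed by angle of attack and sideslip can be interpolated bilinearly:

      import numpy as np

      # Hypothetical breakpoints spanning the ranges quoted in the abstract.
      alpha_bp = np.linspace(-10.0, 90.0, 11)  # angle of attack, deg
      beta_bp = np.linspace(-20.0, 20.0, 5)    # sideslip, deg
      cl_table = np.zeros((alpha_bp.size, beta_bp.size))  # placeholder coefficients

      def lookup(table, alpha, beta):
          """Bilinear interpolation of a wind-tunnel coefficient table."""
          i = np.clip(np.searchsorted(alpha_bp, alpha) - 1, 0, alpha_bp.size - 2)
          j = np.clip(np.searchsorted(beta_bp, beta) - 1, 0, beta_bp.size - 2)
          ta = (alpha - alpha_bp[i]) / (alpha_bp[i + 1] - alpha_bp[i])
          tb = (beta - beta_bp[j]) / (beta_bp[j + 1] - beta_bp[j])
          return ((1 - ta) * (1 - tb) * table[i, j] + ta * (1 - tb) * table[i + 1, j]
                  + (1 - ta) * tb * table[i, j + 1] + ta * tb * table[i + 1, j + 1])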

  9. High-Level Performance Modeling of SAR Systems

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
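
    As an illustration of the design-parameters-in, metrics-out style of evaluation described (a generic single-pulse radar-equation estimate; this is not SAUSAGE's actual formulation), one such metric could be sketched as:

      import math

      def point_target_snr(p_t, gain, wavelength, rcs, range_m,
                           bandwidth, sys_temp=290.0, losses=1.0):
          """Single-pulse SNR for a point target from the classic radar equation."""
          k = 1.380649e-23  # Boltzmann constant, J/K
          signal = p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi)**3 * range_m**4)
          noise = k * sys_temp * bandwidth * losses
          return signal / noise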

  10. DiamondTorre Algorithm for High-Performance Wave Modeling

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of these algorithms are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
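
    A minimal reference implementation of the underlying scheme (the plain second-order cross-stencil update, before any DiamondTorre-style traversal reordering; array shapes and periodic boundaries are illustrative assumptions) might look like:

      import numpy as np

      def step_wave_2d(u, u_prev, c, dt, dx):
          """One leapfrog step of the 2-D wave equation, periodic boundaries."""
          lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
          return 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap

    Traditional implementations sweep such a stencil one whole time step at a time, which is exactly the memory-bandwidth-bound access pattern the DiamondTorre traversal is designed to avoid.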

  11. High-performance speech recognition using consistency modeling

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems where there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones and additive noise. We were able to either develop new technologies, or transfer and evaluate existing ones, that adapted our baseline genonic HMM recognizer to such difficult conditions.

  12. A performance model for the communication in fast multipole methods on high-performance computing platforms

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics, but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns makes it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM, and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems this objective is often out of reach, or of a difficulty out of proportion to its benefit, when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
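
    A minimal sketch of the latency-bandwidth style of model described (the paper's actual model additionally accounts for network topology; the contention-penalty form below is an assumption for illustration only):

      def message_time(nbytes, latency, bandwidth, cores_per_node=1, penalty=0.0):
          """Point-to-point transfer time T = alpha + m/B, inflated when
          several cores contend for one shared network adapter."""
          contention = 1.0 + penalty * (cores_per_node - 1)
          return latency + contention * nbytes / bandwidth

    Summing such terms over the exchange phases of the FMM tree, level by level, yields the kind of aggregate communication estimate that can be compared against measured timings.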

  13. High performance modeling of atmospheric re-entry vehicles

    Martin, Alexandre; Scalabrin, Leonardo C; Boyd, Iain D

    2012-01-01

    Re-entry vehicles designed for space exploration are usually equipped with thermal protection systems made of ablative material. In order to properly model and predict the aerothermal environment of the vehicle, it is imperative to account for the gases produced by ablation processes. In the case of charring ablators, where an inner resin is pyrolyzed at a relatively low temperature, the composition of the gas expelled into the boundary layer is complex and may lead to thermo-chemical reactions that cannot be captured with simple flow chemistry models. In order to obtain better predictions, an appropriate gas flow chemistry model needs to be included in the CFD calculations. Using a recently developed chemistry model for ablating carbon-phenolic-in-air species, a CFD calculation of the Stardust re-entry at 71 km is presented. The code used for that purpose has been designed to take advantage of the nature of the problem and therefore remains very efficient when a high number of chemical species is involved. The CFD results demonstrate the need for such a chemistry model when modeling the flow field around an ablative material. Modeling of the nonequilibrium radiation spectra is also presented and compared to the experimental data obtained during the Stardust re-entry by the Echelle instrument. The predicted emission from the CN lines compares quite well with the experimental results, demonstrating the validity of the current approach.

  14. Thermal modelling of PV module performance under high ambient temperatures

    Diarra, D.C.; Harrison, S.J. [Queen's Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering, Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering

    2005-07-01

    When predicting the performance of photovoltaic (PV) generators, the actual performance is typically lower than results of tests conducted under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cell and other multilayer components. The increase in temperature translates into a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a Tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.
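
    A common first-order shortcut for the cell-temperature effect described (the standard NOCT correlation and a linear efficiency derating, with assumed parameter values; the paper's EES model resolves the full multilayer energy balance instead) is:

      def cell_temperature(t_ambient, irradiance, noct=45.0):
          """NOCT estimate; NOCT is measured at 800 W/m^2 and 20 C ambient."""
          return t_ambient + (noct - 20.0) / 800.0 * irradiance

      def module_efficiency(t_cell, eta_ref=0.15, beta=0.0045, t_ref=25.0):
          """Linear derating of conversion efficiency with cell temperature."""
          return eta_ref * (1.0 - beta * (t_cell - t_ref))

    With, say, 35 C ambient and 1000 W/m^2, this estimate gives a cell temperature near 66 C and roughly an 18% relative efficiency loss, consistent with the gap between field and standard-test-condition performance discussed above.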

  15. Algorithms and Methods for High-Performance Model Predictive Control

    Frison, Gianluca

    …routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. Regarding the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization is proposed, aiming at improving the computational performance in the case of small matrices. Regarding the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity.

  16. Electromagnetic Modeling of Human Body Using High Performance Computing

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of wirelessly powering implanted devices through coupling from external sources. The parallel electromagnetics code suite ACE3P, developed at SLAC National Accelerator Laboratory, is based on the finite element method for high-fidelity accelerator simulation and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  17. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry-lab simulation practice. After a warm-up period of 2 h, tasks were performed on high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches, with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face-validity questionnaire at the end of each session. Results: The GRS improved statistically at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed more improvement in pass ratings on the second rotation, but this did not achieve statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR models, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings, irrespective of VR status.

  18. Transport modelling and gyrokinetic analysis of advanced high performance discharges

    Kinsey, J.E.; Imbeaux, F.; Staebler, G.M.; Budny, R.; Bourdelle, C.; Fukuyama, A.; Garbet, X.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modelling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and advanced tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. E×B shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET and AUG tokamaks. GLF23 transport modelling and gyrokinetic stability analysis indicate that E×B shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of E×B shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and E×B shear stabilization can dominate parallel velocity shear destabilization, allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent E×B shear quenching of the turbulent transport.
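
    A compact statement of the E×B quench rule embodied in GLF23-type models (schematic form; the model's actual coefficients and the parallel velocity shear drive are set internally):

      \gamma_{net} = \gamma_{max} - \alpha_E \, \gamma_{E \times B},

    i.e. the linear growth rate of the fastest mode is offset by the E×B shearing rate, and the turbulence is quenched when \gamma_{E \times B} \gtrsim \gamma_{max} / \alpha_E, with \alpha_E of order unity.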

  19. Transport modeling and gyrokinetic analysis of advanced high performance discharges

    Kinsey, J.; Imbeaux, F.; Bourdelle, C.; Garbet, X.; Staebler, G.; Budny, R.; Fukuyama, A.; Tala, T.; Parail, V.

    2005-01-01

    Predictive transport modeling and gyrokinetic stability analyses of demonstration hybrid (HYBRID) and Advanced Tokamak (AT) discharges from the International Tokamak Physics Activity (ITPA) profile database are presented. Both regimes have exhibited enhanced core confinement (above the conventional ITER reference H-mode scenario) but differ in their current density profiles. Recent contributions to the ITPA database have facilitated an effort to study the underlying physics governing confinement in these advanced scenarios. In this paper, we assess the level of commonality of the turbulent transport physics and the relative roles of the transport suppression mechanisms (i.e. E×B shear and Shafranov shift (α) stabilization) using data for select HYBRID and AT discharges from the DIII-D, JET, and AUG tokamaks. GLF23 transport modeling and gyrokinetic stability analysis indicate that E×B shear and Shafranov shift stabilization play essential roles in producing the improved core confinement in both HYBRID and AT discharges. Shafranov shift stabilization is found to be more important in AT discharges than in HYBRID discharges. We have also examined the competition between the stabilizing effects of E×B shear and Shafranov shift stabilization and the destabilizing effects of higher safety factors and parallel velocity shear. Linear and nonlinear gyrokinetic simulations of idealized low and high safety factor cases reveal some interesting consequences. A low safety factor (i.e. HYBRID relevant) is directly beneficial in reducing the transport, and E×B shear stabilization can win out over parallel velocity shear destabilization, allowing the turbulence to be quenched. However, at low-q/high current, Shafranov shift stabilization plays less of a role. Higher safety factors (as found in AT discharges), on the other hand, have larger amounts of Shafranov shift stabilization, but parallel velocity shear destabilization can prevent E×B shear quenching of the turbulent transport.

  20. Architecture and Programming Models for High Performance Intensive Computation

    2016-06-29

    …commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the… commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be… evaluation of the initial system implementation. Gao also was in charge of the development of the Fresh Breeze architecture backend on new many-core computers.

  1. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
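
    Schematically (a simplified form for illustration, not the paper's exact definition), the minimized distortion between cover X and stego image Y is

      D(X, Y) = \sum_{i=1}^{n} w_i \, \big| f_i(X) - f_i(Y) \big|,

    where f is the steganalysis feature vector (here of dimension up to 10^7) and the weights w_i price how detectable a change to each feature is; syndrome coding then embeds the payload while near-minimizing D.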

  2. The contribution of high-performance computing and modelling for industrial development

    Sithole, Happy

    2017-10-01

    …High-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non…

  3. Effects of Modeling and Tempo Patterns as Practice Techniques on the Performance of High School Instrumentalists.

    Henley, Paul T.

    2001-01-01

    Examines the effect of modeling conditions and tempo patterns on high school instrumentalists' performance. Focuses on high school students (n=60) who play wind instruments. Reports that the with-model condition was superior in rhythm and tempo percentage gain when compared to the no-model condition. Includes references. (CMK)

  4. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for…

  5. A high performance finite element model for wind farm modeling in forested areas

    Owen, Herbert; Avila, Matias; Folch, Arnau; Cosculluela, Luis; Prieto, Luis

    2015-04-01

    Wind energy has grown significantly during the past decade and is expected to continue growing in the fight against climate change. In the search for new land where the impact of wind turbines is small, several wind farms are currently being installed in forested areas. In order to optimize the distribution of the wind turbines within a wind farm, the Reynolds-averaged Navier-Stokes equations are solved over the domain of interest using either commercial or in-house codes. The existence of a canopy alters the atmospheric boundary layer wind profile close to the ground. Therefore, in order to obtain a more accurate representation of the flow in forested areas, modifications to both the Navier-Stokes and turbulence variable equations need to be introduced. Several existing canopy models have been tested on an academic problem, showing that the one proposed by Sogachev et al. gives the best results. This model has been implemented in an in-house CFD solver named Alya, a high-performance unstructured finite element code that has been designed from scratch to run on the world's biggest supercomputers. Its scalability has recently been tested up to 100,000 processors on both American and European supercomputers. During the past three years the code has been tuned and tested for wind energy problems, and recent efforts have focused on the canopy model following industry needs. In this work we benchmark our results on a wind farm that is currently being designed by Scottish Power and Iberdrola in Scotland. This is a very interesting real case with extensive experimental data from five different masts with anemometers at several heights, used to benchmark both the wind profiles and the speed-up obtained between different masts. Sixteen different wind directions are simulated. The numerical model provides very satisfactory results both for the masts that are affected by the canopy and for those that are not influenced by it.
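
    The generic form of such a canopy model (the common drag-source formulation; Sogachev et al.'s specific closure additionally introduces matched source/sink terms in the turbulence equations) augments the momentum equations with

      S_{u_i} = -\rho \, c_d \, a(z) \, |\mathbf{u}| \, u_i,

    where c_d is the canopy drag coefficient and a(z) the leaf area density profile, so that the velocity deficit and elevated turbulence inside and above the forest are reproduced.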

  6. FRAPCON-3: Modifications to fuel rod material properties and performance models for high-burnup application

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1997-12-01

    This volume describes the fuel rod material and performance models that were updated for the FRAPCON-3 steady-state fuel rod performance code. The property and performance models were changed to account for behavior at extended burnup levels up to 65 GWd/MTU. The models updated were fission gas release, fuel thermal conductivity, fuel swelling, fuel relocation, radial power distribution, solid-solid contact gap conductance, cladding corrosion and hydriding, cladding mechanical properties, and cladding axial growth. Each updated property and model was compared to well-characterized data up to high burnup levels. The installation of these properties and models in the FRAPCON-3 code, along with input instructions, is provided in Volume 2 of this report, and Volume 3 provides a code assessment based on comparison to integral performance data. The updated FRAPCON-3 code is intended to replace the earlier codes FRAPCON-2 and GAPCON-THERMAL-2. 94 refs., 61 figs., 9 tabs

  7. Solving Problems in Various Domains by Hybrid Models of High Performance Computations

    Yurii Rogozhin

    2014-03-01

    This work presents a hybrid model of high performance computations. The model is based on a membrane system (P system), where some membranes may contain a quantum device that is triggered by the data entering the membrane. This model is intended to take advantage of both the biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated on two selected problems: SAT and image retrieval.

  8. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

    Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve the high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate, to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as a feedforward term, the open-loop performance can be improved significantly at medium-to-high frequencies. Combined with the feedback controller, the composite controller provides enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller presents the same performance as previous methods; however, at a frequency of 900 Hz, the tracking error is reduced to 30.7% of that of the decoupled approach.

  9. Kinetic Hydration Heat Modeling for High-Performance Concrete Containing Limestone Powder

    Xiao-Yong Wang

    2017-01-01

    Limestone powder is increasingly used in producing high-performance concrete in the modern concrete industry. Limestone powder blended concrete has many advantages, such as increased early-age strength, reduced setting time, improved workability, and reduced heat of hydration. This study presents a kinetic model for modeling the hydration heat of limestone blended concrete. First, an improved hydration model is proposed which considers the dilution effect and nucleation effect due to limestone powder addition. The degree of hydration is calculated using this improved hydration model. Second, the hydration heat is calculated from the degree of hydration. The effects of the water-to-binder ratio and the limestone replacement ratio on hydration heat are clarified. Third, the temperature history and temperature distribution of hardening limestone blended concrete are calculated by combining the hydration model with the finite element method. The analysis results generally agree with experimental results for high-performance concrete with various mixing proportions.
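
    A minimal sketch of the two-step structure described (a common three-parameter hydration curve with assumed parameter values; the paper's improved model additionally captures the dilution and nucleation effects of limestone powder):

      import math

      def degree_of_hydration(t_e, alpha_u=0.85, tau=15.0, beta=0.6):
          """Degree of hydration vs. equivalent age t_e (hours, t_e > 0)."""
          return alpha_u * math.exp(-((tau / t_e) ** beta))

      def hydration_heat(t_e, binder_content, h_u=470.0e3):
          """Cumulative heat (J per m^3 of concrete): total heat h_u (J per kg
          of binder, assumed value) scaled by binder content (kg/m^3)."""
          return h_u * binder_content * degree_of_hydration(t_e)

    The heat release rate obtained by differentiating such a curve is what enters the finite element temperature calculation as a source term.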

  10. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Wirth, Brian D., E-mail: bdwirth@utk.edu [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Nuclear Science and Engineering Directorate, Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hammond, K.D. [Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Krasheninnikov, S.I. [University of California, San Diego, La Jolla, CA (United States); Maroudas, D. [University of Massachusetts, Amherst, Amherst, MA 01003 (United States)

    2015-08-15

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and the formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma–surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which produces nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting, which together initiate surface morphological modification.

  11. Challenges and opportunities of modeling plasma–surface interactions in tungsten using high-performance computing

    Wirth, Brian D.; Hammond, K.D.; Krasheninnikov, S.I.; Maroudas, D.

    2015-01-01

    The performance of plasma facing components (PFCs) is critical for ITER and future magnetic fusion reactors. The ITER divertor will be tungsten, which is the primary candidate material for future reactors. Recent experiments involving tungsten exposure to low-energy helium plasmas reveal significant surface modification, including the growth of nanometer-scale tendrils of “fuzz” and the formation of nanometer-sized bubbles in the near-surface region. The large span of spatial and temporal scales governing plasma–surface interactions is among the challenges to modeling divertor performance. Fortunately, recent innovations in computational modeling, increasingly powerful high-performance computers, and improved experimental characterization tools provide a path toward self-consistent, experimentally validated models of PFC and divertor performance. Recent advances in understanding tungsten–helium interactions are reviewed, including such processes as helium clustering, which produces nuclei for gas bubbles; and trap mutation, dislocation loop punching and bubble bursting, which together initiate surface morphological modification.

  12. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented, illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  13. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.

  14. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  15. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

    Accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  16. Corrosion models for predictions of performance of high-level radioactive-waste containers

    Farmer, J.C.; McCright, R.D. [Lawrence Livermore National Lab., CA (United States); Gdowski, G.E. [KMI Energy Services, Livermore, CA (United States)

    1991-11-01

    The present plan for disposal of high-level radioactive waste in the US is to seal it in containers before emplacement in a geologic repository. A proposed site at Yucca Mountain, Nevada, is being evaluated for its suitability as a geologic repository. The containers will probably be made of either an austenitic or a copper-based alloy. Models of alloy degradation are being used to predict the long-term performance of the containers under repository conditions. The models are of uniform oxidation and corrosion, localized corrosion, and stress corrosion cracking, and are applicable to worst-case scenarios of container degradation. This paper reviews several of the models.

  17. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of the performance of several SEB models at the same site are limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ET_EC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions, as they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes during the partitioning of available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than seasonal cumulative ET_EC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than seasonal cumulative ET_EC in the relatively wetter 2013 growing season. Our results indicate the necessity of inclusion of soil moisture or plant water stress…
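
    All five models share the same pixel-wise surface energy balance and differ mainly in how the sensible heat flux H is constrained; ET follows from the latent heat flux taken as the residual:

      \lambda E = R_n - G - H,

    where R_n is net radiation, G the soil heat flux, and \lambda E the latent heat flux converted to an evapotranspiration depth. The over- and underestimates reported above correspond to biases in partitioning R_n - G between H and \lambda E.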

  18. Micromechanical Models of Mechanical Response of High Performance Fibre Reinforced Cement Composites

    Li, V. C.; Mihashi, H.; Alwan, J.

    1996-01-01

    The state-of-the-art in micromechanical modeling of the mechanical response of HPFRCC is reviewed. Many advances in modeling have been made over the last decade, to the point that certain properties of composites can be carefully designed using the models as analytic tools. As a result, a new generation of FRC with high performance and economical viability is in sight. However, utilization of micromechanical models for a more comprehensive set of important HPFRCC properties awaits further investigations into fundamental mechanisms governing composite properties, as well as integrative efforts across responses to different load types. Further, micromechanical models for HPFRCC behavior under complex loading histories, including those in fracture, fatigue and multiaxial loading, are urgently needed in order to optimize HPFRCC microstructures and enable predictions of such material in structures…

  19. Heat transfer modeling in exhaust systems of high-performance two-stroke engines

    Lujan Martinez, José Manuel; Climent Puchades, Héctor; Olmeda González, Pablo Cesar; Jimenez Macedo, Victor Daniel

    2014-01-01

    Heat transfer from the hot gases to the wall in exhaust systems of high-performance two-stroke engines is underestimated when using steady-state, fully developed flow empirical correlations. This fact is detected when comparing measured and modeled pressure pulses at different positions in the exhaust system. This can be explained by taking into account that classical expressions have been validated for fully developed flows, a situation that is far from the flow behavior in reciprocating internal combustion engines…

  20. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with the high-Tc superconductor YBCO are predicted using a two-fluid moment-method model, in order to determine the effects of finite conductivity on the performance of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with those of their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  1. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    Vignal, Philippe

    2014-06-06

    In this paper, we present PetIGA, a high-performance framework for solving partial differential equations using Isogeometric Analysis, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages of using PetIGA for scientific computing.

  2. A Family of High-Performance Solvers for Linear Model Predictive Control

    Frison, Gianluca; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2014-01-01

    In Model Predictive Control (MPC), an optimization problem has to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In this paper, we propose an efficient solution strategy for the unconstrained sub-problems that give the search direction in Interior-Point (IP) methods for MPC, and that usually are the computational bottleneck. This strategy combines a Riccati-like solver with the use of high-performance computing techniques: in particular, in this paper we explore the performance boost given by the use of single-precision computation…
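
    The structure being exploited is, at its core, a backward Riccati recursion over the horizon. A generic textbook form is sketched below (illustrative of the recursion's O(N) stage-wise structure with assumed stage matrices A, B, Q, R; the paper's contribution is the tailored high-performance linear algebra underneath such a recursion):

      import numpy as np

      def lqr_backward_riccati(A, B, Q, R, P_N, N):
          """Backward sweep for the unconstrained finite-horizon problem;
          returns the stage feedback gains and the initial cost-to-go matrix."""
          P = P_N
          gains = []
          for _ in range(N):
              S = R + B.T @ P @ B
              K = np.linalg.solve(S, B.T @ P @ A)  # stage feedback gain
              P = Q + A.T @ P @ (A - B @ K)        # cost-to-go update
              gains.append(K)
          return gains[::-1], P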

  3. SUMO, System performance assessment for a high-level nuclear waste repository: Mathematical models

    Eslinger, P.W.; Miley, T.B.; Engel, D.W.; Chamberlain, P.J. II.

    1992-09-01

    Following completion of the preliminary risk assessment of the potential Yucca Mountain Site by Pacific Northwest Laboratory (PNL) in 1988, the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) requested the Performance Assessment Scientific Support (PASS) Program at PNL to develop an integrated system model and computer code providing performance and risk assessment analysis capabilities for a potential high-level nuclear waste repository. The system model that has been developed addresses the cumulative radionuclide release criteria established by the US Environmental Protection Agency (EPA) and estimates population risks in terms of dose to humans. The system model embodied in the SUMO (System Unsaturated Model) code will also allow benchmarking of other models being developed for the Yucca Mountain Project. The system model has three natural divisions: (1) source term, (2) far-field transport, and (3) dose to humans. This document gives a detailed description of the mathematics of each of these three divisions. Each of the governing equations employed is based on modeling assumptions that are widely accepted within the scientific community.

  4. Meteorological conditions associated to high sublimation amounts in semiarid high-elevation Andes decrease the performance of empirical melt models

    Ayala, Alvaro; Pellicciotti, Francesca; MacDonell, Shelley; McPhee, James; Burlando, Paolo

    2015-04-01

    Empirical melt (EM) models are often preferred to surface energy balance (SEB) models to calculate melt amounts of snow and ice in hydrological modelling of high-elevation catchments. The most common reasons supporting this choice are that, in comparison to SEB models, EM models require less meteorological data, complexity and computational cost. However, EM models assume that melt can be characterized by means of a few index variables only, and their results strongly depend on the transferability in space and time of the calibrated empirical parameters. In addition, they are intrinsically limited in accounting for specific process components, the complexity of which cannot be easily reconciled with the empirical nature of the model. As an example of an EM model, in this study we use the Enhanced Temperature Index (ETI) model, which calculates melt amounts using air temperature and the shortwave radiation balance as index variables. We evaluate the performance of the ETI model on dry high-elevation sites where sublimation amounts, which are not explicitly accounted for in the EM model, represent a relevant percentage of total ablation (1.1 to 8.7%). We analyse a data set from four Automatic Weather Stations (AWS), collected during the 2013-14 ablation season at elevations between 3466 and 4775 m asl on the glaciers El Tapado, San Francisco, Bello and El Yeso, which are located in the semiarid Andes of central Chile. We complement our analysis using data from past studies on Juncal Norte Glacier (Chile) and Haut Glacier d'Arolla (Switzerland), during the ablation seasons 2008-09 and 2006, respectively. We use the results of a SEB model, applied at each study site along the entire season, to calibrate the ETI model. The ETI model was not designed to calculate sublimation amounts; however, the results show that its ability to simulate melt amounts is also low at sites where sublimation represents a larger percentage of total ablation. In fact, we…
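
    In its standard form, the ETI model computes melt M (mm w.e. per hour) from air temperature T and incoming shortwave radiation G with albedo \alpha:

      M = \begin{cases} TF \cdot T + SRF \cdot (1 - \alpha) \, G, & T > T_T, \\ 0, & T \le T_T, \end{cases}

    where TF and SRF are the empirically calibrated temperature and shortwave radiation factors and T_T is a threshold temperature. Nothing in this formulation represents the latent heat sink of sublimation, which is why calibrating TF and SRF against total ablation at dry, high-elevation sites degrades the fitted parameters.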

  5. Novel Complete Probabilistic Models of Random Variation in High Frequency Performance of Nanoscale MOSFET

    Rawid Banchuin

    2013-01-01

    Novel probabilistic models of the random variations in nanoscale MOSFETs' high-frequency performance, defined in terms of gate capacitance and transition frequency, have been proposed. As the transition frequency variation has also been considered, the proposed models are considered complete, unlike previous ones, which take only the gate capacitance variation into account. The proposed models have been found to be both analytic and physical-level oriented, as they are precise mathematical expressions in terms of physical parameters. Since an up-to-date model of the variation in MOSFET characteristics induced by physical-level fluctuation has been used, the part of the proposed models concerning gate capacitance is more accurate and physical-level oriented than its predecessor. The proposed models have been verified, based on the 65 nm CMOS technology, using Monte-Carlo SPICE simulations of benchmark circuits and Kolmogorov-Smirnov tests, and found highly accurate, since they fit the Monte-Carlo-based analysis results with 99% confidence. Hence, these novel models are versatile for the statistical/variability-aware analysis/design of nanoscale MOSFET-based analog/mixed-signal circuits and systems.

  6. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power from the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query operations and to enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a prerequisite in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
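
The paper's kernel models are not reproduced in the abstract; as a generic illustration of predicting a compute-bound kernel's runtime from resource occupancy, consider the wave-based toy model below. The cost model and every parameter value are assumptions, not the published model.

```python
import math

def predict_kernel_time(total_blocks, regs_per_thread, threads_per_block,
                        num_sms=16, regs_per_sm=65536,
                        max_blocks_per_sm=16, time_per_wave_ms=0.8):
    """Rough wave-based runtime estimate for a compute-bound kernel.

    Occupancy is limited here only by register usage; real GPUs add
    shared-memory and thread-count limits. Blocks execute in "waves"
    of concurrently resident blocks across all SMs.
    """
    regs_per_block = regs_per_thread * threads_per_block
    blocks_per_sm = min(max_blocks_per_sm, regs_per_sm // regs_per_block)
    concurrent_blocks = blocks_per_sm * num_sms
    waves = math.ceil(total_blocks / concurrent_blocks)
    return waves * time_per_wave_ms

# Halving register pressure raises occupancy and cuts the predicted time.
print(predict_kernel_time(4096, regs_per_thread=64, threads_per_block=256))
print(predict_kernel_time(4096, regs_per_thread=32, threads_per_block=256))
```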

  7. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 × 10 through 200 × 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab.
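
A toy version of such a model-based decision procedure is sketched below: enumerate the factorizations of the processor count into an R × C grid and pick the one minimizing a simple compute-plus-communication time model. The cost constants and model terms are placeholders, not the calibrated model from the paper.

```python
def predicted_time(n, rows, cols, flop_rate=1e8, alpha=1e-4, beta=1e-7):
    """Toy execution-time model for an n-by-n matrix on a rows-by-cols grid.

    Compute term: O(n^3) work split across all processors.
    Communication term: per-step startup cost (alpha) plus per-word
    cost (beta); a real model adds load-imbalance terms as well.
    """
    p = rows * cols
    compute = (2.0 * n**3 / p) / flop_rate
    steps = max(rows, cols)
    comm = steps * (alpha + beta * n * n / p)
    return compute + comm

def best_configuration(n, max_procs):
    configs = [(r, max_procs // r) for r in range(1, max_procs + 1)
               if max_procs % r == 0]
    return min(configs, key=lambda rc: predicted_time(n, *rc))

# For a 200 x 200 matrix on up to 64 processors:
print(best_configuration(200, 64))
```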

  8. Simulink models for performance analysis of high speed DQPSK modulated optical link

    Sharan, Lucky; Rupanshi; Chaubey, V. K.

    2016-01-01

    This paper presents a design approach for the development of simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in link design, with their behavior represented initially by theoretical interpretation - including the transmitter topology, the Mach-Zehnder Modulator (MZM) module and the propagation model for optical fibers - thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and these can also be enhanced or removed from the model as per the design requirements. We describe the detailed operation and purpose of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can be easily investigated by using such models.

  9. Simulink models for performance analysis of high speed DQPSK modulated optical link

    Sharan, Lucky, E-mail: luckysharan@pilani.bits-pilani.ac.in; Rupanshi, E-mail: f2011222@pilani.bits-pilani.ac.in; Chaubey, V. K., E-mail: vkc@pilani.bits-pilani.ac.in [EEE Department, BITS-Pilani, Rajasthan, 333031 (India)]

    2016-03-09

    This paper presents a design approach for the development of simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in link design, with their behavior represented initially by theoretical interpretation - including the transmitter topology, the Mach-Zehnder Modulator (MZM) module and the propagation model for optical fibers - thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and these can also be enhanced or removed from the model as per the design requirements. We describe the detailed operation and purpose of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can be easily investigated by using such models.
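
For orientation, the differential encoding at the heart of DQPSK can be sketched in a few lines; this is a generic textbook mapping with an assumed Gray-coded phase convention, not the Simulink blockset developed in the paper.

```python
import numpy as np

def dqpsk_modulate(bits):
    """Map a bit stream (even length) to DQPSK symbols.

    Each bit pair selects a phase increment from {pi/4, 3pi/4, 5pi/4,
    7pi/4} (one common Gray-coded convention); the transmitted phase
    is the running sum, so the information lives in phase differences
    and the receiver needs no absolute phase reference.
    """
    bits = np.asarray(bits).reshape(-1, 2)
    gray_map = {(0, 0): np.pi / 4, (0, 1): 3 * np.pi / 4,
                (1, 1): 5 * np.pi / 4, (1, 0): 7 * np.pi / 4}
    increments = np.array([gray_map[tuple(b)] for b in bits])
    phase = np.cumsum(increments)  # differential encoding
    return np.exp(1j * phase)      # unit-amplitude complex symbols

symbols = dqpsk_modulate([0, 0, 1, 1, 0, 1])
print(np.angle(symbols, deg=True))
```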

  10. Modelling and Development of a High Performance Milling Process with Monolithic Cutting Tools

    Ozturk, E.; Taylor, C. M.; Turner, S.; Devey, M.

    2011-01-01

    Critical aerospace components usually require difficult-to-machine workpiece materials like nickel-based alloys. Moreover, there is a pressing need to maximize the productivity of machining operations, which can be met by selecting higher feed velocities and higher axial and radial depths of cut. However, several problems may then arise during machining. Due to the high cutting speeds used in high performance machining, tool life may be unacceptably low. If the magnitudes of the cutting forces are high, out-of-tolerance static form errors may result; in extreme cases, the cutting tool may break. Forced vibrations may deteriorate the surface quality, and chatter vibrations may develop if the selected parameters result in instability. In this study, in order to address the tool life issue, several experimental cuts are made with different tool geometries, and the best combination in terms of tool life is selected. A force model is developed, and its results are verified against experimental results. The force model is used to predict the effect of process parameters on cutting forces. In order to account for the other concerns, such as static form errors and forced and chatter vibrations, additional process models are currently under development.
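
The force model itself is not given in the abstract; as a generic stand-in, the widely used linear-edge mechanistic formulation below computes tangential and radial cutting forces from the instantaneous chip thickness. All coefficient values are assumed for illustration.

```python
import numpy as np

def milling_forces(phi, feed_per_tooth, axial_depth,
                   ktc=800.0, kte=20.0, krc=300.0, kre=15.0):
    """Tangential and radial force (N) on one tooth at immersion angle phi.

    h = f_z * sin(phi) is the instantaneous chip thickness (mm);
    ktc/krc are cutting coefficients (N/mm^2), kte/kre edge
    coefficients (N/mm); axial_depth in mm. Placeholder values only.
    """
    h = feed_per_tooth * np.sin(phi)
    f_t = ktc * axial_depth * h + kte * axial_depth
    f_r = krc * axial_depth * h + kre * axial_depth
    return f_t, f_r

print(milling_forces(phi=np.pi / 3, feed_per_tooth=0.1, axial_depth=2.0))
```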

  11. Modeling of high-density U-Mo dispersion fuel plate performance

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Rest, J.; Snelgrove, J.L.

    2002-01-01

    Results from postirradiation examinations (PIE) of highly loaded U-Mo/Al dispersion fuel plates over the past several years have shown that the interaction between the metallic fuel particles and the matrix aluminum can be extensive, reducing the volume of the high-conductivity matrix phase and producing a significant volume of low-conductivity reaction-product phase. This phenomenon results in a significant decrease in fuel meat thermal conductivity during irradiation. PIE has further shown that the fuel-matrix interaction rate is a sensitive function of irradiation temperature. The interplay between fuel temperature and fuel-matrix interaction makes the development of a simple empirical correlation between the two difficult. For this reason a comprehensive thermal model has been developed to calculate temperatures throughout the fuel plate over its lifetime, taking into account the changing volume fractions of fuel, matrix and reaction-product phases within the fuel meat owing to fuel-matrix interaction; this thermal model has been incorporated into the dispersion fuel performance code designated PLATE. Other phenomena important to fuel thermal performance that are also treated in PLATE include: gas generation and swelling in the fuel and reaction-product phases, incorporation of matrix aluminum into solid solution with the unreacted metallic fuel particles, matrix extrusion resulting from fuel swelling, and cladding corrosion. The phenomena modeled also make possible a prediction of fuel plate swelling. This paper presents a description of the models and empirical correlations employed within PLATE as well as validation of code predictions against fuel performance data for U-Mo experimental fuel plates from the RERTR-3 irradiation test. (author)
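
The abstract's central mechanism - conversion of high-conductivity aluminum matrix into low-conductivity reaction product - can be illustrated with simple mixing bounds on the meat conductivity. The sketch below uses assumed conductivities and generic parallel/series bounds, not the correlations actually implemented in PLATE.

```python
def effective_conductivity(v_fuel, v_matrix, v_product,
                           k_fuel=15.0, k_matrix=220.0, k_product=5.0):
    """Parallel (upper) and series (lower) conductivity bounds, W/(m K).

    Volume fractions must sum to 1. The aluminum matrix conducts well
    and the reaction product poorly, so converting matrix into product
    drags both bounds down.
    """
    assert abs(v_fuel + v_matrix + v_product - 1.0) < 1e-9
    k_parallel = v_fuel * k_fuel + v_matrix * k_matrix + v_product * k_product
    k_series = 1.0 / (v_fuel / k_fuel + v_matrix / k_matrix + v_product / k_product)
    return k_series, k_parallel

print(effective_conductivity(0.4, 0.5, 0.1))  # early in life
print(effective_conductivity(0.4, 0.2, 0.4))  # after extensive interaction
```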

  12. A novel high-performance self-powered ultraviolet photodetector: Concept, analytical modeling and analysis

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new MSM UV photodetector (PD) based on a dual wide-band-gap material (DM) engineering approach is proposed to achieve a high-performance self-powered device. Comprehensive analytical models for the proposed sensor's photocurrent and device properties are developed, incorporating the impact of the DM design on the device's photoelectrical behavior. The obtained results are validated against numerical data from commercial TCAD software. Our investigation demonstrates that the adopted design amendment modulates the electric field in the device, which provides the possibility to drive the photo-generated carriers without an externally applied voltage. This enables the dual role of effective carrier separation and an efficient reduction of the dark current. Moreover, a new hybrid approach based on analytical modeling and Particle Swarm Optimization (PSO) is proposed to achieve improved photoelectric behavior at zero bias, ensuring a favorable self-powered MSM-based UV PD. The proposed design methodology succeeds in identifying an optimized design that offers a self-powered device with high responsivity (98 mA/W) and a superior I_ON/I_OFF ratio (480 dB). These results make the optimized MSM-UV-DM-PD suitable for providing low-cost self-powered devices for high-performance optical communication and monitoring applications.
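
To make the optimization step concrete, here is a minimal particle swarm optimizer over two hypothetical design variables (for instance, two contact work functions); the objective function is a smooth stand-in, not the analytical photodetector model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Hypothetical stand-in for the modeled figure of merit
    # (e.g., responsivity combined with on/off ratio); to be maximized.
    return -np.sum((x - np.array([4.9, 4.2]))**2, axis=-1)

def pso(n_particles=30, n_iters=100, lo=3.5, hi=5.5, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia plus attraction to personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

print(pso())  # converges near the stand-in optimum [4.9, 4.2]
```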

  13. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  14. Using a High-Performance Planning Model to Increase Levels of Functional Effectiveness Within Professional Development.

    Winter, Peggi

    2016-01-01

    Nursing professional practice models continue to shape how we practice nursing by putting families and members at the heart of everything we do. Faced with enormous challenges around healthcare reform, models create frameworks for practice by unifying, uniting, and guiding our nurses. The Kaiser Permanente Practice model was developed to ensure consistency for nursing practice across the continuum. Four key pillars support this practice model and the work of nursing: quality and safety, leadership, professional development, and research/evidence-based practice. These four pillars form the foundation that makes transformational practice possible and aligns nursing with Kaiser Permanente's mission. The purpose of this article is to discuss the pillar of professional development and the components of the Nursing Professional Development: Scope and Standards of Practice model (American Nurses Association & National Nursing Staff Development Organization, 2010) and place them in a five-level development framework. This process allowed us to identify the current organizational level of practice, prioritize each nursing professional development component, and design an operational strategy to move nursing professional development toward a level of high performance. This process is suggested for nursing professional development specialists.

  15. Hagfish slime threads as a biomimetic model for high performance protein fibres

    Fudge, Douglas S; Hillis, Sonja; Levy, Nimrod; Gosline, John M

    2010-01-01

    Textile manufacturing is one of the largest industries in the world, and synthetic fibres represent two-thirds of the global textile market. Synthetic fibres are manufactured from petroleum-based feedstocks, which are becoming increasingly expensive as demand for finite petroleum reserves continues to rise. For the last three decades, spider silks have been held up as a model that could inspire the production of protein fibres exhibiting high performance and ecological sustainability, but unfortunately, artificial spider silks have yet to fulfil this promise. Previous work on the biomechanics of protein fibres from the slime of hagfishes suggests that these fibres might be a superior biomimetic model to spider silks. Based on the fact that the proteins within these 'slime threads' adopt conformations that are similar to those in spider silks when they are stretched, we hypothesized that draw processing of slime threads should yield fibres that are comparable to spider dragline silk in their mechanical performance. Here we show that draw-processed slime threads are indeed exceptionally strong and tough. We also show that post-drawing steps such as annealing, dehydration and covalent cross-linking can dramatically improve the long-term dimensional stability of the threads. The data presented here suggest that hagfish slime threads are a model that should be pursued in the quest to produce fibres that are ecologically sustainable and economically viable.

  16. Accuracy of W' Recovery Kinetics in High Performance Cyclists - Modelling Intermittent Work Capacity.

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-10-16

    With knowledge of an individual's critical power (CP) and W', the SKIBA 2 model provides a framework with which to track W' balance during intermittent high-intensity work bouts. There are fears that the time constant controlling the recovery rate of W' (τW') may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to the athletes' CPs (DCP). Using the framework of the SKIBA 2 model, the τW' values were modified for each protocol to achieve an accurate W' at volitional exhaustion. Modified τW' values were compared to equivalent SKIBA 2 τW' values to assess the difference in recovery rates for this population. Plotting modified τW' values against DCP showed the adjusted relationship between work-rate and recovery-rate. Comparing modified τW' values against the SKIBA 2 τW' values showed a negative bias of 112 ± 46 s (mean ± 95% CL), suggesting athletes recovered W' faster than predicted by SKIBA 2 (p = 0.0001). The modified τW'-to-DCP relationship was best described by a power function: τW' = 2287.2 × DCP^-0.688 (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists, as it under-predicts the recovery rate of W'. The modified τW' equation presented will require validation, but appears more appropriate for high-performance athletes. Individual τW' relationships may be necessary in order to maximise the model's validity.
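
Using only the quantities reported above (CP, W', and the fitted time constant), a simplified discrete W' balance tracker can be sketched as follows. The depletion/recovery scheme is a simplification of the SKIBA 2 framework, and the tau expression is the power-function fit quoted in the abstract.

```python
import numpy as np

def w_prime_balance(power, dt, cp, w_prime):
    """Track W' balance (J) along a power series (W) sampled every dt s.

    Above CP, W' is expended linearly with the excess power; below CP,
    the expended amount recovers exponentially with time constant
    tau = 2287.2 * DCP**-0.688, where DCP = CP - recovery power (W),
    as fitted in the abstract above.
    """
    balance, out = w_prime, []
    for p in power:
        if p >= cp:
            balance -= (p - cp) * dt           # expend W'
        else:
            dcp = max(cp - p, 1.0)             # guard tiny/zero DCP
            tau = 2287.2 * dcp**-0.688
            expended = w_prime - balance
            balance = w_prime - expended * np.exp(-dt / tau)
        out.append(balance)
    return np.array(out)

# 3 min at 400 W, 1 min at 150 W, 3 min at 400 W; CP = 300 W, W' = 20 kJ.
profile = np.array([400] * 180 + [150] * 60 + [400] * 180, dtype=float)
bal = w_prime_balance(profile, dt=1.0, cp=300.0, w_prime=20000.0)
print(f"minimum W' balance: {bal.min():.0f} J")
```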

  17. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the growing user base of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining this bursty user behavior is important for HPC workload prediction and scheduling, which have a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world systems. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.

  18. Research on Appraisal System of Procurator Performance by Using High-Order CFA Model

    Yong-mao Huang

    2014-01-01

    The prosecutor is the main body of the procuratorial organs, and the performance appraisal system plays an important role in promoting the work efficiency of procurators. In this paper, we establish a performance appraisal system for procurators using a high-order confirmatory factor analysis (CFA) method and evaluate procurators' performance with a fuzzy comprehensive evaluation method based on 360-degree feedback. The results offer some help to the performance management of procuratorial organs.

  19. Neutronic and Thermal-hydraulic Modelling of High Performance Light Water Reactor

    Seppaelae, Malla [VTT Technical Research Centre of Finland, P.O.Box 1000, FI02044 VTT (Finland)]

    2008-07-01

    The High Performance Light Water Reactor (HPLWR), studied in the EU project 'HPLWR2', uses water at supercritical pressures as coolant and moderator to achieve a higher core outlet temperature, and thus higher efficiency, than present reactors. At VTT Technical Research Centre of Finland, the thermal-hydraulics functionality of the coupled reactor dynamics code TRAB3D/SMABRE was extended to supercritical pressures for the analyses of the HPLWR. Input models for neutronics and thermal-hydraulics were made for TRAB3D/SMABRE according to the latest HPLWR design. A preliminary analysis was performed in which the capability of SMABRE to handle the transition from supercritical to subcritical pressures was demonstrated. Parameterized two-group cross sections for TRAB3D neutronics were received from the Hungarian Academy of Sciences KFKI Atomic Energy Research Institute, together with a subroutine for handling them. PSG, a new Monte Carlo transport code developed at VTT, was also used to generate two-group constants for the HPLWR, and comparisons were made with the KFKI cross sections and MCNP calculations. (author)

  20. Neutronic and Thermal-hydraulic Modelling of High Performance Light Water Reactor

    Seppaelae, Malla

    2008-01-01

    The High Performance Light Water Reactor (HPLWR), studied in the EU project 'HPLWR2', uses water at supercritical pressures as coolant and moderator to achieve a higher core outlet temperature, and thus higher efficiency, than present reactors. At VTT Technical Research Centre of Finland, the thermal-hydraulics functionality of the coupled reactor dynamics code TRAB3D/SMABRE was extended to supercritical pressures for the analyses of the HPLWR. Input models for neutronics and thermal-hydraulics were made for TRAB3D/SMABRE according to the latest HPLWR design. A preliminary analysis was performed in which the capability of SMABRE to handle the transition from supercritical to subcritical pressures was demonstrated. Parameterized two-group cross sections for TRAB3D neutronics were received from the Hungarian Academy of Sciences KFKI Atomic Energy Research Institute, together with a subroutine for handling them. PSG, a new Monte Carlo transport code developed at VTT, was also used to generate two-group constants for the HPLWR, and comparisons were made with the KFKI cross sections and MCNP calculations. (author)

  1. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making simulation ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction was carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other through contact, friction, and cohesive forces. The multibody dynamics approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology, with the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time.
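
For context, the complementarity structure referred to above can be written compactly. This is the generic time-stepping form from the DVI/LCP literature, with symbols chosen here for illustration (λ the contact impulses, N the Delassus-type operator, r the free-velocity term):

```latex
\[
0 \;\le\; \lambda \;\perp\; N\lambda + r \;\ge\; 0
\]
% At each time step the contact impulses \lambda must be non-negative,
% the resulting normal relative velocities N\lambda + r must be
% non-negative, and the two are complementary: a contact either
% separates or transmits impulse, never both.
```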

  2. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, the reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and MD, although the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated in several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling.

  3. ENVIRONMENTAL RESEARCH BRIEF: ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  4. Analysis and modeling of social influence in high performance computing workloads

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users.

  5. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems.

  6. High performance computation of landscape genomic models including local indicators of spatial association.

    Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S

    2017-09-01

    With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is, the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing of large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure in the models, to reduce the occurrence of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an FST outlier method (the FDIST approach in Arlequin), and compared their results. samβada - an open-source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
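
A bare-bones sketch of the kind of genotype-environment association scan samβada performs: a logistic regression of one genotype against an environmental gradient, scored with a likelihood-ratio test against a constant-only model. The data are synthetic, the fit is regularized (adequate for a sketch), and the population-structure covariates emphasized in the paper are omitted for brevity.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
env = rng.normal(size=n)                 # environmental predictor
# Synthetic genotype (presence/absence) weakly associated with env.
geno = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * env - 0.2))))

# Full model: genotype ~ environment.
full = LogisticRegression().fit(env.reshape(-1, 1), geno)
p_full = full.predict_proba(env.reshape(-1, 1))[:, 1]
ll_full = np.sum(geno * np.log(p_full) + (1 - geno) * np.log(1 - p_full))

# Null model: constant allele frequency only.
p0 = geno.mean()
ll_null = np.sum(geno * np.log(p0) + (1 - geno) * np.log(1 - p0))

# Likelihood-ratio (G) test with one degree of freedom.
g_stat = 2.0 * (ll_full - ll_null)
p_value = stats.chi2.sf(g_stat, df=1)
print(f"G = {g_stat:.2f}, p = {p_value:.2e}")  # small p => candidate locus
```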

  7. LL13-MatModelRadDetect-PD2Jf Final Report: Materials Modeling for High-Performance Radiation Detectors

    Lordi, Vincenzo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-12-11

    The aims of this project are to enable rational materials design for select high-payoff challenges in radiation detection materials by using state-of-the-art predictive atomistic modeling techniques. Three specific high-impact challenges are addressed: (i) design and optimization of electrical contact stacks for TlBr detectors to stabilize temporal response at room temperature; (ii) identification of chemical design principles of host glass materials for large-volume, low-cost, high-performance glass scintillators; and (iii) determination of the electrical impacts of dislocation networks in Cd1-xZnxTe (CZT) that limit its performance and usable single-crystal volume. The specific goals are to establish design and process strategies to achieve improved materials for high performance detectors. Each of the major tasks is discussed below in three sections, which include the goals for the task and a summary of the major results, followed by a listing of publications that contain the full details, including details of the methodologies used. The appendix lists 12 conference presentations given for this project, including 1 invited talk and 1 invited poster.

  8. MaMR: High-performance MapReduce programming model for material cloud applications

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model. An optimized data-sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements over previous work.
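
To fix ideas, a toy pipeline with an explicit merge phase is sketched below in plain Python. It illustrates the general map -> shuffle -> reduce -> merge flow described above, not the MaMR API itself.

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, List, Tuple

def run_mapreduce(records: Iterable[str],
                  mapper: Callable[[str], List[Tuple[str, int]]],
                  reducer: Callable[[List[int]], int]) -> Dict[str, int]:
    # Map: emit (key, value) pairs, then group by key (the "shuffle").
    groups: Dict[str, List[int]] = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    # Reduce: collapse each key's values.
    return {key: reducer(values) for key, values in groups.items()}

def merge(*partials: Dict[str, int]) -> Dict[str, int]:
    """Merge phase: combine the outputs of several reduce jobs."""
    merged: Dict[str, int] = defaultdict(int)
    for partial in partials:
        for key, value in partial.items():
            merged[key] += value
    return dict(merged)

mapper = lambda line: [(w, 1) for w in line.split()]
a = run_mapreduce(["dft dft md", "md vasp"], mapper, sum)
b = run_mapreduce(["md md dft"], mapper, sum)
print(merge(a, b))  # {'dft': 3, 'md': 4, 'vasp': 1}
```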

  9. Business Models of High Performance Computing Centres in Higher Education in Europe

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  10. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    The unstructured nature of the mesh topology, with the corresponding solution based on space-filling curves, is analyzed and discussed. Intra-node parallelism is achieved through OpenMP for CPUs and CUDA for GPUs, depending on which kind of device the process is running. Here the main difficulty is associated with the Object-Oriented approach, where the presence of complex data structures can degrade model performance considerably. STAV-2D now supports fully distributed and heterogeneous simulations where multiple different devices can be used to accelerate computation time. The advantages, shortcomings and specific solutions of the employed unified Object-Oriented approach, where the source code for CPU and GPU shares the same compilation units (no device-specific branches as seen in other available models), are discussed and quantified with a thorough scalability and performance analysis. The assembled parallel model is expected to achieve faster-than-real-time simulations at high resolutions (from meters to sub-meter) in large-scale problems (from cities to watersheds), effectively bridging the gap between detailed and timely simulation results. Acknowledgements: This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 and Doctoral Grant SFRH/BD/97933/2013 granted by the National Foundation for Science and Technology (FCT). References: Canelas, R.; Murillo, J. & Ferreira, R.M.L. (2013), Two-dimensional depth-averaged modelling of dam-break flows over mobile beds. Journal of Hydraulic Research, 51(4), 392-407. Conde, D. A. S.; Baptista, M. A. V.; Sousa Oliveira, C. & Ferreira, R. M. L. (2013), A shallow-flow model for the propagation of tsunamis over complex geometries and mobile beds, Nat. Hazards and Earth Syst. Sci., 13, 2533-2542. Conde, D. A. S.; Telhado, M. J.; Viana Baptista, M. A. & Ferreira, R. M. L. (2015), Severity and exposure associated with tsunami actions in...

  11. Performance Model for High-Power Lithium Titanate Oxide Batteries based on Extended Characterization Tests

    Stroe, Ana-Irina; Swierczynski, Maciej Jozef; Stroe, Daniel Ioan

    2015-01-01

    Lithium-ion (Li-ion) batteries are found nowadays not only in portable/consumer electronics but also in more power-demanding applications, such as stationary renewable energy storage, automotive and back-up power supply, because of their superior characteristics in comparison to other energy storage technologies. Nevertheless, prior to being used in any of the aforementioned applications, a Li-ion battery cell must be intensively characterized and its behavior needs to be understood. This can be realized by performing extended laboratory characterization tests and developing Li-ion battery performance models. Furthermore, accurate performance models are necessary in order to analyze the behavior of the battery cell under different mission profiles by simulation, thus avoiding time- and cost-demanding real-life tests. This paper presents the development and the parametrization of such a performance model.

  12. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice, and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost-effective to construct and energy-efficient to operate.

  13. LIAR -- A computer program for the modeling and simulation of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other things, it addresses the needs of state-of-the-art linear colliders, where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed at: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  14. High performance statistical computing with parallel R: applications to biology and climate modelling

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable, high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal, we have been developing an open-source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared- and distributed-memory architectures without having to deal with the intricacies of parallelizing these routines.

  15. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement relate to academic performance. Participants were 671 9th- and 10th-grade students who completed two scales, "parental involvement" and "school engagement", in their regular classrooms. Results of the path analysis suggested that the…

  16. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from fast ferries to the latest high-speed Navy craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data on the range of HPMVs to date. Included is a comparison of all HPMV craft, the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface; Covers the full range of high performance marine vessel concepts; Explains the historical development of various HPMVs; Discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for students...

  17. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs - Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model - teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  18. Ion thruster performance model

    Brophy, J.R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr, and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature
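
Taking the abstract's two central quantities at face value, the bookkeeping relation between them can be reconstructed as follows; this is an illustrative reading, not a quotation of the paper's equations. If ε_P is the average energy required to produce an ion in the discharge chamber plasma and f_B is the fraction of those ions extracted into the beam, the beam ion energy cost is

```latex
\[
\epsilon_B \;=\; \frac{\epsilon_P}{f_B}
\]
% Every ion produced but not extracted inflates the energy cost per
% beam ion; likewise, direct loss of primary electrons to the anode
% raises \epsilon_P and thus degrades overall thruster performance.
```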

  19. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    Vignal, Philippe; Dalcin, Lisandro; Collier, Nathan; Calo, Victor M.

    2014-01-01

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation.
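
For reference, a standard form of the Cahn-Hilliard equation is reproduced below; the notation (c the concentration, M the mobility, f the bulk free-energy density, ε an interface-width parameter) is a common convention assumed here rather than taken from the paper:

```latex
\[
\frac{\partial c}{\partial t}
  \;=\; \nabla \cdot \Big( M \, \nabla \big( f'(c) - \epsilon^{2} \Delta c \big) \Big)
\]
% A fourth-order, nonlinear PDE: the high-order spatial operator is
% what makes the smooth, high-continuity basis functions of
% Isogeometric Analysis attractive for its discretization.
```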

  20. Wind Farm Layout Optimization through a Crossover-Elitist Evolutionary Algorithm performed over a High Performing Analytical Wake Model

    Kirchner-Bossi, Nicolas; Porté-Agel, Fernando

    2017-04-01

    Wind turbine wakes can significantly disrupt the performance of turbines located further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally tractable layout optimization strategy, is therefore an efficient resource for addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning set-up is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity-deficit profile [1], which has been shown to outperform the traditionally employed wake models in LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
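
As commonly written, the Gaussian wake model of [1] expresses the normalized velocity deficit behind a turbine as below; the symbols (C_T thrust coefficient, d_0 rotor diameter, σ(x) wake width growing linearly with downstream distance x, y and z - z_h the offsets from the wake center, k* the growth rate, ε the initial width) follow the cited paper, and the exact form should be checked against it:

```latex
\[
\frac{\Delta U}{U_{\infty}}
  = \left( 1 - \sqrt{\,1 - \frac{C_T}{8\,(\sigma/d_0)^2}\,} \right)
    \exp\!\left( -\frac{(z - z_h)^2 + y^2}{2\sigma^2} \right),
\qquad
\frac{\sigma}{d_0} \;=\; k^{*}\,\frac{x}{d_0} + \varepsilon
\]
```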

  1. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization and analysis of geospatial data in an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG-CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrological Unit (HUC), or upload of a custom polygon. Both web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG-CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, and also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.

  2. High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling

    Gygi, Francois [Univ. of California, Davis, CA (United States), Dept. of Computer Science]; Galli, Giulia [Univ. of Chicago, IL (United States)]; Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-12-03

    This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation on large-scale, leadership-class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra. The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems.

  3. The computer program LIAR for the simulation and modeling of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  4. High performance systems

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  5. Constitutive modeling of SMA SMP multifunctional high performance smart adaptive shape memory composite

    Jarali, Chetan S; Raja, S; Upadhya, A R

    2010-01-01

    Materials design involving the thermomechanical constitutive modeling of shape memory alloy (SMA) and shape memory polymer (SMP) composites is a key topic in the development of smart adaptive shape memory composites (SASMC). In this work, a constitutive model for SASMC is developed. First, a one-dimensional SMA model, which can simulate the pseudoelastic (PE) and shape memory effects (SME) is presented. Subsequently, a one-dimensional SMP model able to reproduce the SME is addressed. Both SMA and SMP models are based on a single internal state variable, namely the martensite fraction and the frozen fraction, which can be expressed as a function of temperature. A consistent form of the analytical solution for the SMP model is obtained using the fourth-order Runge–Kutta method. Finally, the SASMC constitutive model is proposed, following two analytical homogenization approaches. One approach is based on an equivalent inclusion method and the other approach is the rule of mixtures. The SMA and SMP constitutive models are validated independently with experimental results. However, the validation of the composite model is performed using the two homogenization approaches and a close agreement in results is observed. Results regarding the isothermal and thermomechanical stress–strain responses are analyzed as a function of SMA volume fraction. Further, it is concluded that the proposed composite model is able to reproduce consistently the overall composite response by taking into consideration not only the phase transformations, variable modulus and transformation stresses in SMA but also the variable modulus, the evolution of stored strain and thermal strain in the SMP

  6. The performance of a new Geant4 Bertini intra-nuclear cascade model in high throughput computing (HTC) cluster architecture

    Heikkinen, Aatos; Hektor, Andi; Karimaki, Veikko; Linden, Tomas [Helsinki Univ., Institute of Physics (Finland)]

    2003-07-01

    We study the performance of a new Bertini intra-nuclear cascade model implemented in the general detector simulation toolkit Geant4, using a High Throughput Computing (HTC) cluster architecture. A 60-node Pentium III openMosix cluster is used, with the Mosix kernel performing automatic process load-balancing across several CPUs. The Mosix cluster consists of several computer classes equipped with Windows NT workstations that automatically boot daily and become nodes of the Mosix cluster. The models included in our study are a Bertini intra-nuclear cascade model with excitons, consisting of a pre-equilibrium model, a nucleus explosion model, a fission model and an evaporation model. The speed and accuracy obtained for these models are presented. (authors)

  7. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.
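
    The sketch below illustrates the coupler idea in miniature: a hub object receives fields from one component model and hands them to the other at each coupling interval. The class and the toy atmosphere/ocean physics are invented for illustration and do not reflect the cpl6 Fortran90 API.

```python
import numpy as np

# Minimal sketch of the coupler pattern (not the cpl6 API): a hub that
# stores fields posted by component models and serves them to the other
# components each coupling interval. Names and physics are invented.
class Coupler:
    def __init__(self):
        self.fields = {}
    def put(self, name, data):
        self.fields[name] = np.asarray(data)
    def get(self, name):
        return self.fields[name]

def atmosphere_step(sst):
    # Toy atmosphere: surface heat flux relaxes SST toward a reference.
    return 20.0 * (300.0 - sst)

def ocean_step(sst, heat_flux, dt=3600.0, heat_capacity=4e6):
    # Toy ocean mixed layer: SST responds to the surface heat flux.
    return sst + dt * heat_flux / heat_capacity

cpl = Coupler()
cpl.put("SST", 295.0 * np.ones(16))           # 16 toy grid cells
for step in range(24):                         # one day, hourly coupling
    sst = cpl.get("SST")
    cpl.put("heat_flux", atmosphere_step(sst))
    cpl.put("SST", ocean_step(sst, cpl.get("heat_flux")))
print("final SST:", cpl.get("SST")[0])
```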

  8. Responsive design high performance

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  9. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  10. THE MODEL CHARACTERISTICS OF JUMP ACTIONS STRUCTURE OF HIGH PERFORMANCE FEMALE VOLLEYBALL PLAYERS

    Stech M.

    2012-12-01

    Full Text Available The purpose of this study was to develop generalized and individual models of the jump actions of skilled female volleyball players. The main prerequisite for the development of the jump action models were the results of our earlier studies of the factor structure of jump actions of 10 sportswomen of the Polish volleyball team "Gedania" (Premier League) in the preparatory and competitive periods of the annual cycle of preparation. The athletes' age was 22.0 ± 2.9 years, sports experience 8.1 ± 3.1 years, body height 181.9 ± 8.4 cm and body weight 72.8 ± 10.8 kg. Mathematical and statistical processing of the data (the determination of M ± SD and of significant differences between the samples) was performed using the standard computer program STATISTICA 7.0. Based on the analysis of the factor structure of 20 jump actions of skilled women volleyball players, the 5 most informative indexes and their tentative values were determined and recommended for the formation of a generalized model of this structure. Comparison of individual models of the jump actions of skilled women volleyball players with the generalized models in different periods of preparation can be used for the rational choice of means and methods to increase the efficiency of the training process.

  11. High Performance Programming Using Explicit Shared Memory Model on the Cray T3D

    Saini, Subhash; Simon, Horst D.; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The Cray T3D is the first-phase system in Cray Research Inc.'s (CRI) three-phase massively parallel processing program. In this report we describe the architecture of the T3D, as well as the CRAFT (Cray Research Adaptive Fortran) programming model, and contrast it with PVM, which is also supported on the T3D. We present some performance data based on the NAS Parallel Benchmarks to illustrate both architectural and software features of the T3D.

  12. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (Extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right-hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form, leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
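
    The following sketch mimics the generate-the-r.h.s. idea on a small scale: a discretized right-hand side is written quasi-symbolically and translated into array code mechanically. SymPy's lambdify stands in here for OpenGGCM's custom code generator.

```python
import numpy as np
import sympy as sp

# Minimal sketch of symbolic-to-fast-code generation: the stencil for a
# 1D diffusion r.h.s. is written symbolically and compiled to a NumPy
# function. A real generator would emit parallelized C/CUDA instead.
u_c, u_l, u_r, dx, D = sp.symbols("u_c u_l u_r dx D")
rhs_expr = D * (u_l - 2 * u_c + u_r) / dx**2   # second-order central stencil
rhs = sp.lambdify((u_c, u_l, u_r, dx, D), rhs_expr, "numpy")

u = np.sin(np.linspace(0, np.pi, 64))
dudt = rhs(u[1:-1], u[:-2], u[2:], np.pi / 63, 1.0)  # interior points only
print("max |du/dt| =", np.abs(dudt).max())
```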

  13. Development of a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI)

    Huang Bormin; Mielikainen, Jarno; Oh, Hyunjong; Allen Huang, Hung-Lung

    2011-01-01

    Satellite-observed radiance is a nonlinear functional of surface properties and of atmospheric temperature and absorbing gas profiles, as described by the radiative transfer equation (RTE). In the era of hyperspectral sounders with thousands of high-resolution channels, the computation of the radiative transfer model becomes more time-consuming. The radiative transfer model performance in operational numerical weather prediction systems still limits the number of channels we can use in hyperspectral sounders to only a few hundred. To take full advantage of such high-resolution infrared observations, a computationally efficient radiative transfer model is needed to facilitate satellite data assimilation. In recent years the programmable commodity graphics processing unit (GPU) has evolved into a highly parallel, multi-threaded, many-core processor with tremendous computational speed and very high memory bandwidth. The radiative transfer model is very suitable for GPU implementation, taking advantage of the hardware's efficiency and parallelism, since the radiances of many channels can be calculated in parallel on GPUs. In this paper, we develop a GPU-based high-performance radiative transfer model for the Infrared Atmospheric Sounding Interferometer (IASI) launched in 2006 onboard METOP-A, the first of the European meteorological polar-orbiting satellites. Each IASI spectrum has 8461 spectral channels. The IASI radiative transfer model consists of three modules. The first module, for computing the regression predictors, takes less than 0.004% of CPU time, while the second module, for transmittance computation, and the third module, for radiance computation, take approximately 92.5% and 7.5%, respectively. Our GPU-based IASI radiative transfer model is developed to run on a low-cost personal supercomputer with four GPUs with 960 compute cores in total, delivering a theoretical peak performance of nearly 4 TFlops. By massively parallelizing the second and third modules, we reached a 364x speedup.
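
    The channel-parallel structure described above can be sketched as follows: each channel's radiance is a sum of layer emissions weighted by transmittance differences, and all channels are evaluated at once along one array axis, the axis a GPU would parallelize over. The profiles and transmittances below are synthetic placeholders, not IASI coefficients.

```python
import numpy as np

# Minimal sketch of clear-sky radiance, vectorized over all channels.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(nu, T):
    """Planck spectral radiance at frequency nu [Hz], temperature T [K]."""
    return 2 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * T)) - 1.0)

n_chan, n_lev = 8461, 100                       # IASI has 8461 channels
nu = np.linspace(1.9e13, 8.1e13, n_chan)        # toy frequency grid [Hz]
T = np.linspace(220.0, 290.0, n_lev)            # toy temperature profile
# Synthetic level-to-space transmittance, decreasing toward the surface:
tau = np.exp(-np.outer(np.linspace(0.0, 2.0, n_lev), np.ones(n_chan)))

dtau = tau[:-1] - tau[1:]                       # per-layer transmittance drop
layer_T = 0.5 * (T[:-1] + T[1:])
# All channels at once (the GPU-parallel dimension is the last axis):
radiance = (planck(nu[None, :], layer_T[:, None]) * dtau).sum(axis=0)
radiance += planck(nu, 300.0) * tau[-1]         # surface term, T_s = 300 K
print(radiance.shape, radiance[:3])
```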

  14. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high-resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
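
    A minimal sketch of the start/stop/roll-back steering loop is given below, with a toy SIR update standing in for the environment's individual-based model; the class and its methods are invented for illustration.

```python
import copy

# Minimal sketch of interactive steering: a simulation that can be
# checkpointed and rolled back so an analyst can try interventions.
class SteerableSimulation:
    def __init__(self, state):
        self.state = state                  # {"S": ..., "I": ..., "R": ..., "t": ...}
        self.checkpoints = {}

    def step(self, beta=0.3, gamma=0.1):
        s, i, r = self.state["S"], self.state["I"], self.state["R"]
        n = s + i + r
        new_inf = beta * s * i / n          # toy SIR transmission
        new_rec = gamma * i
        self.state = {"S": s - new_inf, "I": i + new_inf - new_rec,
                      "R": r + new_rec, "t": self.state["t"] + 1}

    def checkpoint(self, label):
        self.checkpoints[label] = copy.deepcopy(self.state)

    def rollback(self, label):
        self.state = copy.deepcopy(self.checkpoints[label])

sim = SteerableSimulation({"S": 9990.0, "I": 10.0, "R": 0.0, "t": 0})
for _ in range(20):
    sim.step()
sim.checkpoint("day20")
for _ in range(20):
    sim.step()                               # let the outbreak run unmitigated
peak_no_action = sim.state["I"]
sim.rollback("day20")                        # roll back and apply an intervention
for _ in range(20):
    sim.step(beta=0.15)                      # e.g., a distancing intervention
print(f"I(day40): no action {peak_no_action:.0f} vs intervention {sim.state['I']:.0f}")
```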

  15. Development of a code and models for high burnup fuel performance analysis

    Kinoshita, M; Kitajima, S [Central Research Inst. of Electric Power Industry, Tokyo (Japan)]

    1997-08-01

    First, high burnup LWR fuel behavior is discussed and the models necessary for its analysis are reviewed. These aspects of behavior are the changes of power history due to the higher enrichment, the temperature feedback due to fission gas release and the resultant degradation of gap conductance, axial fission gas transport in the fuel free volume, fuel conductivity degradation due to fission products in solution, and the modification of the fuel micro-structure. The models developed for these phenomena, the modifications in the code, and benchmark results based mainly on the Risoe fission gas project are presented. Finally, the rim effect, which is observed only at the fuel periphery, is discussed with a focus on fuel conductivity degradation and swelling due to porosity development. (author). 18 refs, 13 figs, 3 tabs.

  16. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database named FALCAO. It was created and implemented to support the storage of the monitored variables in the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The data logical model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information at short intervals of time. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful and its existence leads to a considerably favorable situation. It is now essential to the routine of the researchers involved, not only due to the substantial improvement of the process but also due to the reliability associated with it. (author)

  17. High Performance Computing Multicast

    2012-02-01

    Includes the reference “A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.), and a list of acronyms: HPC, High Performance Computing; IP/IPv4, Internet Protocol (version 4.0); IPMC, Internet Protocol Multicast; LAN, Local Area Network; MCMD, Dr. Multicast; MPI, Message Passing Interface.

  18. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code. This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure, and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to a Clojure REPL with Leiningen.

  19. Nonlinear-regression methods for modeling of heteroscedastic retention data in reversed-phase high-performance liquid chromatography

    Hendriks, M.M.W.B.; Coenegracht, P.M.J.; Doornbos, D.A.

    1994-01-01

    New models have been developed that accurately describe the response surfaces of capacity factors that are a function of changes in the pH and the fraction of organic modifier in reversed-phase high-performance liquid chromatography (RP-HPLC). The purpose of this article is to illustrate one of the

  20. Photons, photosynthesis, and high-performance computing: challenges, progress, and promise of modeling metabolism in green algae

    Chang, C H; Graf, P; Alber, D M; Kim, K; Murray, G; Posewitz, M; Seibert, M

    2008-01-01

    The complexity associated with biological metabolism considered at a kinetic level presents a challenge to quantitative modeling. In particular, the relatively sparse knowledge of parameters for enzymes with known kinetic responses is problematic. The possible space of these parameters is of high dimension, and sampling of such a space typifies a combinatorial explosion of possible dynamic states. However, with sufficient quantitative transcriptomics, proteomics, and metabolomics data at hand, these challenges can be met by high-performance software with sampling, fitting, and optimization capabilities. With this in mind, we present the High-Performance Systems Biology Toolkit HiPer SBTK, an evolving software package to simulate, fit, and optimize metabolite concentrations and fluxes within the space of rate and binding parameters associated with detailed enzyme kinetic models. We present our chosen modeling paradigm for the formulation of metabolic pathway models, the means to address the challenge of representing such models in a precise and persistent fashion using the standardized Systems Biology Markup Language, and our second-generation model of H2-associated Chlamydomonas metabolism. Processing of such models for hierarchically parallelized simulation and optimization, job specification by the user through a GUI interface, software capabilities and initial scaling data, and the mapping of the computation to biological questions are also discussed.
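
    The sample-and-simulate workflow can be sketched as follows: kinetic parameters are drawn from log-uniform ranges and a small pathway model is integrated for each sample. A single Michaelis-Menten reaction stands in for the full Chlamydomonas network, and the parameter ranges are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of parameter-space sampling over a kinetic model.
rng = np.random.default_rng(0)

def rhs(t, y, vmax, km):
    s, p = y
    v = vmax * s / (km + s)        # Michaelis-Menten rate law
    return [-v, v]

results = []
for _ in range(100):               # sample the kinetic parameter space
    vmax = 10 ** rng.uniform(-1, 1)    # log-uniform over two decades
    km = 10 ** rng.uniform(-2, 0)
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], args=(vmax, km))
    results.append((vmax, km, sol.y[1, -1]))   # final product concentration

best = max(results, key=lambda r: r[2])
print(f"best sample: vmax={best[0]:.3f}, km={best[1]:.3f}, P(10)={best[2]:.3f}")
```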

  1. High Performance Concrete

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to highlight the advantages and drawbacks of using a particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid road pavements and one for self-compacting concrete.

  2. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and poly(ethylene naphthalate)). Two different methods were used to prepare the foam samples: high-temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy.

  3. High-performance coupled poro-hydro-mechanical models to resolve fluid escape pipes

    Räss, Ludovic; Makhnenko, Roman; Podladchikov, Yury

    2017-04-01

    Field observations and laboratory experiments exhibit inelastic deformation features arising in many coupled settings relevant to geo-applications. These irreversible deformations and their specific patterns suggest a rather ductile or brittle mechanism, such as viscous creep or micro-cracking, taking place on both geological (long) and human (short) timescales. In order to understand the underlying mechanisms responsible for these deformation features, there is a current need to accurately resolve the non-linearities inherent to strongly coupled physical processes. Among the large variety of modelling tools and software available nowadays in the community, very few are capable of efficiently solving coupled systems with high accuracy in both space and time while running efficiently on modern hardware. Here, we propose a robust framework to solve coupled multi-physics hydro-mechanical processes at very high spatial and temporal resolution in both two and three dimensions. Our software relies on the Finite-Difference Method, and a pseudo-transient scheme is used to converge to the implicit solution of the system of poro-visco-elasto-plastic equations at each physical time step. The rheology, including viscosity estimates for major reservoir rock types, is inferred from novel lab experiments and confirms the ease of flow of sedimentary rocks. Our results propose a physical mechanism responsible for the generation of high-permeability pathways in fluid-saturated porous media and predict their propagation at rates observable on operational timescales. Finally, our software scales linearly on more than 5000 GPUs.
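
    The pseudo-transient idea can be sketched on a single nonlinear diffusion equation: a damped, wave-like pseudo-time update iterates the steady-state residual to zero, which is what makes the method converge quickly and map well to GPUs. This follows the general pseudo-transient literature rather than the authors' GPU code, and all values are illustrative.

```python
import numpy as np

# Minimal sketch of an accelerated pseudo-transient (PT) solve for
# steady nonlinear diffusion: adding damped "inertia" turns the
# parabolic update into a damped wave and cuts the iteration count.
nx = 128
u = np.zeros(nx); u[0] = 1.0                 # Dirichlet BCs: u(0)=1, u(1)=0
dx = 1.0 / (nx - 1)
vel = np.zeros(nx - 2)                        # pseudo-velocity (inertia)
damp = 1.0 - 2.0 * np.pi / nx                 # near-optimal damping factor

for it in range(1, 100_001):
    k = 0.1 + u**3                            # nonlinear conductivity
    k_face = 0.5 * (k[1:] + k[:-1])
    flux = -k_face * np.diff(u) / dx
    resid = -np.diff(flux) / dx               # steady-state residual
    dtau = 0.9 * dx / np.sqrt(k.max())        # wave-CFL pseudo-time step
    vel = damp * vel + dtau * resid           # inertial (second-order) update
    u[1:-1] += dtau * vel
    if np.abs(resid).max() < 1e-6:
        print(f"converged in {it} PT iterations")
        break
```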

  4. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential for accuracy. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are demonstrated with several experiments ranging from hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of the subsurface systems
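
    The coupling pattern itself can be sketched as follows: a reservoir-scale pressure update queries a fluid-property routine at the current state, where a molecular dynamics service would sit in the real framework. The MD call is mocked with a correlation, and all names and numbers are placeholders.

```python
import numpy as np

def md_viscosity(pressure, temperature=330.0):
    """Stand-in for an MD service returning fluid viscosity [Pa s]."""
    return 5e-4 * (1.0 + 1e-8 * (pressure - 1e7)) * (300.0 / temperature)

# Toy 1D single-phase Darcy flow with pressure-dependent properties.
nx, dx, dt = 100, 10.0, 3600.0                 # 1 km domain, hourly steps
p = np.full(nx, 1.0e7); p[0] = 2.0e7           # injection at left boundary
perm, poro, ct = 1e-15, 0.2, 1e-9              # permeability, porosity, compr.

for step in range(720):                        # 30 days of coupling cycles
    mu = md_viscosity(p)                       # property call per update
    mu_face = 0.5 * (mu[1:] + mu[:-1])
    flux = -(perm / mu_face) * np.diff(p) / dx # Darcy flux between cells
    p[1:-1] -= dt / (poro * ct) * np.diff(flux) / dx
    p[0] = 2.0e7                               # hold injector pressure
print(f"pressure at mid-domain after 30 days: {p[nx//2]/1e6:.2f} MPa")
```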

  5. Analysis and modeling of social influence in high performance computing workloads

    Zheng, Shuai

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workloads prediction and scheduling, we propose a fast-converging, computationally-efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify the social relationships by using a small portion of incoming jobs and (2) can efficiently track group evolution over time. © 2011 Springer-Verlag.
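
    As an illustration of the power-law check mentioned above, the sketch below fits the exponent of a synthetic jobs-per-user distribution with the standard continuous maximum-likelihood estimator; real HPC log data would replace the synthetic sample.

```python
import numpy as np

# Minimal sketch of a power-law exponent fit: the continuous MLE
# alpha = 1 + n / sum(ln(x / xmin)) applied above a cutoff xmin.
rng = np.random.default_rng(1)
# Synthetic "jobs per user": Pareto with density exponent ~2.5, xmin = 5.
jobs_per_user = (rng.pareto(1.5, size=2000) + 1.0) * 5.0

xmin = 5.0
x = jobs_per_user[jobs_per_user >= xmin]
alpha = 1.0 + x.size / np.log(x / xmin).sum()
print(f"n = {x.size}, estimated power-law exponent alpha = {alpha:.2f}")
```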

  6. Performance assessment modeling of high level nuclear wasteforms from the pyroprocess fuel cycle

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Several performance assessment (PA) analyses have been completed to estimate the release to the accessible environment of radionuclides from spent light water reactor (LWR) fuel emplaced in the proposed Yucca Mountain repository. Probabilistic methods were utilized because of the complexity of the repository system. Recent investigations have been conducted to identify the merits of a pyroprocess fuel cycle. This cycle utilizes high-temperature molten salts and metals to partially separate actinides and fission products. In a closed liquid metal reactor (LMR) fuel cycle, this allows recycling of nearly all of the actinides. In a once-through cycle, it isolates the actinides for storage in a wasteform that can be specifically tailored for their retention. With appropriate front-end treatment, this process can also be used to treat LWR spent fuel.

  7. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  8. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  9. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    . Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  10. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

    Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a newly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost.
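
    A waveform-replication step can be sketched as follows: a target pulsatile flow waveform is synthesized from a few Fourier harmonics and converted to a piston velocity command. The harmonic amplitudes, phases, and piston area below are illustrative assumptions, not the measured aortic values used by the authors.

```python
import numpy as np

def aortic_flow(t, period=0.8):
    """Toy pulsatile waveform [mL/s]: mean flow plus a few harmonics."""
    w = 2.0 * np.pi / period
    q = 90.0                                   # mean flow [mL/s]
    amps = [230.0, 110.0, 40.0]                # harmonic amplitudes (assumed)
    phases = [-0.6, 1.9, 2.9]                  # harmonic phases [rad] (assumed)
    for n, (a, ph) in enumerate(zip(amps, phases), start=1):
        q += a * np.cos(n * w * t + ph)
    return q

t = np.linspace(0.0, 0.8, 400)
q = aortic_flow(t)
# Piston velocity command for a cylinder of cross-section A: v(t) = Q(t) / A.
area_cm2 = 20.0
v = q / area_cm2                               # [cm/s], since Q is in mL/s = cm^3/s
print(f"peak flow {q.max():.0f} mL/s, peak piston speed {v.max():.1f} cm/s")
```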

  11. Liquid-metal fast breeder reactor fuel rod performance and modeling at high burnup

    Verbeek, P.; Toebbe, H.; Hoppe, N.; Steinmetz, B.

    1978-01-01

    The fuel rod modeling codes IAMBUS and COMETHE were used in the analysis and interpretation of postirradiation examination results of mixed-oxide fuel pins. These codes were developed in the framework of the SNR-300 research and development (R and D) program at Interatom and Belgonucleaire, respectively. SNR-300 is a liquid-metal fast breeder reactor demonstration plant designed and presently under construction in consortial cooperation by Germany, Belgium, and the Netherlands. RAPSODIE I, a two-bundle irradiation experiment, was irradiated in the French test FBR RAPSODIE FORTISSIMO and is one of the key irradiation experiments within the SNR-300 R and D program. The comparison of code predictions with postirradiation examination results concentrates on clad diameter expansions, clad total axial elongations, fuel differential and total axial elongations, fuel restructuring, and fission gas release. Fuel rod modeling was considered in the light of benchmarking of the codes, and fuel rod design for operation at low and high burnup was also considered.

  12. Plume Tracker: Interactive mapping of volcanic sulfur dioxide emissions with high-performance radiative transfer modeling

    Realmuto, Vincent J.; Berk, Alexander

    2016-11-01

    We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300x improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ±20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ±50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.

  13. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance.

  14. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into today's standard computer network communication. It will be followed by a far more technical part that uses the high performance computer network standards of the 90's, with 1 Gbit/s systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols, as well as some details of the concerned Wide Area Network (WAN) standards, including the basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  15. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high-speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high-performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory-to-memory between clusters over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.

  16. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) the Comprehensive Test Ban Treaty and (2) the Stockpile Life Extension Program, which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal-environment Verification and Validation Plan.

  17. Analysis of Highly Wind Power Integrated Power System model performance during Critical Weather conditions

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    , is provided by the hour-ahead power balancing model, i.e. the Simulation power Balancing model (SimBa). The regulating power plan is prepared from the day-ahead power production plan and the hour-ahead wind power forecast. The wind power values (forecast and available) are provided by the Correlated Wind power fluctuations (CorWind) model, where the wind turbine storm controllers are also implemented.

  18. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  19. Modeling and identification for high-performance robot control : an RRR-robotic arm case study

    Kostic, D.; Jager, de A.G.; Steinbuch, M.; Hensen, R.H.A.

    2004-01-01

    We explain a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: (i) derivation of robot kinematic and dynamic models and establishing correctness of their structures; (ii) experimental estimation
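
    Step (ii) can be illustrated with the standard linear-in-parameters identification trick: rigid-body dynamics are linear in the unknown inertial parameters, so the torques satisfy tau = Y(q, q', q'') theta and theta follows from least squares. The sketch below uses a 1-DOF pendulum with simulated noisy measurements in place of the RRR arm; all values are illustrative.

```python
import numpy as np

# Minimal sketch of experimental parameter estimation for robot
# dynamics: build the regressor Y from measured motion, then solve
# tau = Y @ theta in the least-squares sense.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)
q = 0.8 * np.sin(2.0 * t)                      # exciting trajectory
qd = 1.6 * np.cos(2.0 * t)
qdd = -3.2 * np.sin(2.0 * t)

# True parameters: inertia I, viscous friction b, gravity term m*g*l.
theta_true = np.array([0.12, 0.05, 1.5])
Y = np.column_stack([qdd, qd, np.sin(q)])      # regressor matrix
tau = Y @ theta_true + 0.01 * rng.standard_normal(t.size)  # noisy torques

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print("estimated [I, b, mgl]:", np.round(theta_hat, 3))
```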

  20. Explaining high and low performers in complex intervention trials: a new model based on diffusion of innovations theory.

    McMullen, Heather; Griffiths, Chris; Leber, Werner; Greenhalgh, Trisha

    2015-05-31

    Complex intervention trials may require health care organisations to implement new service models. In a recent cluster randomised controlled trial, some participating organisations achieved high recruitment, whereas others found it difficult to assimilate the intervention and were low recruiters. We sought to explain this variation and develop a model to inform organisational participation in future complex intervention trials. The trial included 40 general practices in a London borough with high HIV prevalence. The intervention was offering a rapid HIV test as part of the New Patient Health Check. The primary outcome was mean CD4 cell count at diagnosis. The process evaluation consisted of several hundred hours of ethnographic observation, 21 semi-structured interviews and analysis of routine documents (e.g., patient leaflets, clinical protocols) and trial documents (e.g., inclusion criteria, recruitment statistics). Qualitative data were analysed thematically using--and, where necessary, extending--Greenhalgh et al.'s model of diffusion of innovations. Narrative synthesis was used to prepare case studies of four practices representing maximum variety in clinicians' interest in HIV (assessed by level of serological testing prior to the trial) and performance in the trial (high vs. low recruiters). High-recruiting practices were, in general though not invariably, also innovative practices. They were characterised by strong leadership, good managerial relations, readiness for change, a culture of staff training and available staff time ('slack resources'). Their front-line staff believed that patients might benefit from the rapid HIV test ('relative advantage'), were emotionally comfortable administering it ('compatibility'), skilled in performing it ('task issues') and made creative adaptations to embed the test in local working practices ('reinvention'). Early experience of a positive HIV test ('observability') appeared to reinforce staff commitment to recruiting

  1. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  2. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  3. High-Fidelity Microstructural Characterization and Performance Modeling of Aluminized Composite Propellant

    Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.

    2017-01-01

    Image processing and stereological techniques were used to characterize the heterogeneity of a composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and the individual components of the composite propellant were determined with a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. The presented model introduces the presence of aluminum particles within the propellant: the thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface, and aluminum combustion is considered in the gas phase using a single global reaction. Properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.

  4. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  5. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu

    2013-01-01

    multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our

  6. Program for aerodynamic performance tests of helium gas compressor model of the gas turbine high temperature reactor (GTHTR300)

    Takada, Shoji; Takizuka, Takakazu; Kunimoto, Kazuhiko; Yan, Xing; Itaka, Hidehiko; Mori, Eiji

    2003-01-01

    A research and development program for helium gas compressor aerodynamics was planned for the power conversion system of the Gas Turbine High Temperature Reactor (GTHTR300). An axial compressor with a polytropic efficiency of 90% and a surge margin of more than 30% was designed with 3-dimensional aerodynamic methods. The performance and surge margin of a helium gas compressor tend to be lower due to the higher boss ratio, which makes the tip clearance wide relative to the blade height, as well as due to the larger number of stages. The compressor was designed on the basis of methods and data for the aerodynamic design of industrial open-cycle gas turbines. To validate the design of the helium gas compressor of the GTHTR300, aerodynamic performance tests were planned, and a 1/3-scale, 4-stage compressor model was designed. In the tests, the performance data of the helium gas compressor model will be acquired using helium gas as the working fluid. The maximum design pressure at the model inlet is 0.88 MPa, which allows the Reynolds number to be sufficiently high. The present study was commissioned by the Ministry of Education, Culture, Sports, Science and Technology of Japan. (author)

  7. A Model for Predicting Student Performance on High-Stakes Assessment

    Dammann, Matthew Walter

    2010-01-01

    This research study examined the use of student achievement on reading and math state assessments to predict success on the science state assessment. Multiple regression analysis was utilized to test the prediction for all students in grades 5 and 8 in a mid-Atlantic state. The prediction model developed from the analysis explored the combined…
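
    A minimal sketch of such a prediction model is shown below: an ordinary multiple regression of science scores on reading and math scores, fitted and scored on held-out students. The data are synthetic stand-ins for the state assessment records.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: reading and math scores predicting science.
rng = np.random.default_rng(42)
n = 1000
reading = rng.normal(500, 50, n)
math = rng.normal(500, 50, n)
science = 0.45 * reading + 0.40 * math + rng.normal(0, 25, n) + 75

X = np.column_stack([reading, math])
X_tr, X_te, y_tr, y_te = train_test_split(X, science, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(f"R^2 on held-out students: {model.score(X_te, y_te):.2f}")
print("coefficients (reading, math):", np.round(model.coef_, 3))
```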

  8. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
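
    The expensive step in samplers like TMCMC or CMA-ES is the many independent model evaluations per generation, which is exactly what such frameworks parallelize. The sketch below shows that task-parallel pattern with a cheap stand-in model; it illustrates the idea only and is not the Π4U API.

```python
import numpy as np
from multiprocessing import Pool

def log_likelihood(theta):
    """Stand-in for an expensive simulation compared against data."""
    prediction = theta[0] * np.arange(10) + theta[1]
    data = 2.0 * np.arange(10) + 1.0
    return -0.5 * np.sum((prediction - data) ** 2) / 0.5**2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One "generation" of candidate parameter samples:
    samples = rng.uniform(-5.0, 5.0, size=(256, 2))
    with Pool() as pool:                       # farm evaluations out to workers
        logL = pool.map(log_likelihood, samples)
    best = samples[int(np.argmax(logL))]
    print("best sample of this generation:", np.round(best, 2))
```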

  9. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  10. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  11. High-performance modeling of plasma-based acceleration and laser-plasma interactions

    Vay, Jean-Luc; Blaclard, Guillaume; Godfrey, Brendan; Kirchen, Manuel; Lee, Patrick; Lehe, Remi; Lobet, Mathieu; Vincenti, Henri

    2016-10-01

    Large-scale numerical simulations are essential to the design of plasma-based accelerators and laser-plasma interactions for ultra-high intensity (UHI) physics. The electromagnetic Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations, as it is based on first principles, captures all kinetic effects, and scales favorably to many cores on supercomputers. The standard PIC algorithm relies on second-order finite-difference discretization of the Maxwell and Newton-Lorentz equations. We present here novel formulations, based on very high-order pseudo-spectral Maxwell solvers, which enable near-total elimination of the numerical Cherenkov instability and increased accuracy over the standard PIC method for laboratory-frame and Lorentz-boosted-frame simulations. We also present the latest implementations in the PIC modules Warp-PICSAR and FBPIC on the Intel Xeon Phi and GPU architectures. Examples of applications are given on the simulation of laser-plasma accelerators and high-harmonic generation with plasma mirrors. Work supported by US-DOE Contracts DE-AC02-05CH11231 and by the European Commission through the Marie Skłodowska-Curie fellowship PICSSAR, Grant Number 624543. This work used resources of NERSC.

  12. Characterization and modeling of performance of Polymer Composites Reinforced with Highly Non-Linear Cellulosic Fibers

    Rozite, L; Joffe, R; Varna, J; Nyström, B

    2012-01-01

    The behaviour of highly non-linear cellulosic fibers and their composite is characterized. Micro-mechanisms occurring in these materials are identified. Mechanical properties of regenerated cellulose fibers and composites are obtained using simple tensile test. Material visco-plastic and visco-elastic properties are analyzed using creep tests. Two bio-based resins are used in this study – Tribest and EpoBioX. The glass and flax fiber composites are used as reference materials to compare with Cordenka fiber laminates.

  13. Characterization and modeling of performance of Polymer Composites Reinforced with Highly Non-Linear Cellulosic Fibers

    Rozite, L.; Joffe, R.; Varna, J.; Nyström, B.

    2012-02-01

    The behaviour of highly non-linear cellulosic fibers and their composite is characterized. Micro-mechanisms occurring in these materials are identified. Mechanical properties of regenerated cellulose fibers and composites are obtained using simple tensile test. Material visco-plastic and visco-elastic properties are analyzed using creep tests. Two bio-based resins are used in this study - Tribest and EpoBioX. The glass and flax fiber composites are used as reference materials to compare with Cordenka fiber laminates.

  14. Performance of the High-dimensional Propensity Score in a Nordic Healthcare Model

    Hallas, Jesper; Pottegård, Anton

    2017-01-01

    regression, estimating the coxib/tNSAID hazard ratio (HR). Values below 1.00 indicate a lower estimated hazard with coxibs. We built hdPS models with inclusion of up to 500 diagnosis and 500 prescription drug covariates. The crude HR was 1.76 (95% confidence interval: 1.57-1.97), decreasing to 1.12 (1...... was restricted to non-users of low-dose aspirin. The estimate based on 500 diagnoses alone was higher than the estimate based on 500 prescription drugs alone (0.99 versus 0.91). We conclude that hdPS does work within a Nordic setting and that prescription data are more effective than diagnosis data in achieving

  15. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ("LInear Accelerator Research code") is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to and checked against the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. Having presented the LIAR program at the LINAC96 and PAC97 conferences, we now introduce it to the European particle accelerator community.

  16. High-performing trauma teams: frequency of behavioral markers of a shared mental model displayed by team leaders and quality of medical performance.

    Johnsen, Bjørn Helge; Westli, Heidi Kristina; Espevik, Roar; Wisborg, Torben; Brattebø, Guttorm

    2017-11-10

    High quality team leadership is important for the outcome of medical emergencies. However, the behavioral markers of leadership are not well defined. The present study investigated the effect of the frequency of behavioral markers of shared mental models (SMM) on the quality of medical management. Training video recordings of 27 trauma teams simulating emergencies were analyzed according to the team leader's frequency of shared mental model behavioral markers. The results showed a positive correlation of quality of medical management with leaders sharing information without an explicit demand for the information ("push" of information) and with leaders communicating their situational awareness (SA) and demonstrating implicit supporting behavior. When separating the sample into higher versus lower performing teams, the higher performing teams had leaders who displayed a greater frequency of "push" of information and of communication of SA and supportive behavior. No difference was found for the behavioral marker of team initiative, measured as bringing up suggestions to other team members. The results of this study emphasize the team leader's role in initiating and updating a team's shared mental model. Team leaders should also set expectations for acceptable interaction patterns (e.g., promoting information exchange) and create a team climate that encourages behaviors, such as mutual performance monitoring, backup behavior, and adaptability, to enhance SMM.

  17. R high performance programming

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  18. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    This report summarizes the final product developed for the US DOE Small Business Innovation Research (SBIR) Phase II grant made to Engineering Mechanics Corporation of Columbus (Emc2) between April 16, 2014 and August 31, 2016 titled ‘Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures’. Many US companies have moved fabrication and production facilities off shore because of cheaper labor costs. A key aspect in bringing these jobs back to the US is the use of technology to render US-made fabrications more cost-efficient overall with higher quality. One significant advantage that has emerged in the US over the last two decades is the use of virtual design for fabrication of small and large structures in weld fabrication industries. Industries that use virtual design and analysis tools have reduced material part size, developed environmentally-friendly fabrication processes, improved product quality and performance, and reduced manufacturing costs. Indeed, Caterpillar Inc. (CAT), one of the partners in this effort, continues to have a large fabrication presence in the US because of the use of weld fabrication modeling to optimize fabrications by controlling weld residual stresses and distortions and improving fatigue, corrosion, and fracture performance. This report describes Emc2’s DOE SBIR Phase II final results to extend an existing, state-of-the-art software code, Virtual Fabrication Technology (VFT®), currently used to design and model large welded structures prior to fabrication - to a broader range of products with widespread applications for small and medium-sized enterprises (SMEs). VFT® helps control distortion, can minimize and/or control residual stresses, control welding microstructure, and pre-determine welding parameters such as weld-sequencing, pre-bending, thermal-tensioning, etc. VFT® uses material properties, consumable properties, etc. as inputs

  19. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...
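The point above about memory locality and vector processing can be made concrete with a toy comparison. The sketch below contrasts an interpreted scalar loop with a vectorized reduction over the same contiguous array; NumPy stands in here for any vectorizing toolchain, and timings will vary by machine.

```python
# Toy illustration of the memory-locality / vector-processing point:
# the same reduction as an interpreted loop and as a vectorized call
# over contiguous memory.
import time
import numpy as np

x = np.random.rand(2_000_000)

t0 = time.perf_counter()
total = 0.0
for v in x:                        # scalar loop: poor locality, no SIMD
    total += v * v
t1 = time.perf_counter()

t2 = time.perf_counter()
total_vec = float(np.dot(x, x))    # contiguous, vectorized reduction
t3 = time.perf_counter()

print(f"loop {t1 - t0:.3f}s  vectorized {t3 - t2:.5f}s  "
      f"same result: {np.isclose(total, total_vec)}")
```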

  20. Use of simplified models in the performance assessment of a high-level waste repository system in Japan

    Pensado, Osvaldo; Mohanty, Sitakanta; Kanno, Takeshi; Tochigi, Yoshikatsu

    2005-01-01

This paper explores simplifications to the H12 performance assessment model to enhance performance in Monte Carlo analyses. It is shown that reference-case results similar to those of the H12 model can be obtained by describing the buffer material surrounding a waste package as a planar body. Other possible simplifications to the performance assessment model, in areas related to the stratification of the host rock transmissivity domain and solubility constraints in the buffer material, are explored. (author)

1. Performance of the first short model 150 mm aperture Nb$_3$Sn Quadrupole MQXFS for the High-Luminosity LHC upgrade

    Chlachidze, G.; et al.

    2016-08-30

The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb3Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large aperture Nb3Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets, with an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb3Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built, with coils fabricated by both LARP and CERN. The magnet performance was tested at Fermilab's vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, ramp rate and temperature dependence studies.

  2. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance, including increased productivity, improved customer service, and reduced turnover. What stands out from …, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in the HPWP literature and potential variables that can facilitate or hinder the effects of these practices on innovation and performance…

  3. Well performance model

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  4. High school and college biology: A multi-level model of the effects of high school biology courses on student academic performance in introductory college biology courses

    Loehr, John Francis

The issue of student preparation for college study in science has been an ongoing concern for both college-bound students and educators of various levels. This study uses a national sample of college students enrolled in introductory biology courses to address the relationship between high school biology preparation and subsequent introductory college biology performance. Multi-level modeling was used to investigate the relationship between students' high school science and mathematics experiences and college biology performance. This analysis controls for student demographic and educational background factors along with factors associated with the college or university attended. The results indicated that high school course-taking and science instructional experiences have the largest impact on student achievement in the first introductory college biology course. In particular, enrollment in courses such as high school Calculus and Advanced Placement (AP) Biology, along with biology course content that focuses on developing a deep understanding of the topics, is found to be positively associated with student achievement in introductory college biology. On the other hand, experiencing high numbers of laboratory activities, demonstrations, and independent projects, along with higher levels of laboratory freedom, is associated with lower achievement. These findings are relevant to high school biology teachers, college students, their parents, and educators looking beyond the goal of high school graduation.
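A minimal sketch of the multi-level setup this record describes: student-level predictors with a random intercept for the institution attended. The column names and synthetic data below are invented for illustration; the study's actual variables and model are richer.

```python
# Hedged sketch of a multi-level (mixed-effects) model: students nested
# within colleges, with a random intercept per college.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "college_id": rng.integers(0, 20, n),     # grouping level (institution)
    "hs_calculus": rng.binomial(1, 0.4, n),   # took high school Calculus
    "ap_biology": rng.binomial(1, 0.3, n),    # took AP Biology
})
df["college_bio_score"] = (70 + 5 * df.hs_calculus + 4 * df.ap_biology
                           + rng.normal(0, 8, n))

model = smf.mixedlm("college_bio_score ~ hs_calculus + ap_biology",
                    df, groups=df["college_id"]).fit()
print(model.summary())
```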

  5. Python high performance programming

    Lanaro, Gabriele

    2013-01-01

An exciting, easy-to-follow guide illustrating the techniques to boost the performance of Python code, and their applications, with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book covers basic and advanced topics, so it will be useful whether you are a new or a seasoned Python developer.

  6. High performance germanium MOSFETs

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

Ge is a very promising material as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  7. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

Ge is a very promising material as a future channel material for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices

  8. NGINX high performance

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  9. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase II

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 2 project will enhance Loci-STREAM which is a high performance, high fidelity simulation tool already being...

  10. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase I

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 1 project will enhance Loci-STREAM which is a high performance, high fidelity simulation tool already being...

  11. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    Alameda, J. C.

    2011-12-01

Development and optimization of computational science models, particularly on high performance computers (and, with the advent of ubiquitous multicore processors, on practically every system), has long been accomplished with basic software tools: command-line compilers, debuggers, and performance tools that have not changed substantially since the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as OpenMP plus MPI) to take full advantage of high performance computers with an increasing core count per shared memory node, have made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improve PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific applications and our understanding of shortcomings in Eclipse PTP from an application developer perspective, which drives the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into

  12. NIF capsule performance modeling

    Weber S.

    2013-11-01

Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  13. High performance MEAs. Final report

    NONE

    2012-07-15

The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, and for fabrication of the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach and progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  14. Comparing the Performance of Artificial Intelligence Models in Estimating Water Quality Parameters in Periods of Low and High Water Flow

Majid Montaseri

    2017-03-01

…, Na and Mg. The optimal ANN (LM) architecture used in this study consists of an input layer with seven inputs and one hidden and one output layer, with two and five neurons for the dry and wet periods, respectively. Like ANN (LM), the ANFIS-SC model had the best performance: the ANFIS with radii values of 0.72 and 0.58 has the highest R and the lowest RMSE for the dry and wet periods, respectively. Comparing the ANFIS-SC estimates with the measured data for the test stage demonstrates a high generalization capacity of the model, with relatively low error and high correlation. The scatter plots show that the ANFIS-SC predictions are closer to the corresponding measured TDS than those of the other models in both periods, and from the best-fit straight-line equations (taking the form y = ax) the coefficient for ANFIS-SC is closer to 1 than for the other models. ANFIS-SC predicted total dissolved solids (TDS) in the rivers of the Zarrineh River Basin with correlation coefficients of 0.975 and 0.969 and root-mean-square errors of 34.41 and 23.85 for the dry and wet periods, respectively. The obtained results showed the efficiency of the applied models in simulating the nonlinear behavior of TDS variations in terms of performance indices, and the results were also tested using a t-test to verify the robustness of the models at the 99% significance level. The poorest model in TDS simulation was ANN-GDA in both dry and wet periods, especially in the test period. The relationship between residuals and model-computed TDS values shows complete independence and random distribution, further supported by the negligibly small correlations for the ANFIS-SC models (R2 = 0.0012 for the dry period and R2 = 0.0214 for the wet period). Plots of the residuals versus model-computed values can be informative regarding model fit; if the residuals appear to behave randomly…
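For reference, the two performance indices quoted throughout this record, the correlation coefficient R and the root-mean-square error (RMSE), can be computed as below; the observed and predicted values shown are illustrative only.

```python
# Correlation coefficient R and RMSE for an observed/predicted pair of series.
import numpy as np

def r_and_rmse(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    r = np.corrcoef(observed, predicted)[0, 1]          # Pearson correlation
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return r, rmse

obs = [310.0, 295.0, 402.0, 350.0]    # e.g. measured TDS (mg/L), made-up numbers
pred = [300.0, 310.0, 390.0, 360.0]   # model output
print(r_and_rmse(obs, pred))
```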

  15. Base Station Performance Model

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  16. Acoustic Performance of Novel Fan Noise Reduction Technologies for a High Bypass Model Turbofan at Simulated Flights Conditions

    Elliott, David M.; Woodward, Richard P.; Podboy, Gary G.

    2010-01-01

Two novel fan noise reduction technologies, over-the-rotor acoustic treatment and soft stator vane technologies, were tested in an ultra-high bypass ratio turbofan model in the NASA Glenn Research Center's 9- by 15-Foot Low-Speed Wind Tunnel. The performance of these technologies was compared to that of the baseline fan configuration, which did not have these technologies. Sideline acoustic data and hot film flow data were acquired and are used to determine the effectiveness of the various treatments. The material used for the over-the-rotor treatment was foam metal, and two different types were used. The soft stator vanes had several internal cavities tuned to target certain frequencies. In order to accommodate the cavities it was necessary to use a cut-on stator to demonstrate the soft vane concept.

  17. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled the simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues to escalate this trend using parallel computing, with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever increasing scale of these supercomputers comes at the cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative for handling the aggravating issue of reduced MTBF. In this paper, we address this challenge by designing and implementing an efficient fault tolerant version of the Coupled Cluster (CC) method within NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.
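A minimal sketch of the in-memory data redundancy idea described above, not NWChem's implementation: each MPI rank keeps a replica of a partner rank's block in memory, so a failed rank's data can be rebuilt without reading a disk checkpoint. The pairing scheme and array contents are invented for illustration; run under mpiexec with an even number of ranks.

```python
# Toy in-memory redundancy: pair ranks and exchange replicas of each
# other's data blocks, so either rank's block survives the other's loss.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
assert size % 2 == 0, "this pairing scheme assumes an even number of ranks"

local_block = np.full(4, rank, dtype=np.float64)   # this rank's piece of the data
partner = rank ^ 1                                 # pair ranks (0,1), (2,3), ...

# Exchange blocks so each rank holds its partner's replica in memory.
replica = comm.sendrecv(local_block, dest=partner, source=partner)

# On a simulated fault of `partner`, its block is restored from the replica.
recovered = replica.copy()
print(f"rank {rank} holds a replica of rank {partner}'s block: {recovered}")
```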

  18. Performance modeling of Beamlet

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-01-01

    Detailed modeling of beam propagation in Beamlet has been made to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions

  19. Field-scale model for the natural attenuation of uranium at the Hanford 300 area using high performance computing

    Lichtner, Peter C [Los Alamos National Laboratory; Hammond, Glenn E [PNNL

    2009-01-01

Three-dimensional reactive flow and transport simulations are carried out to better understand the persistence of uranium [U(VI)] at the Hanford 300 Area bordering the Columbia River. The massively parallel code PFLOTRAN developed under a DOE SciDAC-2 project is employed in the simulations. The calculations were carried out on 4096 processor cores on ORNL's Jaguar XT4 & 5 Cray supercomputers with run times on the order of 6 hours, equivalent to several years if performed on a single processor with sufficient memory. A new conceptual model is presented for understanding present-day and future attenuation rates of U(VI) at the 300 Area site. Unique to the conceptual model is the recognition of three distinct phases in the evolution of the site corresponding to: (I) initial emplacement of waste; (II) present-day conditions of slow leaching of U(VI) from the Hanford sediments; and (III) the complete removal of non-labile U(VI) from the source region. This work focuses on Phase II. Both labile and non-labile forms of U(VI) are included in the model as sorbed and mineralized forms of U(VI), respectively. The non-labile form plays an important role in providing a long-term source of U(VI) as it slowly leaches out of the Hanford sediment. Rapid fluctuations in the Columbia River stage on hourly, weekly and seasonal time scales are found to play a major role in determining the migration behavior of U(VI). The calculations demonstrate that U(VI) is released into the Columbia River at a highly fluctuating rate in a ratchet-like behavior, with nonzero U(VI) flux occurring only during flow from contaminated sediment into the river. The cumulative flux, however, is found to increase approximately linearly with time. The flow rate and U(VI) flux into the Columbia River predicted by the model are highly sensitive to the value used in the conductance boundary condition at the river-sediment interface. By fitting the conductance to the measured piezometric head at well 399

  20. High performance proton accelerators

    Favale, A.J.

    1989-01-01

In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from the Los Alamos National Laboratory (LANL) physics and specifications to a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD Program, Grumman has the responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. LANL scientists have reviewed the physics design, as has a USA SDC review board. 9 figs

  1. ATR performance modeling concepts

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs), on the other hand, consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper summarizes the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
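One of the APM building blocks named above is the look-up table. The toy sketch below maps quantized operating conditions to a distribution over ATR reports; the OC keys and probabilities are invented for illustration.

```python
# Toy look-up-table APM: quantized operating conditions (OCs) map to a
# predicted distribution over the reports an ATR might produce.
apm_table = {
    # (target_type, clutter_level): distribution over ATR reports
    ("tank", "low"):  {"target": 0.92, "clutter": 0.08},
    ("tank", "high"): {"target": 0.71, "clutter": 0.29},
    ("truck", "low"): {"target": 0.85, "clutter": 0.15},
}

def predict_report_distribution(target_type, clutter_level):
    """Return the APM's predicted report distribution for these OCs."""
    return apm_table[(target_type, clutter_level)]

print(predict_report_distribution("tank", "high"))
```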

  2. Using high-performance mathematical modelling tools to predict erosion and sediment fluxes in peri-urban catchments

    Pereira, André; Conde, Daniel; Ferreira, Carla S. S.; Walsh, Rory; Ferreira, Rui M. L.

    2017-04-01

Deforestation and urbanization generally lead to increased soil erosion and sediment fluxes, through the indirect effect of increased overland flow and peak flood discharges. Mathematical modelling tools can be helpful for predicting the spatial distribution of erosion and the morphological changes on the channel network. This is especially useful for predicting the impacts of land-use changes in parts of the watershed, namely due to urbanization. However, given the size of the computational domain (normally the watershed itself), the need for high spatial resolution data to model sediment transport processes accurately, and the possible need to model transcritical flows, the computational cost is high and requires high-performance computing techniques. The aim of this work is to present the latest developments of the hydrodynamic and morphological model STAV2D and its applicability to predicting runoff and erosion at watershed scale. STAV2D was developed at CEris - Instituto Superior Técnico, Universidade de Lisboa - as a tool particularly appropriate for modelling strong transient flows in complex and dynamic geometries. It is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV2D features solid transport in both Eulerian and Lagrangian forms, with the aim of describing the transport of fine natural sediments and of large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). STAV-2D now supports fully distributed and heterogeneous simulations where multiple different hardware devices can be used to accelerate computation time within a unified Object-Oriented approach: the source code for CPU and GPU has the same compilation units and requires no device-specific branches, like

  3. Modeling of lower hybrid current drive (LHCD) and parametric instability (PI) for high performance internal transport barriers (ITBs)

    Cesario, R.; Cardinali, A.; Castaldo, C.; Paoletti, F.; Challis, C.; Mailloux, J.; Mazon, D.

    2003-01-01

ITBs (internal transport barriers) with high performance in time duration (4 seconds) were produced at JET in plasma discharges operating at a plasma current of 2.4 MA and a toroidal magnetic field of 3.45 T, using lower hybrid (LH) radiofrequency power (2.3 MW) for heating and current drive. The first results of modeling devoted to calculating the LH power deposition and current density profiles for ITB plasmas are presented. The LH power density profile was first calculated considering the nominal LH power n∥ spectrum launched by the antenna; a substantially central deposition is obtained, and many passes (> 10) are necessary for a significant fraction of the coupled LH power to be absorbed. In a second step, some broadening (20%) of the launched n∥ power spectrum was considered to simulate the effect of non-linear wave scattering. Most of the LH power is then deposited at the first pass, mainly in the outer half of the plasma. The simulation gives a moderate amount (60%) of non-inductive current, including a 30% LHCD fraction. The q-profiles from polarimetry and from MSE (motional Stark effect) at the beginning of and during the main heating phase were analysed. Non-linear plasma edge phenomena allow propagation of some LH power with large n∥. Such an effect should be retained for realistic LHCD modeling of ITB plasmas. The consequent enhanced off-axis LHCD is consistent with the observed large ITBs and the obtained large region with low magnetic shear. The LH power might provide a powerful tool for controlling the q-profile for ITBs at high plasma current, with potential application to advanced tokamak regimes

  4. Modeling of lower hybrid current drive (LHCD) and parametric instability (PI) for high performance internal transport barriers (ITBs)

    Cesario, R.; Cardinali, A.; Castaldo, C. [Associazione Euratom-ENEA sulla Fusione, Centro Ricerche Frascadi (Italy); Paoletti, F. [PPPL Pinceton (United States); Challis, C.; Mailloux, J. [Euratom-UKAEA fusion association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Mazon, D. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France)

    2003-07-01

ITBs (internal transport barriers) with high performance in time duration (4 seconds) were produced at JET in plasma discharges operating at a plasma current of 2.4 MA and a toroidal magnetic field of 3.45 T, using lower hybrid (LH) radiofrequency power (2.3 MW) for heating and current drive. The first results of modeling devoted to calculating the LH power deposition and current density profiles for ITB plasmas are presented. The LH power density profile was first calculated considering the nominal LH power n∥ spectrum launched by the antenna; a substantially central deposition is obtained, and many passes (> 10) are necessary for a significant fraction of the coupled LH power to be absorbed. In a second step, some broadening (20%) of the launched n∥ power spectrum was considered to simulate the effect of non-linear wave scattering. Most of the LH power is then deposited at the first pass, mainly in the outer half of the plasma. The simulation gives a moderate amount (60%) of non-inductive current, including a 30% LHCD fraction. The q-profiles from polarimetry and from MSE (motional Stark effect) at the beginning of and during the main heating phase were analysed. Non-linear plasma edge phenomena allow propagation of some LH power with large n∥. Such an effect should be retained for realistic LHCD modeling of ITB plasmas. The consequent enhanced off-axis LHCD is consistent with the observed large ITBs and the obtained large region with low magnetic shear. The LH power might provide a powerful tool for controlling the q-profile for ITBs at high plasma current, with potential application to advanced tokamak regimes.

  5. Modeling and performance of the MHTGR [Modular High-Temperature Gas-Cooled Reactor] reactor cavity cooling system

    Conklin, J.C.

    1990-04-01

The Reactor Cavity Cooling System (RCCS) of the Modular High-Temperature Gas-Cooled Reactor (MHTGR) proposed by the U.S. Department of Energy is designed to remove the nuclear afterheat passively in the event that neither the heat transport system nor the shutdown cooling circulator subsystem is available. A dynamic computer simulation for the physical and mathematical modeling of an RCCS is described here. Two conclusions can be made from computations performed under the assumption of a uniform reactor vessel temperature. First, the heat transferred across the annulus from the reactor vessel and then to ambient conditions is very dependent on the surface emissivities of the reactor vessel and RCCS panels. These emissivities should be periodically checked to ensure the safety function of the RCCS. Second, the heat transfer from the reactor vessel is reduced by a maximum of 10% by the presence of steam at 1 atm in the reactor cavity annulus for an assumed constant reactor vessel temperature; such a reduction in the transmission of radiant energy across the annulus can be expected to result in an increase in the reactor vessel temperature for the MHTGR. Further investigation of participating radiation media, including small particles, in the reactor cavity annulus is warranted. 26 refs., 7 figs., 1 tab

6. High performance W-AlN cermet solar coatings designed by modelling calculations and deposited by DC magnetron sputtering

    Qi-Chu Zhang [The University of Sydney (Australia). School of Physics; Shen, Y.G. [City University of Hong Kong (Hong Kong). Department of Manufacturing Engineering and Engineering Management

    2004-01-25

High solar performance W-AlN cermet solar coatings were designed using a numerical computer model and deposited experimentally. In the numerical calculations aluminium oxynitride (AlON) was used as the ceramic component. The dielectric function and complex refractive index of W-AlON cermet materials were calculated using Sheng's approximation. The layer thickness and W metal volume fraction were optimised to achieve maximum photo-thermal conversion efficiency for W-AlON cermet solar coatings on an Al reflector with a surface AlON ceramic anti-reflection layer. Optimisation calculations show that the W-AlON cermet solar coatings with two and three cermet layers have nearly identical solar absorptance, emittance and photo-thermal conversion efficiency that are much better than those for films with one cermet layer. The optimised calculated AlON/W-AlON/Al solar coating film with two cermet layers has a high solar absorptance of 0.953 and a low hemispherical emittance of 0.051 at 80 °C for a concentration factor of 2. The AlN/W-AlN/Al solar selective coatings with two cermet layers were deposited using two-metal-target direct current magnetron sputtering technology. During the deposition of the W-AlN cermet layer, both Al and W targets were run simultaneously in a gas mixture of argon and nitrogen. By substrate rotation a multi-sub-layer system consisting of alternating AlN ceramic and W metallic sub-layers was deposited that can be considered as a macro-homogeneous W-AlN cermet layer. A solar absorptance of 0.955 and a nearly normal emittance of 0.056 at 80 °C have been achieved for deposited W-AlN cermet solar coatings. (author)

  7. High performance W-AlN cermet solar coatings designed by modelling calculations and deposited by DC magnetron sputtering

    Zhang, Qi-Chu [School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Shen, Y.G. [Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong (Hong Kong)

    2004-01-25

High solar performance W-AlN cermet solar coatings were designed using a numerical computer model and deposited experimentally. In the numerical calculations aluminium oxynitride (AlON) was used as the ceramic component. The dielectric function and complex refractive index of W-AlON cermet materials were calculated using Sheng's approximation. The layer thickness and W metal volume fraction were optimised to achieve maximum photo-thermal conversion efficiency for W-AlON cermet solar coatings on an Al reflector with a surface AlON ceramic anti-reflection layer. Optimisation calculations show that the W-AlON cermet solar coatings with two and three cermet layers have nearly identical solar absorptance, emittance and photo-thermal conversion efficiency that are much better than those for films with one cermet layer. The optimised calculated AlON/W-AlON/Al solar coating film with two cermet layers has a high solar absorptance of 0.953 and a low hemispherical emittance of 0.051 at 80 °C for a concentration factor of 2. The AlN/W-AlN/Al solar selective coatings with two cermet layers were deposited using two-metal-target direct current magnetron sputtering technology. During the deposition of the W-AlN cermet layer, both Al and W targets were run simultaneously in a gas mixture of argon and nitrogen. By substrate rotation a multi-sub-layer system consisting of alternating AlN ceramic and W metallic sub-layers was deposited that can be considered as a macro-homogeneous W-AlN cermet layer. A solar absorptance of 0.955 and a nearly normal emittance of 0.056 at 80 °C have been achieved for deposited W-AlN cermet solar coatings.

8. Geometrical Model of Solar Radiation Pressure Based on High-Performing Galileo Clocks - First Geometrical Mapping of the Yarkovsky Effect

    Svehla, Drazen; Rothacher, Markus; Hugentobler, Urs; Steigenberger, Peter; Ziebart, Marek

    2014-05-01

Solar radiation pressure is the main source of errors in the precise orbit determination of GNSS satellites. All deficiencies in the modeling of Solar radiation pressure map into estimated terrestrial reference frame parameters as well as into derived gravity field coefficients and altimetry results when LEO orbits are determined using GPS. Here we introduce a new approach to geometrically map radial orbit perturbations of GNSS satellites using the highly-performing clocks on board the first Galileo satellites. Only a linear model (time bias and time drift) needs to be removed from the estimated clock parameters, and the remaining clock residuals map all radial orbit perturbations along the orbit. With independent SLR measurements, we show that a Galileo clock is stable enough to map radial orbit perturbations continuously along the orbit, with a negative sign in comparison to the SLR residuals. Agreement between the SLR residuals and the clock residuals is at the 1 cm RMS for an orbit arc of 24 h. Looking at the clock parameters determined along one orbit revolution over a period of one year, we show that the so-called SLR bias in Galileo and GPS orbits can be explained by the translation of the determined orbit in the orbital plane towards the Sun. This orbit translation is due to thermal re-radiation and to not accounting for the Sun elevation in the parameterization of the estimated Solar radiation pressure parameters. SLR ranging to GNSS satellites takes place typically at night, e.g. between 6 pm and 6 am local time, when the Sun is in opposition to the satellite. Therefore, SLR observes only one part of the GNSS orbit, with a negative radial orbit error that is mapped as an artificial bias in SLR observables. The Galileo clocks clearly show orbit translation for all Sun elevations: the radial orbit error is positive when the Sun is in conjunction (orbit noon) and negative when the Sun is in opposition (orbit midnight). The magnitude of this artificial negative SLR bias

  9. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  10. Static properties and impact resistance of a green Ultra-High Performance Hybrid Fibre Reinforced Concrete (UHPHFRC) : experiments and modeling

    Yu, R.; Spiesz, P.R.; Brouwers, H.J.H.

    2014-01-01

    This paper addresses the static properties and impact resistance of a "green" Ultra-High Performance Hybrid Fibre Reinforced Concrete (UHPHFRC). The design of concrete mixtures aims to achieve a densely compacted cementitious matrix, employing the modified Andreasen & Andersen particle packing

  11. Using clinical data to predict high-cost performance coding issues associated with pressure ulcers: a multilevel cohort model.

    Padula, William V; Gibbons, Robert D; Pronovost, Peter J; Hedeker, Donald; Mishra, Manish K; Makic, Mary Beth F; Bridges, John Fp; Wald, Heidi L; Valuck, Robert J; Ginensky, Adam J; Ursitti, Anthony; Venable, Laura Ruth; Epstein, Ziv; Meltzer, David O

    2017-04-01

Hospital-acquired pressure ulcers (HAPUs) have a mortality rate of 11.6%, are costly to treat, and result in Medicare reimbursement penalties. Medicare codes HAPUs according to the Agency for Healthcare Research and Quality Patient-Safety Indicator 3 (PSI-03), but they are sometimes inappropriately coded. The objective is to use electronic health records to predict pressure ulcers and to identify coding issues leading to penalties. We evaluated all hospitalized patient electronic medical records at an academic medical center data repository between 2011 and 2014. These data contained patient encounter level demographic variables, diagnoses, prescription drugs, and provider orders. HAPUs were defined by PSI-03: stage III, IV, or unstageable pressure ulcers not present on admission as a secondary diagnosis, excluding cases of paralysis. Random forests reduced data dimensionality. Multilevel logistic regression of patient encounters evaluated associations between covariates and HAPU incidence. The approach produced a sample population of 21 153 patients with 1549 PSI-03 cases. The greatest odds ratio (OR) of HAPU incidence was among patients diagnosed with spinal cord injury (ICD-9 907.2: OR = 14.3; P < .001), yet such cases were sometimes coded without paralysis, leading to a PSI-03 flag. Other high ORs included bed confinement (ICD-9 V49.84: OR = 3.1, P < .001), also coded without paralysis, leading to PSI-03 flags. The resulting statistical model can be tested to predict HAPUs during hospitalization. Inappropriate coding of conditions leads to poor hospital performance measures and Medicare reimbursement penalties.
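A hedged sketch of the two-stage analysis this record describes: a random forest to reduce dimensionality via feature importances, followed by a logistic regression whose exponentiated coefficients are odds ratios. Plain logistic regression stands in here for the paper's multilevel model, and all data, names, and thresholds are synthetic.

```python
# Stage 1: random forest for dimensionality reduction; Stage 2: logistic
# regression on the retained covariates, reported as odds ratios.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 2000, 200
X = rng.binomial(1, 0.1, size=(n, p))                # diagnoses/drugs/orders
y = rng.binomial(1, 1 / (1 + np.exp(3 - 2 * X[:, 0] - X[:, 1])))  # HAPU flag

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-20:]      # top 20 covariates

lr = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
odds_ratios = np.exp(lr.coef_[0])                     # e.g. OR > 1 raises risk
print(odds_ratios.round(2))
```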

  12. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop in polymeric extracellular substances, generally located at fluid-solid interfaces such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods allowing the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). However, recent experimental advances have provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside the porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we solve the velocity field of fluids through pores on complex geometries described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. The cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on the fluid transport properties of porous media [5]. Average permeabilities of the sample are obtained from the velocities using MPI-based high performance computing on up to 1000 processors. Steady-state Stokes equations are solved using a finite volume approach. Relaxation preconditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks; acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be
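The excerpt does not specify the relaxation preconditioner used; as a stand-in, the sketch below applies a Jacobi (diagonal relaxation) preconditioner to conjugate-gradient solves of a discrete 1D Laplacian, a toy proxy for the paper's Stokes system.

```python
# Jacobi (diagonal) relaxation preconditioning for conjugate gradients,
# illustrated on a 1D Laplacian rather than the full Stokes operator.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 10000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # 1D Laplacian
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)  # Jacobi relaxation

x, info = cg(A, b, M=M, atol=1e-8)
print("converged" if info == 0 else f"cg returned {info}")
```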

  13. High Performance Reduced Order Models for Wind Turbines with Full-Scale Converters Applied on Grid Interconnection Studies

    Pereira, Heverton A.; F. Cupertino, Allan; Teodorescu, Remus

    2014-01-01

Wind power has undergone substantial technological evolution, and Grid Code (GC) requirements have forced consolidation in the wind industry over the last three decades. However, more studies are necessary to understand how the dynamics inherent in this energy source interact with the power system. Traditional energy… of Absolute Error (NIAE). Models are analyzed during wind speed variations and a balanced voltage dip. During faults, WPPs must be able to supply reactive power to the grid, and this characteristic is analyzed. Using the proposed performance index, it is possible to conclude whether a reduced order model is suitable…
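The excerpt truncates the definition of the performance index, so the sketch below assumes a common normalized-integral-of-absolute-error form, with values close to 1 meaning the reduced-order model tracks the detailed one; the signals are synthetic stand-ins for a wind power plant response.

```python
# Assumed NIAE-style index: 1 minus the integral of |error| normalized by
# the integral of |reference|. The exact formula in the paper may differ.
import numpy as np

def _abs_integral(y, t):
    """Trapezoid-rule integral of |y(t)|."""
    y = np.abs(np.asarray(y, dtype=float))
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def niae(y_detailed, y_reduced, t):
    err = _abs_integral(np.asarray(y_detailed) - np.asarray(y_reduced), t)
    ref = _abs_integral(y_detailed, t)
    return 1.0 - err / ref

t = np.linspace(0, 5, 500)                        # seconds, e.g. around a dip
y_full = np.exp(-t) * np.cos(3 * t) + 1.0         # detailed WPP response (toy)
y_rom = np.exp(-1.1 * t) * np.cos(3 * t) + 1.0    # reduced-order response (toy)
print(f"NIAE = {niae(y_full, y_rom, t):.3f}")     # closer to 1 = better match
```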

  14. Experimental Evaluation for the Microvibration Performance of a Segmented PC Method Based High Technology Industrial Facility Using 1/2 Scale Test Models

    Sijun Kim

    2017-01-01

The precast concrete (PC) method used in the construction of high technology industrial facilities is limited when applied to those with greater span lengths, due to the transport length restriction (maximum length of 15~16 m in Korea) set by traffic laws. In order to resolve this, this study introduces a structural system with a segmented PC system, and a 1/2 scale model with a width of 9000 mm (hereafter Segmented Model) is manufactured to evaluate vibration performance. Since a real vibrational environment cannot be reproduced for vibration testing using a scale model, a comparative analysis of relative performances is conducted in this study. For this purpose, a 1/2 scale model with a width of 7200 mm (hereafter Nonsegmented Model) of a high technology industrial facility is additionally prepared using the conventional PC method. By applying the same experimental method to both scale models and comparing the results, the relative vibration performance of the Segmented Model is observed. Through impact testing, the natural frequencies of the two scale models are compared. Also, in order to analyze the estimated response induced by the equipment, the vibration responses due to the exciter are compared. The experimental results show that the Segmented Model exhibits similar or superior performance when compared to the Nonsegmented Model.

  15. RavenDB high performance

    Ritchie, Brian

    2013-01-01

RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to… This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  16. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  17. Performance of advanced self-shielding models in DRAGON Version4 on analysis of a high conversion light water reactor lattice

    Karthikeyan, Ramamoorthy; Hebert, Alain

    2008-01-01

A high conversion light water reactor lattice has been analysed using the code DRAGON Version4. This analysis was performed to test the performance of the advanced self-shielding models incorporated in DRAGON Version4. The self-shielding models are broadly classified into two groups: 'equivalence in dilution' and 'subgroup approach'. Under the 'equivalence in dilution' approach we have analysed the generalized Stamm'ler model, with and without the Nordheim model and Riemann integration. These models were also analysed using the Livolant-Jeanpierre normalization. Under the 'subgroup approach', we have analysed the statistical self-shielding model based on physical probability tables and the Ribon extended self-shielding model based on mathematical probability tables. This analysis will help in understanding the performance of advanced self-shielding models for a lattice that is tight and has a large fraction of fissions occurring in the resonance region. The nuclear data for the analysis were generated in-house. NJOY99.90 was used for generating libraries in DRAGLIB format for analysis using DRAGON and A Compact ENDF libraries for analysis using MCNP5. The evaluated data files were chosen based on the recommendations of the IAEA Co-ordinated Research Project on the WIMS Library Update Project. The reference solution for the problem was obtained using the Monte Carlo code MCNP5. It was found that the Ribon extended self-shielding model based on mathematical probability tables using the correlation model performed better than all other models

  18. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly-nonlinear, high-dimensional problem, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Arrays toolkit, with the Beowulf-cluster inspired parallel nonlinear parameter estimation software BeoPEST, in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-process data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant, and new nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve
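A minimal illustration of the inverse-modeling loop that PEST/BeoPEST automates for a simulator: adjust model parameters until simulated observations match measurements. The forward model below is a toy exponential drainage curve, not eSTOMP, and the parameter names are invented.

```python
# Toy inverse problem: recover two parameters of a forward model from
# noisy observations by nonlinear least squares (the core of what a
# PEST-style parameter estimator does, minus the parallel bookkeeping).
import numpy as np
from scipy.optimize import least_squares

t_obs = np.linspace(0, 10, 30)                       # observation times (days)
true_theta = (0.35, 0.4)                             # porosity-like, rate-like

def forward(theta, t):
    return theta[0] * np.exp(-theta[1] * t)          # stand-in for the simulator

rng = np.random.default_rng(3)
y_obs = forward(true_theta, t_obs) + rng.normal(0, 0.005, t_obs.size)

def residuals(theta):
    return forward(theta, t_obs) - y_obs             # misfit being minimized

fit = least_squares(residuals, x0=[0.2, 0.1], bounds=([0, 0], [1, 5]))
print("estimated parameters:", fit.x)
```

In a BeoPEST-style setup the expensive `forward` evaluations would be farmed out to worker nodes, which is what makes the inversion scalable.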

  19. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize the impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  20. Development and Performance of the Modularized, High-performance Computing and Hybrid-architecture Capable GEOS-Chem Chemical Transport Model

    Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.

    2014-12-01

    The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.
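
    The scaling argument in the last sentence can be illustrated with assumed (not GEOS-Chem-measured) cost numbers: if the chemistry operator scales nearly ideally while the rest of the model retains a serial fraction, chemistry's share of total runtime shrinks as processes are added.

        # Relative cost of a perfectly scaling operator vs. a partly serial rest.
        def relative_chem_cost(n_procs, chem=100.0, rest=100.0, serial_frac=0.2):
            chem_t = chem / n_procs                                 # scales ideally
            rest_t = rest * (serial_frac + (1.0 - serial_frac) / n_procs)
            return chem_t / (chem_t + rest_t)

        for n in (1, 8, 64, 512):
            print(n, round(relative_chem_cost(n), 3))   # 0.5, 0.294, 0.068, 0.01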

  1. A 2 Tesla Full Scale High Performance Periodic Permanent Magnet Model for Attractive (228 kN) and Repulsive Maglev

    Stekly, Z. J. J.; Gardner, C.; Domigan, P.; Baker, J.; Hass, M.; McDonald, C.; Wu, C.; Farrell, R. A.

    1996-01-01

    Two 214.5 cm long high performance periodic (26 cm period) permanent magnet half-assemblies were designed and constructed for use as a wiggler by Field Effects, a division of Intermagnetics General Corporation, using Nd-Fe-B and vanadium permendur as the hard and soft magnetic materials. Placing these assemblies in a supporting structure with a 2.1 cm pole-to-pole separation resulted in a periodic field with a maximum value of 2.04 T. This is believed to be the highest field ever achieved by this type of device. The attractive force between the two 602 kg magnet assemblies is 228 kN, providing enough force for suspension of a 45,500 kg vehicle. If used in an attractive maglev system with an appropriate flat iron rail, one assembly will generate the same force with a gap of 1.05 cm, leading to a lift-to-weight ratio of 38.6, not including the vehicle attachment structure. This permanent magnet compares well with superconducting systems, which have lift-to-weight ratios in the range of 5 to 10. This paper describes the magnet assemblies and their measured magnetic performance. The measured magnetic field and resulting attractive magnetic force have a negative spring characteristic, so appropriate control coils are necessary to provide stable operation. The estimated performance of the assemblies in a stable repulsive mode, with eddy currents in a conducting guideway, is also discussed.
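
    The quoted lift-to-weight ratio is easy to verify (assuming standard gravity g = 9.81 m/s^2, which the record does not state explicitly):

        # 228 kN of attractive force carried by one 602 kg half-assembly.
        g = 9.81                      # m/s^2, assumed
        print(228e3 / (602.0 * g))    # ~38.6, matching the ratio in the record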

  2. Predictive performance models and multiple task performance

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  3. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)]

    2012-01-15

    • Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet • Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials. - Development of the manufacturing technology for the dual-cooled fuel cladding tubes. • Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology • Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core • Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  4. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)]

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; the growth of carbon nanotubes and semiconducting nanowires and studies of their field emission properties; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  5. Physical modeling and high-performance GPU computing for characterization, interception, and disruption of hazardous near-Earth objects

    Kaplinger, Brian Douglas

    For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
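
    For orientation, the core of any SPH code is a smoothing kernel plus a particle-sum density estimate; below is a minimal sketch using the standard cubic-spline (Monaghan) kernel with 3-D normalization. This is generic SPH, not code from the thesis, and omits the GPU parallelization entirely.

        import numpy as np

        def w_cubic_spline(r, h):
            """Cubic-spline kernel W(r, h): r = separation, h = smoothing length."""
            q = np.asarray(r, dtype=float) / h
            sigma = 1.0 / (np.pi * h**3)               # 3-D normalization constant
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        # Density estimate at particle i: rho_i = sum_j m_j * W(|x_i - x_j|, h)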

  6. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  7. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington, the tuff at Yucca Mountain near the Nevada Test Site, and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters, the models/methodologies employed to determine the parameters, and the input data base for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  9. Principles of Sonar Performance Modeling

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  10. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  11. Bioconversion of red ginseng saponins in the gastro-intestinal tract in vitro model studied by high-performance liquid chromatography-high resolution Fourier transform ion cyclotron resonance mass spectrometry

    Kong, H.; Wang, M.; Venema, K.; Maathuis, A.; Heijden, R. van der; Greef, J. van der; Xu, G.; Hankemeier, T.

    2009-01-01

    A high-performance liquid chromatography-high resolution Fourier transform ion cyclotron resonance mass spectrometry (HPLC-FTICR-MS) method was developed to investigate the metabolism of ginsenosides in in vitro models of the gastro-intestinal tract. The metabolites were identified by

  12. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from the DF and HPC perspectives.
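
    As a much-simplified sketch of parallel statistical outlier flagging over event data (the paper's information-based detectors and its iterative z algorithm are more elaborate; everything below, including the data, is invented for illustration):

        from multiprocessing import Pool
        import numpy as np

        def flag_outliers(chunk, mean, std, k=3.0):
            # Flag events whose feature deviates more than k standard deviations.
            return [x for x in chunk if abs(x - mean) > k * std]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            events = rng.normal(10.0, 2.0, 100_000)
            events[::10_000] += 25.0                    # inject a few anomalies
            mean, std = events.mean(), events.std()
            chunks = np.array_split(events, 8)          # one chunk per worker
            with Pool(8) as pool:
                hits = pool.starmap(flag_outliers,
                                    [(c, mean, std) for c in chunks])
            print(sum(len(h) for h in hits), "suspicious events flagged")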

  13. Progression of performance assessment modeling for the Yucca Mountain disposal system for spent nuclear fuel and high-level radioactive waste

    Rechard, Rob P.; Wilson, Michael L.; Sevougian, S. David

    2014-01-01

    This paper summarizes the evolution of consequence modeling for a repository for spent nuclear fuel and high-level radioactive waste at Yucca Mountain in southern Nevada. The discussion includes four early performance assessments (PAs) conducted between 1982 and 1995 to support selection and to evaluate feasibility and three major PAs conducted between 1998 and 2008 to evaluate viability, recommend the site, and assess compliance. Modeling efforts in 1982 estimated dose to individuals 18 km from the site caused by volcanic eruption through the repository. Modeling in 1984 estimated releases via the groundwater pathway because of container corrosion. In combination, this early analysis supported the first environmental assessment. Analysts in 1991 evaluated cumulative release, as specified in the 1985 US radiation protection standards, via the groundwater pathway over 10^4 yr at a 5-km boundary by modeling waste degradation and flow/transport in the saturated and unsaturated zones. By 1992, however, the US Congress mandated a change to a dose measure. Thus, the 1993 and 1995 performance assessments improved modeling of waste container degradation to provide better estimates of radionuclide release rates out to 10^6 yr. The 1998 viability assessment was a major step in modeling complexity. Dose at a 20-km boundary from the repository was evaluated through 10^6 yr for undisturbed conditions using more elaborate modeling of flow and the addition of modules for modeling infiltration, drift seepage, the chemical environment, and biosphere transport. The 2000 assessment for the site recommendation refined the analysis. Seepage modeling was greatly improved and waste form degradation modeling included more chemical dependence. The 2008 compliance assessment for the license application incorporated the influence of the seismicity on waste package performance to evaluate dose at an ∼18-km boundary. - Highlights: • Evolution of the consequence models to simulate physical

  14. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  16. High performance soft magnetic materials

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  17. Characterising performance of environmental models

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  18. Administrator Leadership Styles and Their Impact on School Nursing Part II. A High-Performance School Nurse-Building Administrator Relationship Model.

    Davis, Charles R; Lynch, Erik J

    2018-06-01

    There is a significant disparity in roles, responsibilities, education, training, and expertise between the school nurse and building administrator. Because of this disparity, a natural chasm must be bridged to optimize student health, safety, well-being, and achievement in the classroom while meeting the individual needs of both professionals. This article constructs and presents a new school nurse-building administrator relationship model, the foundation of which is formed from the pioneering and seminal work on high-performance professional relationships and outcomes of Lewin and Drucker. The authors posit that this new model provides the framework for successful school nurse-building administrator interactions that will lead to optimal student outcomes.

  19. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations, and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  20. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States)]; Conway, Steve [IDC Research Inc., Framingham, MA (United States)]; Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)]

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.

  1. Multiprocessor performance modeling with ADAS

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.
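
    A toy flavor of this kind of graph-based performance prediction (the node costs and the earliest-finish recursion are invented; ATAMM's actual functional rules also model resource limits and token flow):

        from functools import lru_cache

        edges = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # predecessors
        cost = {"A": 2.0, "B": 3.0, "C": 5.0, "D": 1.0}             # compute times

        @lru_cache(maxsize=None)
        def finish(node):
            # Earliest finish = latest predecessor finish + own compute time.
            start = max((finish(p) for p in edges[node]), default=0.0)
            return start + cost[node]

        print("critical-path latency:", finish("D"))   # 2 + 5 + 1 = 8.0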

  2. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This allows quantification of the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full-band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that allows gaining deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore quantitatively how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
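
    A hedged sketch of the two ingredients named in the abstract — an elliptic imaginary-band dispersion inside the gap and a WKB-style tunneling integral — using placeholder parameter values and a uniform-field potential drop rather than the paper's calibrated 1D Poisson profile:

        import numpy as np

        E_G = 0.75          # bandgap (eV), illustrative
        KAPPA_MAX = 5.0e9   # peak imaginary wavevector at midgap (1/m), illustrative

        def kappa(e_ev):
            """Elliptic complex-band dispersion: zero at band edges, max at midgap."""
            x = np.clip(e_ev / E_G, 0.0, 1.0)
            return 2.0 * KAPPA_MAX * np.sqrt(x * (1.0 - x))

        def wkb_transmission(field_v_per_m):
            # Under a uniform field F the local gap energy falls linearly to zero
            # across a tunneling distance E_G / F; T ~ exp(-2 * integral kappa dx).
            width = E_G / field_v_per_m
            xs = np.linspace(0.0, width, 2001)
            barrier = E_G * (1.0 - xs / width)
            return np.exp(-2.0 * np.trapz(kappa(barrier), xs))

        print(wkb_transmission(1e8))   # F = 1 MV/cm: vanishingly small T
        print(wkb_transmission(5e8))   # F = 5 MV/cm: T rises steeply with field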

  3. Dynamic performance of a high-temperature PEM (proton exchange membrane) fuel cell – Modelling and fuzzy control of purging process

    Zhang, Caizhi; Liu, Zhitao; Zhang, Xiongwen; Chan, Siew Hwa; Wang, Youyi

    2016-01-01

    An HT-PEMFC (high-temperature proton exchange membrane fuel cell) normally operates under dead-end mode; to improve fuel utilization, properly timed periodic purging is necessary to flush the accumulated water vapour out of the anode flow-field, otherwise the performance of the HT-PEMFC would drop gradually. In this paper, a semi-empirical dynamic voltage model of the HT-PEMFC is developed for controller design purposes by fitting experimental data, and it is validated against experimental results. Then, a fuzzy controller is designed to schedule the purging based on the obtained model. According to the results, the developed model reflects the transient characteristics of the HT-PEMFC voltage well, and the fuzzy controller offers good purge-scheduling performance under uncertain load demands. - Highlights: • A semi-empirical dynamic voltage model of HT-PEMFC is developed for control design. • The model is developed via fitting and validated with experimental results. • A fuzzy controller is designed to schedule the purging based on the obtained model.
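
    A minimal sketch of fuzzy purge scheduling of the kind described (the membership shapes, rule base, and thresholds below are invented for illustration, not taken from the paper):

        def tri(x, a, b, c):
            """Triangular membership function with support (a, c) and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def purge_urgency(dv_mv, t_s):
            drop_large = tri(dv_mv, 10.0, 40.0, 80.0)   # voltage drop is "large"
            time_long = tri(t_s, 30.0, 120.0, 300.0)    # dead-end interval is "long"
            # Max-min inference: purge if the drop is large OR the interval is long.
            return max(drop_large, time_long)

        def should_purge(dv_mv, t_s, threshold=0.5):
            return purge_urgency(dv_mv, t_s) >= threshold

        print(should_purge(45.0, 60.0))   # True: the voltage sag dominates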

  4. Multi-physics corrosion modeling for sustainability assessment of steel reinforced high performance fiber reinforced cementitious composites

    Lepech, M.; Michel, Alexander; Geiker, Mette

    2016-01-01

    Using a newly developed multi-physics transport, corrosion, and cracking model, which treats these phenomena as coupled physiochemical processes, the role of HPFRCC crack control and formation in regulating steel reinforcement corrosion is investigated. This model describes the transport of water and chemical species, the electric potential distribution in the HPFRCC, the electrochemical propagation of steel corrosion, and the role of microcracks in the HPFRCC material. Numerical results show that the reduction in anode and cathode size on the reinforcing steel surface, due to multiple crack formation and widespread depassivation, is the mechanism behind experimental results of HPFRCC steel corrosion studies found in the literature. Such results provide an indication of the fundamental mechanisms by which steel reinforced HPFRCC materials may be more durable than traditional reinforced concrete and other...

  5. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy to use carbonaceous reactant species and operate at intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation for conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model including current density, fuel and oxidant reactant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.
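
    A quick illustration, with assumed per-cell voltages rather than the paper's data, of why the roundtrip electrical efficiency of such a cell at matched charge throughput is roughly the ratio of discharge (fuel cell) to charge (electrolysis) voltage, since energy = voltage x charge in each mode:

        v_fuel_cell = 0.85      # V per cell while discharging (assumed)
        v_electrolysis = 1.05   # V per cell while charging (assumed)
        print(f"roundtrip efficiency ~ {v_fuel_cell / v_electrolysis:.1%}")  # ~81%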

  6. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    Development of a superior next-generation light water reactor is requested from general viewpoints such as improvement of safety and economics, reduction of radioactive waste, and effective utilization of plutonium, by around 2030, when conventional reactor plants should be renovated. At the Japan Atomic Energy Research Institute, work is carried out on improving stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, on developing manufacturing technology for the reduced moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and on researching water-material interactions in the supercritical-pressure water cooled reactor. A stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR); austenitic stainless steel is superior in irradiation resistance, corrosion resistance, and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the reduced moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide irradiation resistance, corrosion resistance, and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain data on the irradiation damage of the cladding materials. (M. Suetake)

  7. Evaluation of conventional and high-performance routine solar radiation measurements for improved solar resource, climatological trends, and radiative modeling

    Gueymard, Christian A. [Solar Consulting Services, P.O. Box 392, Colebrook, NH 03576 (United States)]; Myers, Daryl R. [National Renewable Energy Laboratory, 1617 Cole Blvd., Golden, CO 80401-3305 (United States)]

    2009-02-15

    The solar renewable energy community depends on radiometric measurements and instrumentation for data to design and monitor solar energy systems, and develop and validate solar radiation models. This contribution evaluates the impact of instrument uncertainties contributing to data inaccuracies and their effect on short-term and long-term measurement series, and on radiation model validation studies. For the latter part, transposition (horizontal-to-tilt) models are used as an example. Confirming previous studies, it is found that a widely used pyranometer strongly underestimates diffuse and global radiation, particularly in winter, unless appropriate corrective measures are taken. Other types of measurement problems are also discussed, such as those involved in the indirect determination of direct or diffuse irradiance, and in shadowband correction methods. The sensitivity of the predictions from transposition models to inaccuracies in input radiation data is demonstrated. Caution is therefore issued to the whole community regarding drawing detailed conclusions about solar radiation data without due attention to the data quality issues only recently identified. (author)

  8. High-performance work systems in health care management, part 1: development of an evidence-informed model.

    Garman, Andrew N; McAlearney, Ann Scheck; Harrison, Michael I; Song, Paula H; McHugh, Megan

    2011-01-01

    Although management practices are recognized as important factors in improving health care quality and efficiency, most research thus far has focused on individual practices, ignoring or underspecifying the contexts within which these practices are operating. Research from other industries, which has increasingly focused on systems rather than individual practices, has yielded results that may benefit health services management. Our goal was to develop a conceptual model, on the basis of prior research from health care as well as other industries, that could be used to inform important contextual considerations within health care. Using theoretical frameworks from A. Donabedian (1966); P. M. Wright, T. M. Gardner, and L. M. Moynihan (2003); and B. Schneider, D. B. Smith, and H. W. Goldstein (2000), and review methods adapted from R. Pawson (2006b), we reviewed relevant research from peer-reviewed and other industry-relevant sources to inform our model. The model we developed was then reviewed with a panel of practitioners, including experts in quality and human resource management, to assess the applicability of the model to health care settings. The resulting conceptual model identified four practice bundles, comprising 14 management practices, as well as nine factors influencing adoption and perceived sustainability of these practices. The mechanisms by which these practices influence care outcomes are illustrated using the example of hospital-acquired infections. In addition, limitations of the current evidence base are discussed, and an agenda for future research in health care settings is outlined. Results may help practitioners better conceptualize management practices as part of a broader system of work practices. This may, in turn, help practitioners to prioritize management improvement efforts more systematically.

  9. High Performance Human Resource Practices, Identification with Organizational Values and Goals, and Service-Oriented Organizational Citizenship Behavior: A Review of Literature and Proposed Model

    Nasurdin Aizzat Mohd.

    2015-01-01

    Increasing competition within the hospitality industry has recognized the importance of service quality as a key business differentiation strategy. Proactive involvement of employees is a vital component of the service delivery, which in turn, enhances customer satisfaction and loyalty. Hence, hospitality organizations, particularly hotels, need to encourage their employees to perform voluntary behaviors that go “beyond their call of duty”. These behaviors are referred to as service-oriented organizational citizenship behaviors (hereafter labeled as SO-OCBs. A review of the literature indicates that an organization’s human resource management (henceforth labeled as HRM practices are instrumental in establishing the tone of the employee-employer relationship, which subsequently affects employees’ display of discretionary functional service-related behaviors. Specifically, high-performance HRM practices can nurture a relational employment relationship, leading to internalization of organizational values and goals. This, in turn, would induce employees to engage in greater SO-OCBs. However, conceptual and empirical work explaining the mechanism by which high-performance HRM practices relate to SO-OCBs remains scarce. Therefore, this paper aims to construct a model linking a set of high-performance HRM practices (selective hiring, communication, appraisal, and reward and SO-OCBs. Identification with organizational values and goals is posited as a mediator in the proposed relationship. A discussion of the literature to support the proposed framework is furnished.

  10. Characterisation of current and future GNSS performance in urban canyons using a high quality 3-D urban model of Melbourne, Australia

    Gang-jun, Liu; Kefei, Zhang; Falin, Wu; Liam, Densley; Retscher, Günther

    2009-03-01

    Global Navigation Satellite System (GNSS) is a critical space-borne geospatial infrastructure providing essential positioning support to a range of location-sensitive applications. GNSS is currently dominated by the US Global Positioning System (GPS) constellation. The next generation GNSS is expected to offer more satellites, better positioning provision, and improved availability and continuity of navigation support. However, GNSS performance in 3-D urban environments is problematic because GNSS signals are either completely blocked or severely degraded by high-rising geographic features like buildings. The aim of this study is to gain an in-depth understanding of the changing spatial patterns of GNSS performance, measured by the number of visible satellites (NVS) and position dilution-of-precision (PDOP), in the urban canyons of Melbourne, Australia. The methodology used includes the following steps: (1) determination of the dynamic orbital positions of current and future GNSS satellites; (2) development of a 3-D urban model of high geometric quality for the Melbourne Central Business District (CBD); (3) evaluation of GNSS performance for every specified location in the urban canyons; and (4) visualisation and characterisation of the dynamic spatial patterns of GNSS performance in the urban canyons. As expected, the study shows that the integration of the GPS and Galileo constellations results in higher availability and stronger geometry, leading to significant improvement of GNSS performance in the urban canyons of Melbourne CBD. Some conclusions are drawn and further research currently undertaken is also outlined.
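
    For concreteness, PDOP at a receiver follows from the satellite line-of-sight geometry: build the design matrix from unit line-of-sight vectors plus a receiver-clock column, then take the position block of the normal-matrix inverse. A minimal sketch (the satellite directions are invented):

        import numpy as np

        los = np.array([                  # unit vectors receiver -> satellite (ENU)
            [ 0.3,  0.3, 0.91],
            [-0.5,  0.4, 0.77],
            [ 0.6, -0.5, 0.62],
            [-0.2, -0.7, 0.69],
            [ 0.0,  0.9, 0.44],
        ])
        los = los / np.linalg.norm(los, axis=1, keepdims=True)
        g = np.hstack([los, np.ones((len(los), 1))])   # add clock-bias column
        q = np.linalg.inv(g.T @ g)
        pdop = float(np.sqrt(np.trace(q[:3, :3])))
        print(f"NVS = {len(los)}, PDOP = {pdop:.2f}")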

  11. Modeling of the write and read back performances of hexagonal Ba-ferrite particulate media for high density tape recording

    Lee, Jehyun; Fuger, Markus; Fidler, Josef; Suess, Dieter; Schrefl, Thomas; Shimizu, Osamu

    2010-01-01

    In this study, the signal-to-noise ratio (SNR) performance of longitudinally, randomly, and perpendicularly oriented particles, based on hexagonal barium ferrite (h-BaFe) platelets with an average volume of 2400 nm^3, has been studied as a function of the recording head to media distance by numerical micromagnetic simulations. The distances from the write head to the media and from the read head to the media were varied independently. For a fixed read distance and varied write distances, the SNR decreased with larger write distance. An optimum write distance of 40 and 50 nm was found for the longitudinally oriented media and the perpendicularly oriented media, respectively. The optimum write distance for the longitudinally oriented media, 40 nm, resulted in a local minimum of the SNR for the perpendicularly oriented media. At most write distances the perpendicularly oriented media show by far the best performance, but near the write distance of 40 nm the longitudinally oriented media work as well as the perpendicularly oriented media. At a fixed write distance with various read distances, the SNR was almost constant for each medium, whereas the average signal amplitude decayed exponentially with larger read head to media distance. The best SNR was found for the perpendicularly oriented media at a write head to media distance d_write = 20 nm and a read head to media distance d_read = 40 nm. The best SNR values are 11.9 and 24.4 dB in the time domain and frequency domain, respectively.

  12. Approaches to Process Performance Modeling: A Summary from the SEI Series of Workshops on CMMI High Maturity Measurement and Analysis

    2010-01-01

  13. Unsaturated flow modeling in performance assessments for the Yucca Mountain disposal system for spent nuclear fuel and high-level radioactive waste

    Rechard, Rob P.; Birkholzer, Jens T.; Wu, Yu-Shu; Stein, Joshua S.; Houseworth, James E.

    2014-01-01

    This paper summarizes the progression of modeling efforts of infiltration, percolation, and seepage conducted between 1984 and 2008 to evaluate feasibility and viability, and to assess compliance, of a repository in the unsaturated zone for spent nuclear fuel and high-level radioactive waste at Yucca Mountain, Nevada. Scientific understanding of infiltration in a desert environment, of unsaturated percolation flux in the fractures and matrix of the volcanic tuff, and of seepage into an open drift in a thermally perturbed environment was initially lacking in 1984. As understanding of the Yucca Mountain disposal system increased through site characterization and in situ testing, modeling of infiltration, percolation, and seepage evolved from simple assumptions in a single model in 1984 to three modeling modules, each based on several detailed process models, in 2008. Uncertainty in percolation flux through Yucca Mountain was usually important in explaining the observed uncertainty in performance measures: cumulative release in assessments prior to 1995, and individual dose thereafter. - Highlights: • The progression of modeling of infiltration, percolation, and seepage is described for a geological repository at Yucca Mountain. • The progression from a 1-D single equivalent continuum to a 3-D dual-permeability model of percolation is described. • The introduction of an infiltration boundary condition in 1998, and its refinement for evaluating uncertainty for the license application, is described. • The introduction of a seepage module that included calibration to in situ measurements and separated uncertainty and variability is described.
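
    For context, the percolation models discussed above are variants of Richards' equation; a generic dual-permeability form, with one continuum each for fracture (f) and matrix (m) coupled by a transfer term (the notation is generic, not the project's actual formulation), reads

        \frac{\partial \theta_f}{\partial t} = \nabla \cdot \left[ K_f(\theta_f) \, \nabla (\psi_f + z) \right] - \Gamma_{fm}, \qquad
        \frac{\partial \theta_m}{\partial t} = \nabla \cdot \left[ K_m(\theta_m) \, \nabla (\psi_m + z) \right] + \Gamma_{fm}

    where $\theta$ is moisture content, $\psi$ pressure head, $z$ elevation, and $K(\theta)$ the unsaturated hydraulic conductivity.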

  14. Dataset demonstrating the modeling of a high performance Cu(In,Ga)Se2 absorber based thin film photovoltaic cell

    Md. Asaduzzaman

    2017-04-01

    The physical data of the semiconductor materials used in the design of a CIGS absorber based thin film photovoltaic cell have been presented in this data article. Besides, the values of the contact parameter and operating conditions of the cell have been reported. Furthermore, by conducting the simulation with data corresponding to the device structure: soda-lime glass (SLG) substrate/Mo back-contact/CIGS absorber/CdS buffer/intrinsic ZnO/Al-doped ZnO window/Al-grid front-contact, the solar cell performance parameters such as open circuit voltage (Voc), short circuit current density (Jsc), fill factor (FF), efficiency (η), and collection efficiency (ηc) have been analyzed.

  16. SENTINEL-2 Global Reference Image Validation and Application to Multitemporal Performances and High Latitude Digital Surface Model

    Gaudel, A.; Languille, F.; Delvit, J. M.; Michel, J.; Cournet, M.; Poulain, V.; Youssefi, D.

    2017-05-01

    In the frame of the Copernicus program of the European Commission, Sentinel-2 is a constellation of 2 satellites with a revisit time of 5 days, in order to have temporal image stacks and global coverage over terrestrial surfaces. Satellite 2A was launched in June 2015, and satellite 2B will be launched in March 2017. In cooperation with the European Space Agency (ESA), the French space agency (CNES) is in charge of the image quality of the project, and so ensures the CAL/VAL commissioning phase during the months following the launch. This cooperation is also extended to the routine phase, as CNES supports the European Space Research Institute (ESRIN) and the Sentinel-2 Mission Performance Centre (MPC) for validation of geometric and radiometric image quality aspects, and in the Sentinel-2 GRI geolocation performance assessment whose results will be presented in this paper. The GRI is a set of S2A images at 10 m resolution covering the whole world with good and consistent geolocation. This ground reference enables accurate multi-temporal registration of refined Sentinel-2 products. While not primarily intended for the generation of DSMs, the Sentinel-2 swath overlap between orbits would also allow for the generation of a complete DSM of land and ices over 60° of northern latitudes (expected accuracy: a few S2 pixels in altimetry). This DSM would benefit from the very frequent revisit times of Sentinel-2, to monitor ice or snow level in areas of frequent change, or to increase measurement accuracy in areas of little change.

  18. Development of integrated mechanistically-based degradation-mode models for performance assessment of high-level waste containers

    Farmer, J. C.

    1998-01-01

    A key component of the Engineered Barrier System (EBS) being designed for containment of spent fuel and high-level waste at the proposed geological repository at Yucca Mountain, Nevada, is a two-layer canister. In this particular design, the inner barrier is made of a corrosion resistant material (CRM) such as Alloy 825, 625 or C-22, while the outer barrier is made of a corrosion-allowance material (CAM) such as A516 Gr 55 or Monel 400. At the present time, Alloy C-22 and A516 Gr 55 are favored.

  19. Development of integrated mechanistically-based degradation-mode models for performance assessment of high-level waste containers

    Bedrossian, P; Estill, J; Farmer, J; Hopper, R; Horn, J; Huang, J S; McCright, D; Roy, A; Wang, F; Wilfinger, K

    1999-01-01

    A key component of the Engineered Barrier System (EBS) being designed for containment of spent fuel and high-level waste at the proposed geological repository at Yucca Mountain, Nevada, is a two-layer canister. In this particular design, the inner barrier is made of a corrosion resistant material (CRM) such as Alloy 825, 625 or C-22, while the outer barrier is made of a corrosion-allowance material (CAM) such as A516 Gr 55, a carbon steel, or Monel 400. At the present time, Alloy C-22 and A516 Gr 55 are favored.

  20. Determination of the n-octanol/water partition coefficients of weakly ionizable basic compounds by reversed-phase high-performance liquid chromatography with neutral model compounds.

    Liang, Chao; Han, Shu-ying; Qiao, Jun-qin; Lian, Hong-zhen; Ge, Xin

    2014-11-01

    A strategy to utilize neutral model compounds for lipophilicity measurement of ionizable basic compounds by reversed-phase high-performance liquid chromatography is proposed in this paper. The applicability of the novel protocol was justified by theoretical derivation. Meanwhile, linear relationships between the logarithm of the apparent n-octanol/water partition coefficient (logKow'') and the logarithm of the retention factor corresponding to a 100% aqueous mobile phase (logkw) were established for a basic training set, a neutral training set, and a mixed training set of the two. As proved in theory, the good linearity and external validation results indicated that the logKow''-logkw relationships obtained from a neutral model training set were always reliable regardless of mobile phase pH. Afterwards, the above relationships were adopted to determine the logKow of harmaline, a weakly dissociable alkaloid. As far as we know, this is the first report of experimental logKow data for harmaline (logKow = 2.28 ± 0.08). Introducing neutral compounds into a basic model training set, or using neutral model compounds alone, is recommended for measuring the lipophilicity of weakly ionizable basic compounds, especially those with high hydrophobicity, for the advantages of more suitable model compound choices and convenient mobile phase pH control.
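
    The calibration step described here is, in essence, an ordinary linear regression of logKow on logkw over the training set followed by prediction for the analyte; a sketch with invented training values:

        import numpy as np

        log_kw = np.array([0.8, 1.4, 2.1, 2.9, 3.6])    # retention (neutral set)
        log_kow = np.array([1.1, 1.9, 2.8, 3.7, 4.5])   # literature logKow values

        slope, intercept = np.polyfit(log_kw, log_kow, 1)
        print(f"logKow = {slope:.2f} * logkw + {intercept:.2f}")

        log_kw_analyte = 1.75    # hypothetical measured value for the base
        print("predicted logKow:", round(slope * log_kw_analyte + intercept, 2))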

  1. High performance computing of density matrix renormalization group method for 2-dimensional model. Parallelization strategy toward peta computing

    Yamada, Susumu; Igarashi, Ryo; Machida, Masahiko; Imamura, Toshiyuki; Okumura, Masahiko; Onishi, Hiroaki

    2010-01-01

    We parallelize the density matrix renormalization group (DMRG) method, which is a ground-state solver for one-dimensional quantum lattice systems. The parallelization allows us to extend the applicable range of the DMRG to n-leg ladders, i.e., quasi-two-dimensional cases. Such an extension is expected to bring about several breakthroughs in, e.g., quantum physics, chemistry, and nano-engineering. However, the straightforward parallelization requires all-to-all communications between all processes, which are unsuitable for multi-core systems, the mainstream of current parallel computers. Therefore, we optimize the all-to-all communications in the following two steps. The first is the elimination of the communications between all processes by rearranging the data distribution while keeping the communication data amount unchanged. The second is the avoidance of communication conflicts by rescheduling the calculation and the communication. We evaluate the performance of the DMRG method on multi-core supercomputers and confirm that our two-step tuning is quite effective. (author)

  2. Performance of complex snow cover descriptions in a distributed hydrological model system: A case study for the high Alpine terrain of the Berchtesgaden Alps.

    Warscher, M; Strasser, U; Kraller, G; Marke, T; Franz, H; Kunstmann, H

    2013-05-01

    Runoff generation in Alpine regions is typically affected by snow processes. Snow accumulation, storage, redistribution, and ablation control the availability of water. In this study, several robust parameterizations describing snow processes in Alpine environments were implemented in a fully distributed, physically based hydrological model. Snow cover development is simulated using methods of increasing complexity, from a simple temperature index approach, to an energy balance scheme, to additionally accounting for gravitational and wind-driven lateral snow redistribution. The test site for the study is the Berchtesgaden National Park (Bavarian Alps, Germany), which is characterized by extreme topography and climate conditions. The performance of the model system in reproducing snow cover dynamics and the resulting discharge generation is analyzed and validated via measurements of snow water equivalent and snow depth, satellite-based remote sensing data, and runoff gauge data. Model efficiency (the Nash-Sutcliffe coefficient) for simulated runoff increases from 0.57 to 0.68 in a high Alpine headwater catchment, and from 0.62 to 0.64 overall, with increasing snow model complexity. In particular, the results show that the introduction of the energy balance scheme reproduces daily fluctuations in the snowmelt rates that trace down to the channel stream. These daily cycles measured in snowmelt and resulting runoff rates could not be reproduced by the temperature index approach. In addition, accounting for lateral snow transport changes the seasonal distribution of modeled snowmelt amounts, which leads to a higher accuracy in modeling runoff characteristics.
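
    Of the snow schemes compared above, the simplest is the temperature-index (degree-day) method; a one-function sketch with an assumed melt factor:

        def daily_melt_mm(t_mean_c, degree_day_factor=3.0, t_threshold_c=0.0):
            """Melt (mm w.e./day) = DDF * max(T - T0, 0); the DDF is illustrative."""
            return degree_day_factor * max(t_mean_c - t_threshold_c, 0.0)

        print(daily_melt_mm(4.5))   # 13.5 mm w.e. on a +4.5 degC day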

  3. Structure-based capacitance modeling and power loss analysis for the latest high-performance slant field-plate trench MOSFET

    Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro

    2018-04-01

    Field-plate trench MOSFETs (FP-MOSFETs), with the features of ultralow on-resistance and very low gate–drain charge, are currently the mainstream of high-performance applications, and their advancement as low-voltage silicon power devices is continuing. However, owing to their structure, their output capacitance (Coss), which leads to the main power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for calculating power loss easily under various conditions. Appropriate equations were modeled for the Coss curve as three divided components. Output charge (Qoss) and stored energy (Eoss) calculated using the model corresponded well to technology computer-aided design (TCAD) simulation, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss was sufficiently suppressed; however, Qoss loss mainly increased with switching frequency. This analysis reveals that Qoss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
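
    The quantities named above follow from integrating the Coss(V) curve: Qoss(V) is the integral of Coss dv, and Eoss(V) the integral of v·Coss dv, with the stored energy dissipated once per hard-switched cycle. A minimal numeric sketch, with an invented Coss curve standing in for the paper's three-component model:

    import numpy as np

    v = np.linspace(0.0, 40.0, 401)             # drain voltage sweep [V]
    coss = 2e-9 / np.sqrt(1.0 + v / 2.0)        # hypothetical Coss(V) curve [F]

    dv = v[1] - v[0]
    qoss = np.sum(coss) * dv                    # output charge Qoss [C], rectangle rule
    eoss = np.sum(v * coss) * dv                # stored energy Eoss [J]

    f_sw = 1e6                                  # 1 MHz switching frequency
    print(f"Qoss = {qoss*1e9:.1f} nC, Qoss loss ~ {eoss*f_sw:.3f} W at 1 MHz")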

  4. Performance of the first short model 150 mm aperture Nb$_3$Sn Quadrupole MQXFS for the High-Luminosity LHC upgrade

    Chlachidze, G; Anerella, M; Bossert, R; Cavanna, E; Cheng, D; Dietderich, D; DiMarco, J; Felice, H; Ferracin, P; Ghosh, A; Grosclaude, P; Guinchard, M; Hafalia, A R; Holik, E; Izquierdo Bermudez, S; Krave, S; Marchevsky, M; Nobrega, F; Orris, D; Pan, H; Perez, J C; Prestemon, S; Ravaioli, E; Sabbi, G L; Salmi, T; Schmalzle, J; Stoynev, S; Strauss, T; Sylvester, C; Tartaglia, M; Todesco, E; Vallone, G; Velev, G; Wanderer, P; Wang, X; Yu, M

    2017-01-01

    The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb$_{3}$Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large-aperture Nb$_{3}$Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets for an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb$_{3}$Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built, with coils fabricated by both LARP and CERN. The magnet performance was tested at Fermilab’s vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, ramp rate and temperature dependence studies.

  5. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  6. Learning Apache Solr high performance

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of ZooKeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  7. High-performance composite chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  8. Performability Modelling Tools, Evaluation Techniques and Applications

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  9. Evaluating the performance of coupled snow-soil models in SURFEXv8 to simulate the permafrost thermal regime at a high Arctic site

    Barrere, Mathieu; Domine, Florent; Decharme, Bertrand; Morin, Samuel; Vionnet, Vincent; Lafaysse, Matthieu

    2017-09-01

    Climate change projections still suffer from a limited representation of the permafrost-carbon feedback. Predicting the response of permafrost temperature to climate change requires accurate simulations of Arctic snow and soil properties. This study assesses the capacity of the coupled land surface and snow models ISBA-Crocus and ISBA-ES to simulate snow and soil properties at Bylot Island, a high Arctic site. Field measurements complemented with ERA-Interim reanalyses were used to drive the models and to evaluate simulation outputs. Snow height, density, temperature, thermal conductivity and thermal insulance are examined to determine the critical variables involved in the soil and snow thermal regime. Simulated soil properties are compared to measurements of thermal conductivity, temperature and water content. The simulated snow density profiles are unrealistic, which is most likely caused by the lack of representation in snow models of the upward water vapor fluxes generated by the strong temperature gradients within the snowpack. The resulting vertical profiles of thermal conductivity are inverted compared to observations, with high simulated values at the bottom of the snowpack. Still, ISBA-Crocus manages to successfully simulate the soil temperature in winter. Results are satisfactory in summer, but the temperature of the top soil could be better reproduced by adequately representing surface organic layers, i.e., mosses and litter, and in particular their water retention capacity. Transition periods (soil freezing and thawing) are the least well reproduced because the high basal snow thermal conductivity induces an excessively rapid heat transfer between the soil and the snow in simulations. Hence, global climate models should carefully consider Arctic snow thermal properties, and especially the thermal conductivity of the basal snow layer, to perform accurate predictions of the permafrost evolution under climate change.

  10. High-Performance Composite Chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  11. Toward High-Performance Organizations.

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  12. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted, and a series of evaluations of next-generation HANA claddings, including in-pile and out-of-pile performance tests, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process of strips was established in order to apply the HANA alloys, which were originally developed for the claddings, to the spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over four rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to the foreign advanced claddings. We established the manufacturing condition controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps.

  13. High School and College Biology: A Multi-Level Model of the Effects of High School Courses on Introductory Course Performance

    Loehr, John F.; Almarode, John T.; Tai, Robert H.; Sadler, Philip M.

    2012-01-01

    In a climate where increasing numbers of students are encouraged to pursue post-secondary education, the level of preparedness students have for college-level coursework is not far from the minds of all educators, especially high school teachers. Specifically within the biological sciences, introductory biology classes often serve as the…

  14. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  15. Functional High Performance Financial IT

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware. HIPERFIT…

  16. High performance Mo adsorbent PZC

    Anon,

    1998-10-01

    We have developed Mo adsorbents for a natural Mo(n,γ)⁹⁹Mo-⁹⁹ᵐTc generator. Among them, the highest-performance adsorbent, which we call PZC, can adsorb about 250 mg-Mo/g. In this report, we show the structure, the Mo adsorption mechanism, and the other properties of PZC that are useful when carrying out examinations of Mo adsorption and elution of ⁹⁹ᵐTc. (author)

  17. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  18. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  19. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  20. High performance nuclear fuel element

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density.

  1. High Performance JavaScript

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  2. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  3. EDITORIAL: High performance under pressure High performance under pressure

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to an applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane … CMOS-based and fully compatible with future very-large scale integration of

  4. Performance of a High-Fidelity 4kW-Class Engineering Model PPU and Integration with HiVHAc System

    Pinero, Luis R.; Kamhawi, Hani; Shilo, Vlad

    2016-01-01

    The High Voltage Hall Accelerator (HiVHAc) propulsion system consists of a thruster, power processing unit (PPU), and propellant feed system. An engineering model PPU was developed by Colorado Power Electronics, Inc., funded by NASA's Small Business Innovation Research Program. This PPU uses an innovative 3-phase resonant converter to deliver 4 kW of discharge power over a wide range of input and output voltage conditions. The PPU includes a digital control interface unit that automatically controls the PPU and a xenon flow control module (XFCM). It interfaces with a control computer to receive high-level commands and relay telemetry through a MIL-STD-1553B interface. The EM PPU was thoroughly tested at GRC for functionality and performance at temperature limits and demonstrated total efficiencies as high as 95 percent. Integrated testing of the unit was performed with the HiVHAc thruster and the XFCM to demonstrate closed-loop control of discharge current with anode flow. Initiation of the main discharge and power throttling were also successfully demonstrated, and discharge oscillations were characterized.

  5. Data management system performance modeling

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc., are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
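
    As a concrete illustration of the static technique named above, a minimal sketch of the Liu-Layland sufficient schedulability test used in Rate Monotonic Analysis; the task set below is invented:

    def rma_schedulable(tasks):
        """tasks = [(compute_time, period), ...]; sufficient (not necessary) RMA test."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        return utilization <= n * (2 ** (1.0 / n) - 1)   # Liu-Layland bound

    tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]      # hypothetical periodic tasks
    print(rma_schedulable(tasks))                        # True: U = 0.30 <= 0.780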

  6. Computational Biology and High Performance Computing 2000

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  7. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research into the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
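
    For readers unfamiliar with the FDTD method mentioned above, a minimal one-dimensional sketch of its leapfrog update (normalized units, Courant number 0.5); this illustrates the scheme itself, not the parallel code developed under the grant:

    import numpy as np

    nz, nt, c = 200, 400, 0.5                  # grid cells, time steps, Courant number
    ez, hy = np.zeros(nz), np.zeros(nz - 1)    # staggered E and H fields

    for n in range(nt):
        hy += c * (ez[1:] - ez[:-1])           # update H from the curl of E
        ez[1:-1] += c * (hy[1:] - hy[:-1])     # update E from the curl of H
        ez[nz // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

    print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")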

  8. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    …-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential … -order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers …

  9. High performance polyethylene nanocomposite fibers

    A. Dorigato

    2012-12-01

    Full Text Available A high density polyethylene (HDPE matrix was melt compounded with 2 vol% of dimethyldichlorosilane treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin screw extruder and drawing at 125°C in air. Thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing the achievement of higher draw ratios with respect to the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, with a retention of the pristine tensile properties at break. Transmission electronic microscope (TEM images evidenced that the original morphology of the silica aggregates was disrupted by the applied drawing.

  10. Thermal-hydraulic analysis of heat transfer in subchannels of the European high performance supercritical Water-Cooled Reactor for different CFD turbulence models

    Castro, Landy Y.; Rojas, Leorlen Y.; Gamez, Abel; Rosales, Jesus; Gonzalez, Daniel; Garcia, Carlos, E-mail: lcastro@instec.cu, E-mail: leored1984@gmail.com, E-mail: agamezgmf@gmail.com, E-mail: jrosales@instec.cu, E-mail: danielgonro@gmail.com, E-mail: cgh@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba); Oliveira, Carlos Brayner de, E-mail: cabol@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Dominguez, Dany S., E-mail: dsdominguez@gmail.com [Universidade Estadual de Santa Cruz (UESC), Ilheus, BA (Brazil). Pos-Graduacao em Modelagem Computacional

    2015-07-01

    Chosen as one of six Generation-IV nuclear-reactor concepts, Supercritical Water-cooled Reactors (SCWRs) are expected to have high thermal efficiencies within the range of 45-50% owing to the reactor's high pressures and outlet temperatures. In this reactor, the primary water enters the core under supercritical-pressure conditions (25 MPa) at a temperature of 280 deg C and leaves it at a temperature of up to 510 deg C. Due to the significant changes in the physical properties of water at supercritical pressure, the system is susceptible to local temperature, density and power oscillations. The behavior of supercritical water in the core of the SCWR therefore needs to be sufficiently studied. Most of the methods available to predict the effects of the heat transfer phenomena within the pseudocritical region are based on empirical one-directional correlations, which do not capture the multidimensional effects and do not provide accurate results in regions such as the deteriorated heat transfer regime. In this paper, a computational fluid dynamics (CFD) analysis was carried out to study the thermal-hydraulic behavior of supercritical water flows in sub-channels of a typical European High Performance Light Water Reactor (HPLWR) fuel assembly using the commercial CFD code CFX-14. The steady-state equilibrium parameters were determined and the temperature and density distributions calculated. A comparative study of different turbulence models was carried out and the obtained results are discussed. (author)

  11. Thermal-hydraulic analysis of heat transfer in subchannels of the European high performance supercritical Water-Cooled Reactor for different CFD turbulence models

    Castro, Landy Y.; Rojas, Leorlen Y.; Gamez, Abel; Rosales, Jesus; Gonzalez, Daniel; Garcia, Carlos; Oliveira, Carlos Brayner de; Dominguez, Dany S.

    2015-01-01

    Chosen as one of six Generation-IV nuclear-reactor concepts, Supercritical Water-cooled Reactors (SCWRs) are expected to have high thermal efficiencies within the range of 45-50% owing to the reactor's high pressures and outlet temperatures. In this reactor, the primary water enters the core under supercritical-pressure conditions (25 MPa) at a temperature of 280 deg C and leaves it at a temperature of up to 510 deg C. Due to the significant changes in the physical properties of water at supercritical pressure, the system is susceptible to local temperature, density and power oscillations. The behavior of supercritical water in the core of the SCWR therefore needs to be sufficiently studied. Most of the methods available to predict the effects of the heat transfer phenomena within the pseudocritical region are based on empirical one-directional correlations, which do not capture the multidimensional effects and do not provide accurate results in regions such as the deteriorated heat transfer regime. In this paper, a computational fluid dynamics (CFD) analysis was carried out to study the thermal-hydraulic behavior of supercritical water flows in sub-channels of a typical European High Performance Light Water Reactor (HPLWR) fuel assembly using the commercial CFD code CFX-14. The steady-state equilibrium parameters were determined and the temperature and density distributions calculated. A comparative study of different turbulence models was carried out and the obtained results are discussed. (author)

  12. Performance concerns for high duty fuel cycle

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a deregulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low cost are the three major requirements for a nuclear power plant today, and the translation of these three requirements into the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance, high duty/energy fuel designs are needed. The concerns for high duty design include, among other items, core design methods, advanced safety analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment, MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and lead test programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs, such as cladding corrosion and incomplete RCCA insertion (IRI), have been resolved. As the core duty becomes more aggressive, other new issues, such as axial offset anomaly, need to be addressed; these are being addressed by combining the new design with advanced methodologies to meet the demanding needs of NPPs. The ability and strategy to meet high duty core requirements and operating flexibility while maintaining an acceptable balance of all technical issues are discussed in this paper. (authors)

  13. HIGH-PERFORMANCE COATING MATERIALS

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of the corrosion-preventing passive oxide layers that develop on their outermost surfaces to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scale, impairing the components' function and efficacy, and entailing a substantial amount of time to remove them. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective materials that are stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by engendering a considerable reduction in capital investment and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  14. Assessing Ecosystem Model Performance in Semiarid Systems

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
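
    A minimal sketch of the two benchmark scores reported above, root mean square error and the correlation coefficient, applied to a model-versus-observation NEE series (both series below are invented):

    import numpy as np

    obs = np.array([-1.2, 0.4, 2.1, 1.8, -0.5])   # hypothetical observed NEE
    sim = np.array([-0.3, 1.5, 3.9, 3.2, 0.8])    # hypothetical model NEE (biased high)

    rmse = np.sqrt(np.mean((sim - obs) ** 2))     # root mean square error
    r = np.corrcoef(obs, sim)[0, 1]               # Pearson correlation
    print(f"RMSE = {rmse:.2f}, r = {r:.2f}")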

  15. Development of a High Performance Spacer Grid

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component that supports the fuel rods and enhances the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high-performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents; among the candidates, 16 were chosen from the patents. 2. Two kinds of spacer grids were finally selected for the advanced LWR fuel after detailed performance tests of the candidates and of commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities were set up and the relevant test technologies established. 4. Mechanical/structural analysis models and technology for spacer grid performance were developed, and the analysis results were compared with the test results to enhance the reliability of the models.

  16. Resitting a high-stakes postgraduate medical examination on multiple occasions: nonlinear multilevel modelling of performance in the MRCP(UK examinations

    McManus IC

    2012-06-01

    Background: Failure rates in postgraduate examinations are often high and many candidates therefore retake examinations several or even many times. Little, however, is known about how candidates perform across those multiple attempts. A key theoretical question to be resolved is whether candidates pass at a resit because they have got better, having acquired more knowledge or skills, or whether they have got lucky, chance helping them to get over the pass mark. In the UK, the issue of resits has become of particular interest since the General Medical Council issued a consultation and is considering limiting the number of attempts candidates may make at examinations. Methods: Since 1999 the examination for Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) has imposed no limit on the number of attempts candidates can make at its Part 1, Part 2 or PACES (Clinical) examination. The present study examined the performance of candidates on the examinations from 2002/2003 to 2010, during which time the examination structure has been stable. Data were available for 70,856 attempts at Part 1 by 39,335 candidates, 37,654 attempts at Part 2 by 23,637 candidates and 40,303 attempts at PACES by 21,270 candidates, with the maximum number of attempts being 26, 21 and 14, respectively. The results were analyzed using multilevel modelling, fitting negative exponential growth curves to individual candidate performance. Results: The number of candidates taking the assessment falls exponentially at each attempt. Performance improves across attempts, with evidence in the Part 1 examination that candidates are still improving up to the tenth attempt, with a similar improvement up to the fourth attempt in Part 2 and the sixth attempt at PACES. Random effects modelling shows that candidates begin at a starting level, with performance increasing by a smaller amount at each attempt, with evidence of a maximum, asymptotic level for

  17. Resitting a high-stakes postgraduate medical examination on multiple occasions: nonlinear multilevel modelling of performance in the MRCP(UK) examinations.

    McManus, I C; Ludka, Katarzyna

    2012-06-14

    Failure rates in postgraduate examinations are often high and many candidates therefore retake examinations several or even many times. Little, however, is known about how candidates perform across those multiple attempts. A key theoretical question to be resolved is whether candidates pass at a resit because they have got better, having acquired more knowledge or skills, or whether they have got lucky, chance helping them to get over the pass mark. In the UK, the issue of resits has become of particular interest since the General Medical Council issued a consultation and is considering limiting the number of attempts candidates may make at examinations. Since 1999 the examination for Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) has imposed no limit on the number of attempts candidates can make at its Part 1, Part 2 or PACES (Clinical) examination. The present study examined the performance of candidates on the examinations from 2002/2003 to 2010, during which time the examination structure has been stable. Data were available for 70,856 attempts at Part 1 by 39,335 candidates, 37,654 attempts at Part 2 by 23,637 candidates and 40,303 attempts at PACES by 21,270 candidates, with the maximum number of attempts being 26, 21 and 14, respectively. The results were analyzed using multilevel modelling, fitting negative exponential growth curves to individual candidate performance. The number of candidates taking the assessment falls exponentially at each attempt. Performance improves across attempts, with evidence in the Part 1 examination that candidates are still improving up to the tenth attempt, with a similar improvement up to the fourth attempt in Part 2 and the sixth attempt at PACES. Random effects modelling shows that candidates begin at a starting level, with performance increasing by a smaller amount at each attempt, with evidence of a maximum, asymptotic level for candidates, and candidates showing variation in starting
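
    A minimal sketch of the negative exponential growth curve used in the modelling above, fitted here to a single (invented) candidate's marks with ordinary least squares rather than the full random-effects machinery:

    import numpy as np
    from scipy.optimize import curve_fit

    def neg_exp(attempt, start, asymptote, rate):
        """Performance rises from `start` toward `asymptote` across attempts."""
        return asymptote - (asymptote - start) * np.exp(-rate * (attempt - 1))

    attempts = np.arange(1, 7)
    marks = np.array([38.0, 45.0, 49.0, 52.0, 53.5, 54.0])  # hypothetical scores

    params, _ = curve_fit(neg_exp, attempts, marks, p0=(35.0, 55.0, 0.5))
    print("start, asymptote, rate =", np.round(params, 2))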

  18. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high-performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  19. Combining high productivity with high performance on commodity hardware

    Skovhede, Kenneth

    …-like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU-capable code. I then introduce NumCIL, a numeric library for the Common Intermediate Language. I can then utilize the vector programming model from NumCIL and map it to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups.

  20. Models for Automated Tube Performance Calculations

    Brunkhorst, C.

    2002-01-01

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance
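
    One simple model of the kind described above is the classical 3/2-power law for a triode, which returns plate current for a given pair of element voltages; the perveance and amplification factor below are invented rather than taken from any tube's operating curves:

    def plate_current(v_grid, v_plate, k=2e-6, mu=20.0):
        """Child-Langmuir style triode law: Ip = K*(Vg + Vp/mu)**1.5, clipped at zero."""
        drive = v_grid + v_plate / mu
        return k * drive ** 1.5 if drive > 0 else 0.0

    # Sweep grid bias at a fixed plate voltage, as one would read values off a curve.
    for vg in (-20.0, -10.0, 0.0):
        print(f"Vg = {vg:+5.1f} V -> Ip = {plate_current(vg, 400.0) * 1e3:.3f} mA")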

  1. Building Trust in High-Performing Teams

    Aki Soudunsaari

    2012-06-01

    Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  2. Off gas condenser performance modelling

    Cains, P.W.; Hills, K.M.; Waring, S.; Pratchett, A.G.

    1989-12-01

    A suite of three programmes has been developed to model the ruthenium decontamination performance of a vitrification plant off-gas condenser. The stages of the model are: condensation of water vapour, NOₓ absorption in the condensate, and RuO₄ absorption in the condensate. Juxtaposition of these stages gives a package that may be run on an IBM-compatible desktop PC. Experimental work indicates that the criterion [HNO₂] > 10[RuO₄], used to determine RuO₄ destruction in solution, is probably realistic under condenser conditions. Vapour pressures of RuO₄ over aqueous solutions at 70-90 °C are slightly lower than the values given by extrapolating the ln Kp vs. T⁻¹ relation derived from lower-temperature data. (author)

  3. Data harmonization and model performance

    The Joint Committee on Urban Storm Drainage of the International Association for Hydraulic Research (IAHR) and International Association on Water Pollution Research and Control (IAWPRC) was formed in 1982. The current committee members are (no more than two from a country): B. C. Yen, Chairman (USA); P. Harremoes, Vice Chairman (Denmark); R. K. Price, Secretary (UK); P. J. Colyer (UK), M. Desbordes (France), W. C. Huber (USA), K. Krauth (FRG), A. Sjoberg (Sweden), and T. Sueishi (Japan).The IAHR/IAWPRC Joint Committee is forming a Task Group on Data Harmonization and Model Performance. One objective is to promote international urban drainage data harmonization for easy data and information exchange. Another objective is to publicize available models and data internationally. Comments and suggestions concerning the formation and charge of the Task Group are welcome and should be sent to: B. C. Yen, Dept. of Civil Engineering, Univ. of Illinois, 208 N. Romine St., Urbana, IL 61801.

  4. High performance liquid chromatography of substituted aromatics with the metal-organic framework MIL-100(Fe): Mechanism analysis and model-based prediction.

    Qin, Weiwei; Silvestre, Martin Eduardo; Li, Yongli; Franzreb, Matthias

    2016-02-05

    Metal-organic framework (MOF) MIL-100(Fe) with well-defined thickness was homogeneously coated onto the outer surface of magnetic microparticles via a liquid-phase epitaxy method. The as-synthesized MIL-100(Fe) was used as a stationary phase for high-performance liquid chromatography (HPLC), and separations of two groups of mixed aromatic hydrocarbons (toluene, styrene and p-xylene; acetanilide, 2-nitroaniline and 1-naphthylamine) using methanol/water as the mobile phase were performed to evaluate its performance. Increasing the water content of the mobile phase can greatly improve the separations at the expense of a longer elution time. Stepwise elution significantly shortens the elution time of the acetanilide, 2-nitroaniline and 1-naphthylamine mixture, while still achieving a baseline separation. Combining the experimental results and in-depth modeling using recently developed chromatographic software (ChromX), adsorption equilibrium parameters, including the affinities and maximum capacities, for each analyte toward MIL-100(Fe) were obtained. In addition, the pore diffusivity of aromatic hydrocarbons within MIL-100(Fe) was determined to be 5×10⁻¹² m² s⁻¹. While the affinities of MIL-100(Fe) toward the analyte molecules differ greatly, the maximum capacities of the analytes lie in a narrow range, with q*max = 3.55 mol L⁻¹ for toluene, 3.53 mol L⁻¹ for styrene and p-xylene, and 3.12 mol L⁻¹ for the anilines, corresponding to approximately 842 toluene, 838 styrene or p-xylene, and 740 aniline molecules per MIL-100(Fe) unit cell, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
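
    A minimal sketch relating the reported maximum capacities and affinities through a single-site Langmuir isotherm, a common equilibrium model in chromatography software of this kind (the actual isotherm used by ChromX may differ); the affinity constant below is invented for illustration:

    def langmuir(c, q_max, k_aff):
        """Adsorbed concentration q* [mol/L] at liquid-phase concentration c [mol/L]."""
        return q_max * k_aff * c / (1.0 + k_aff * c)

    q_max_toluene = 3.55       # mol/L, from the abstract
    k_aff = 0.8                # L/mol, hypothetical affinity constant
    for c in (0.1, 1.0, 10.0):
        print(f"c = {c:5.1f} -> q* = {langmuir(c, q_max_toluene, k_aff):.2f} mol/L")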

  5. Vanadium Pentoxide Nanobelt-Reduced Graphene Oxide Nanosheet Composites as High-Performance Pseudocapacitive Electrodes: ac Impedance Spectroscopy Data Modeling and Theoretical Calculations

    Sanju Gupta

    2016-07-01

    Graphene (G) nanosheets and nanoribbons combined with vanadium pentoxide (V₂O₅) nanobelts (VNBs), forming GVNB composites with varying compositions, were synthesized via a one-step, low-temperature, facile hydrothermal decomposition method as high-performance electrochemical pseudocapacitive electrodes. VNBs are formed from vanadium pentoxide in the presence of graphene oxide (GO), a mild oxidant, which transforms into reduced GO (rGOHT), assisting in enhancing the electronic conductivity coupled with the mechanical robustness of the VNBs. From electron microscopy, surface-sensitive spectroscopy and other complementary structural characterization, the hydrothermally produced rGO nanosheets/nanoribbons are decorated with and inserted within the VNBs' layered crystal structure, which further confirmed the enhanced electronic conductivity of the VNBs. After the electrochemical properties of the GVNBs were investigated, the specific capacitance Csp was determined from cyclic voltammetry (CV) with varying scan rate and from galvanostatic charging-discharging (V-t) profiles with varying current density. The rGO-rich composite V1G3 (i.e., V₂O₅/GO = 1:3) showed superior specific capacitance, followed by the V₂O₅-rich composite V3G1 (V₂O₅/GO = 3:1), as compared to the V1G1 (V₂O₅/GO = 1:1) composite, besides the constituents, i.e., rGO, rGOHT and VNBs. Composites V1G3 and V3G1 also showed excellent cyclic stability and a capacitance retention of >80% after 500 cycles at the highest specific current density. Furthermore, by performing extensive simulations and modeling of electrochemical impedance spectroscopy data, we determined various circuit parameters, including charge transfer and solution resistance, double-layer and low-frequency capacitance, Warburg impedance and the constant phase element. The detailed analyses provided greater insights into the physical-chemical processes occurring at the electrode-electrolyte interface and highlighted the comparative performance of
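
    A minimal sketch of the kind of equivalent circuit fitted to ac impedance data as described above: solution resistance in series with a charge-transfer branch (including a Warburg element) in parallel with a constant phase element. All parameter values below are invented:

    import numpy as np

    def randles_z(freq, rs=1.2, rct=4.0, q=0.02, n=0.9, sigma=3.0):
        """Complex impedance Z(f) [ohm] of Rs + (CPE || (Rct + Warburg))."""
        w = 2 * np.pi * freq
        z_warburg = sigma * (1 - 1j) / np.sqrt(w)    # semi-infinite diffusion
        z_cpe = 1.0 / (q * (1j * w) ** n)            # constant phase element
        return rs + 1.0 / (1.0 / z_cpe + 1.0 / (rct + z_warburg))

    for f in np.logspace(-1, 4, 6):                  # 0.1 Hz .. 10 kHz
        z = randles_z(f)
        print(f"{f:9.1f} Hz: Z = {z.real:6.2f} {z.imag:+6.2f}j ohm")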

  6. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  7. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of … been debatable...

  8. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  9. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  10. Behavior model for performance assessment

    Brown-VanHoozer, S. A.

    1999-01-01

    Every individual channels information differently based on their preference for the sensory modality or representational system (visual, auditory or kinesthetic) we tend to favor most (our primary representational system (PRS)). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged to generalize behavior(s). However, without establishing a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result

  11. Behavior model for performance assessment.

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently based on their preference for the sensory modality or representational system (visual, auditory or kinesthetic) we tend to favor most (our primary representational system (PRS)). Therefore, some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task--the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place and how to ask questions is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged to generalize behavior(s). However, without establishing a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  12. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes recent progress in the development of high-performance supercapacitors bas...

  13. Critical review of glass performance modeling

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process
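
    A commonly used general form of the transition-state-theory rate law referenced above (a sketch of the standard form from the glass dissolution literature, not necessarily the exact expression used in the reviewed models) is

        r = k_0 \, S \, a_{\mathrm{H^+}}^{\eta} \, e^{-E_a/RT} \left( 1 - \frac{Q}{K} \right)

    where r is the dissolution rate, k_0 the intrinsic rate constant, S the reactive surface area, a_{H^+} the hydrogen-ion activity, E_a the activation energy, Q the ion-activity product of the rate-controlling reaction, and K its equilibrium constant. The affinity term (1 - Q/K) is what ties the rate law to the reaction-path geochemical model: as the solution approaches saturation, Q approaches K and the rate tends to zero.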

  14. Critical review of glass performance modeling

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)]

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  15. Transport in JET high performance plasmas

    2001-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot-ion H-mode scenario, used extensively in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neoclassical theory, is discussed. (author)

  16. Transport in JET high performance plasmas

    1999-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot-ion H-mode scenario, used extensively in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neoclassical theory, is discussed. (author)

  17. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    % rutile TiO2 nanoparticles showed excellent protection of the braid from PBO. Only 7.5% strength loss was observed. To optimize the degree of protection of the sheath loaded with UV blocker particles, computational models were developed to optimize the protective layer thickness/weight and the amount of UV particles that provide the maximum protection with the lightest protective layer and the minimum amount of UV particles. The simulated results were found to be higher than the experimental results due to the tendency of nanoparticles to agglomerate in real experiments. The third approach to achieving maximum protection with minimum added weight is constructing a sleeve of woven fabric from SpectraRTM (ultra-high-molecular-weight polyethylene (UHMWPE) high performance fiber), which is known to resist UV. Covering the braid of PBO fiber with SpectraRTM woven fabric provides a hybrid structure with two compatible components that can share the load and thus maintain the high strength-to-weight ratio. Although the SpectraRTM fabric had the maximum cover factor, 20% of visible light and about 15% of UV were able to penetrate the fabric. This transmittance of UV-VIS light negatively affected the protection performance of the SpectraRTM woven fabric layer. It is suggested that the SpectraRTM fabric be coated with a thin layer (mentioned earlier) containing UV blocker for additional protection while maintaining its strength contribution to the hybrid structure. To maximize the strength-to-weight ratio of the hybrid structure (with a core of PBO braid and a sheath of SpectraRTM woven fabric), an established finite element model was utilized. The theoretical results using the finite element theory indicated that, by controlling the bending rigidity of the filling yarn of the SpectraRTM fabric, the extension at peak load of the woven fabric in the warp direction (loading direction) could be controlled to match the braid extension at peak load. The match in the extension at peak load of the two

  18. Model Performance Evaluation and Scenario Analysis (MPESA)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM)
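
    Evaluation of how well a model predicts a time series, as MPESA does, typically reduces to goodness-of-fit statistics between simulated and observed series. A minimal sketch of two such statistics (the data and the choice of metrics are illustrative, not MPESA's actual output):

        import numpy as np

        def rmse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return float(np.sqrt(np.mean((sim - obs) ** 2)))

        def nash_sutcliffe(obs, sim):
            # 1.0 is a perfect fit; <= 0 means no better than the observed mean
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return float(1.0 - np.sum((sim - obs) ** 2)
                         / np.sum((obs - obs.mean()) ** 2))

        # hypothetical daily flows: observed vs. HSPF/SWMM-style simulated output
        obs = [3.1, 2.9, 4.2, 8.5, 6.0, 4.4]
        sim = [3.0, 3.2, 4.0, 7.9, 6.3, 4.1]
        print(rmse(obs, sim), nash_sutcliffe(obs, sim))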

  19. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  20. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)]

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  1. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  2. SISYPHUS: A high performance seismic inversion factory

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of larger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
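
    The inversion state database described above tracks, per iteration, the model, the processed data and the misfits, so that the workflow manager can decide what to run next. A minimal sketch of such hierarchical state tracking with sqlite3 (the table layout is hypothetical, not the actual SISYPHUS schema):

        import sqlite3

        con = sqlite3.connect("inversion_state.db")
        con.executescript("""
        CREATE TABLE IF NOT EXISTS iteration (
            id INTEGER PRIMARY KEY, model_path TEXT, status TEXT);
        CREATE TABLE IF NOT EXISTS misfit (
            iteration_id INTEGER REFERENCES iteration(id),
            event TEXT, station TEXT, value REAL);
        """)
        con.execute("INSERT INTO iteration (model_path, status) VALUES (?, ?)",
                    ("models/m001.h5", "forward_done"))
        con.execute("INSERT INTO misfit VALUES (1, 'ev_2015_001', 'CH.ZUR', 0.42)")
        total, = con.execute(
            "SELECT SUM(value) FROM misfit WHERE iteration_id = 1").fetchone()
        print("iteration 1 total misfit:", total)
        con.commit()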

  3. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display is of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and fly-throughs of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations

  4. DOE research in utilization of high-performance computers

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  5. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)]

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  6. Constructing and Validating High-Performance MIEC-SVM Models in Virtual Screening for Kinases: A Better Way for Actives Discovery.

    Sun, Huiyong; Pan, Peichen; Tian, Sheng; Xu, Lei; Kong, Xiaotian; Li, Youyong; Dan Li; Hou, Tingjun

    2016-04-22

    The MIEC-SVM approach, which combines molecular interaction energy components (MIEC) derived from free energy decomposition with a support vector machine (SVM), has been found effective in capturing the energetic patterns of protein-peptide recognition. However, the performance of this approach in identifying small-molecule inhibitors of drug targets has not been well assessed and validated by experiments. Here, by combining different model construction protocols, the issues involved in developing the best MIEC-SVM models were first explored for three kinase targets (ABL, ALK, and BRAF). For the investigated targets, the optimized MIEC-SVM models performed much better on the tested datasets than models based on the default SVM parameters and than Autodock. The proposed strategy was then used to screen the Specs database for potential inhibitors of the ALK kinase. The optimized MIEC-SVM model identified 7 actives with IC50 < 10 μM out of 50 purchased compounds (a hit rate of 14%, with 4 at the nM level), performing much better than Autodock (3 actives with IC50 < 10 μM out of 50 purchased compounds, a hit rate of 6%, with 2 at the nM level), suggesting that the proposed strategy is a powerful tool for structure-based virtual screening.
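
    Schematically, MIEC-SVM trains a kernel SVM on vectors of per-residue interaction energy components, and, as the abstract stresses, tuning the SVM hyperparameters is what separates a useful model from a default one. A sketch with scikit-learn on synthetic features (real MIECs come from MD-based free energy decomposition; the feature and label values here are placeholders):

        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 40))      # rows: compounds; cols: per-residue MIEC terms
        y = rng.integers(0, 2, size=200)    # 1 = active, 0 = inactive (synthetic labels)

        # optimized C/gamma vs. the defaults is the paper's key protocol choice
        grid = GridSearchCV(SVC(kernel="rbf"),
                            {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},
                            cv=5)
        grid.fit(X, y)
        print(grid.best_params_, grid.best_score_)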

  7. Rapid determination of retinoic acid and its main isomers in plasma by second-order high-performance liquid chromatography data modeling.

    Teglia, Carla M; Cámara, María S; Goicoechea, Héctor C

    2014-12-01

    This paper reports the development of a method based on high-performance liquid chromatography (HPLC) coupled to second-order data modeling with multivariate curve resolution-alternating least-squares (MCR-ALS) for quantification of retinoic acid and its main isomers in plasma in only 5.5 min. The compounds retinoic acid (RA), 13-cis-retinoic acid, 9-cis-retinoic acid, and 9,13-di-cis-retinoic acid were partially separated by use of a Poroshell 120 EC-C18 (3.0 mm × 30 mm, 2.7 μm particle size) column. Overlap not only among the target analytes but also with the plasma interferents was resolved by exploiting the second-order advantage of the multi-way calibration. A validation study led to the following results: trueness with recoveries of 98.5-105.9 % for RA, 95.7-110.1 % for 13-cis-RA, 97.1-110.8 % for 9-cis-RA, and 99.5-110.9 % for 9,13-di-cis-RA; repeatability with RSD of 3.5-3.1 % for RA, 3.5-1.5 % for 13-cis-RA, 4.6-2.7 % for 9-cis-RA, and 5.2-2.7 % for 9,13-di-cis-RA (low and high levels); and intermediate precision (inter-day precision) with RSD of 3.8-3.0 % for RA, 2.9-2.4 % for 13-cis-RA, 3.6-3.2 % for 9,13-di-cis-RA, and 3.2-2.9 % for 9-cis-RA (low and high levels). In addition, a robustness study revealed the method was suitable for monitoring patients with dermatological diseases treated with pharmaceutical products containing RA and 13-cis-RA.
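
    MCR-ALS exploits the bilinear structure of the HPLC diode-array data: the time x wavelength matrix D factors as D ≈ C Sᵀ, with C the elution (concentration) profiles and S the spectra, refined by alternating least squares under non-negativity. A bare-bones sketch (real MCR-ALS adds constraints such as closure and careful initialization; the synthetic data here only illustrate the decomposition):

        import numpy as np

        def mcr_als(D, n_components, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            C = np.abs(rng.normal(size=(D.shape[0], n_components)))
            for _ in range(n_iter):
                S = np.linalg.lstsq(C, D, rcond=None)[0].clip(min=0)        # spectra
                C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T.clip(min=0)  # profiles
            return C, S

        # synthetic two-component chromatogram with overlapping elution peaks
        t = np.linspace(0.0, 1.0, 60)
        C_true = np.column_stack([np.exp(-((t - 0.40) / 0.08) ** 2),
                                  np.exp(-((t - 0.55) / 0.08) ** 2)])
        S_true = np.abs(np.random.default_rng(1).normal(size=(2, 30)))
        C_est, S_est = mcr_als(C_true @ S_true, n_components=2)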

  8. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program ''Ceramic High-performance Materials'' pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of design appropriate to the material. The program ''Ceramic High-performance Materials'' has resulted in contributions to the understanding of fundamental interrelationships in materials science, which are summarized in the present volume - broken down into eight special aspects. (orig./RHM)

  9. High Burnup Fuel Performance and Safety Research

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high-burnup, high-performance nuclear fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and its superiority in predicting fuel performance was proven through the IAEA CRP FUMEX-II program, the INFRA code can be used for commercial purposes in industry. The INFRA code has been provided to, and used productively by, domestic universities and relevant institutes, and it has served in industry as a reference code for the development of an in-house fuel rod design code.

  10. Brain inspired high performance electronics on flexible silicon

    Sevilla, Galo T.; Rojas, Jhonathan Prieto; Hussain, Muhammad Mustafa

    2014-01-01

    The brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state of the art silicon industry components

  11. SBIR PHASE I FINAL REPORT: Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States)]; Punch, Edward F. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States)]; Kurth, Elizabeth A. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States)]; Kennedy, James C. [Engineering Mechanics Corporation of Columbus (Emc2), Columbus, OH (United States)]

    2013-12-02

    fabrication costs. VFT currently is tied to a commercial solver, which makes it prohibitively expensive for use by SMEs, as there is a significant licensing cost for the solver over and above the relatively minimal cost of VFT itself. Emc2 developed this software code over a number of years in close cooperation with CAT (Peoria, IL), which currently uses the code exclusively for worldwide fabrication, product design and development activities. The use of VFT has allowed CAT to move directly from design to product fabrication and has helped eliminate (to a large extent) new-product prototyping and subsequent testing. Additionally, CAT has been able to eliminate or reduce costly one-of-a-kind appliances used to reduce distortion effects due to fabrication. In this context, SMEs can realize the same kind of improved product quality and reduced cost through adoption of the adapted version of VFT for design and subsequent manufacture of new products. Emc2's DOE SBIR Phase I effort successfully adapted VFT so that SMEs have access to this sophisticated and proven methodology that is quick, accurate and cost effective and available on demand to address weld-simulation and fabrication problems prior to manufacture. The open source code WARP3D, a high performance finite element code mainly used in fracture and damage assessment of structures, was modified so that computational weld problems can be solved efficiently on multiple processors and threads with VFT. The thermal solver for VFT, based on a series of closed-form solution approximations, was enhanced for solution on multiple processors, greatly increasing overall speed. In addition, the graphical user interface (GUI) has been tailored to integrate these solutions with WARP3D. The GUI is used to define all the weld pass descriptions, number of passes, material properties, consumable properties, weld speed, etc., for the structure to be modeled. The GUI was improved to make it user-friendly for engineers that are not experts in finite

  12. Performance modeling of network data services

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and the remote users connected to them, the networking components must be optimized against a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.
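
    At its simplest, the kind of source-to-sink model described here combines per-hop latency with serialization at the bottleneck link; a toy first-order sketch (all parameter values illustrative):

        def transfer_time(n_bytes, hops, per_hop_latency_s, bottleneck_Bps):
            """Crude model: propagation over hops + serialization at the bottleneck."""
            return hops * per_hop_latency_s + n_bytes / bottleneck_Bps

        # 1 GB from a supercomputer to a remote client over 4 hops,
        # 2 ms per hop, with a 10 MB/s bottleneck link
        print(transfer_time(1e9, 4, 2e-3, 10e6), "s")   # ~100 s, bandwidth-dominated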

  13. Sepsis reconsidered: Identifying novel metrics for behavioral landscape characterization with a high-performance computing implementation of an agent-based model.

    Cockrell, Chase; An, Gary

    2017-10-07

    Sepsis affects nearly 1 million people in the United States per year, has a mortality rate of 28-50% and requires more than $20 billion a year in hospital costs. Over a quarter century of research has not yielded a single reliable diagnostic test or a directed therapeutic agent for sepsis. Central to this insufficiency is the fact that sepsis remains a clinical/physiological diagnosis representing a multitude of molecularly heterogeneous pathological trajectories. Advances in computational capabilities offered by High Performance Computing (HPC) platforms call for an evolution in the investigation of sepsis to attempt to define the boundaries of traditional research (bench, clinical and computational) through the use of computational proxy models. We present a novel investigatory and analytical approach, derived from how HPC resources and simulation are used in the physical sciences, to identify the epistemic boundary conditions of the study of clinical sepsis via the use of a proxy agent-based model of systemic inflammation. Current predictive models for sepsis use correlative methods that are limited by patient heterogeneity and data sparseness. We address this issue by using an HPC version of a system-level validated agent-based model of sepsis, the Innate Immune Response Agent-Based Model (IIRABM), as a proxy system in order to identify boundary conditions for the possible behavioral space for sepsis. We then apply advanced analysis derived from the study of Random Dynamical Systems (RDS) to identify novel means of characterizing system behavior and providing insight into the tractability of traditional investigatory methods. The behavior space of the IIRABM was examined by simulating over 70 million sepsis patients for up to 90 days in a sweep across the following parameters: cardio-respiratory-metabolic resilience; microbial invasiveness; microbial toxigenesis; and degree of nosocomial exposure. In addition to using established methods for describing parameter space, we
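
    Schematically, the behavioral-landscape sweep enumerates the four-parameter grid and farms out replicate patient simulations across workers. A sketch with multiprocessing standing in for the HPC scheduler (run_iirabm is a hypothetical stand-in for one model run, and the grid values are illustrative):

        import itertools
        import multiprocessing as mp

        def run_iirabm(params):
            resilience, invasiveness, toxigenesis, nosocomial = params
            # placeholder for one simulated patient trajectory (up to 90 days)
            return {"params": params, "died": hash(params) % 2 == 0}

        grid = list(itertools.product(
            [0.5, 1.0, 1.5],   # cardio-respiratory-metabolic resilience
            [1, 2, 3],         # microbial invasiveness
            [1, 2, 3],         # microbial toxigenesis
            [0, 1]))           # degree of nosocomial exposure

        if __name__ == "__main__":
            with mp.Pool() as pool:
                results = pool.map(run_iirabm, grid)
            print(sum(r["died"] for r in results), "of", len(results), "runs died")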

  14. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. Analytical models for development of high performance metal targets irradiated in IPEN-CNEN/SP Cyclone 30 and Cyclone 18 cyclotrons

    Oliveira, Henrique Barcellos de

    2009-01-01

    Analytical models were developed that describe the basic elements of metal target irradiation in cyclotrons. Important parameters such as the maximum beam current and the thermal power deposited on the target were obtained and compared with practical situations. For the first time, the features found in intense thermal transients, when high proton concentrations in a small region of the beam cause steep temperature gradients in small regions of the target, were determined analytically. Comparison with results found in the literature showed that the developed models are satisfactory, in view of the limitations of the proposed model. (author)
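
    The central energy-balance relation in such models is elementary: a proton beam of current I (in A) fully stopped in the target at kinetic energy E (in eV) deposits thermal power P = I·E watts, all of which the cooling design must remove. A quick sketch (values illustrative; the Cyclone 18 accelerates protons to roughly 18 MeV):

        # thermal power deposited by a fully stopped proton beam
        I = 100e-6          # beam current, A (100 microamps)
        E = 18e6            # proton kinetic energy, eV
        P = I * E           # deposited power, W
        print(P, "W")       # -> 1800 W concentrated in a small target volume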

  17. Performance assessment modeling of pyrometallurgical process wasteforms

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  18. High performance liquid chromatographic determination of ...

    STORAGESEVER

    2010-02-08

    ) high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical.

  19. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  20. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  1. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine. Hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  2. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  3. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be efficiently implemented in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  4. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. Groundwater flow reference model of the Meuse/Haute-Marne region: implications on performance analysis of high and intermediate level and long lived radwaste repository in clay formation

    Benabderrahmane, H.; Plas, F.; Yven, B.; Cornaton, F.; Perrochet, P.; Kerrou, J.; Stucki, J.; Caloz, P.

    2010-01-01

    Document available in extended abstract form only. An integrated multi-scale hydrogeological conceptual model of the Paris basin and the Meuse/Haute-Marne sector was developed to study the groundwater flow and solute transport behaviour in the multi-layered aquifer system and around the Callovo-Oxfordian clay formation as a potential host for the French high and intermediate level and long lived radioactive waste. The Paris basin system (200,000 km^2) consists of 27 aquiferous and semi-permeable (aquitard) hydrogeological units from the Trias to the Quaternary, affected by 80 regional faults. It provides boundary conditions for the refined aquifer system of the 250 km^2 Meuse/Haute-Marne site, which includes 27 layers from the Trias to the Portlandian. The Callovo-Oxfordian clay formation is located at a mean depth of 500 m, with a minimum thickness of 130 m and hydraulic conductivity values of the order of 10^-14 m/s. The numerical steady-state flow solution is calculated on a finite element mesh of about 3 million 2-D and 3-D linear elements (GEOS-CHYN 2009). At the top surface, the mesh refinement (250 m to 50 m) is locally constrained by the fault lineaments and the river network. At depth, it takes into account the 10 m to 500 m throw of the faults and the vertical flow through semi-permeable units. Triangular elements are used to represent a top surface layer that includes the weathered part of the outcropping formations, and quadrangular elements allow for the representation of the fault planes at depth. The layers are discretized into 6-noded wedge elements, while 4-noded tetrahedra and 5-noded pyramids are used to handle the pinching and the outcrop of the layers. The flow boundary conditions and source-sink terms are: - Specified hydraulic heads (Dirichlet) along the rivers and, elsewhere on the top surface, inflow fluxes (Neumann) derived from the hydrological balance (with a mean inflow of 240 mm/year). - Specified hydraulic heads (Dirichlet) at the Manche coast side. - No
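
    The steady-state problem being solved on this mesh is the standard saturated groundwater flow equation (written here in its generic form, not ANDRA's exact formulation):

        \nabla \cdot (K \nabla h) = 0, \qquad q = -K \nabla h

    where h is the hydraulic head, K the hydraulic conductivity tensor and q the Darcy flux; the Dirichlet conditions fix h along the rivers and the coast, while the Neumann recharge fluxes enter through the top surface. With K of order 10^-14 m/s in the Callovo-Oxfordian, the clay formation acts as a diffusion-dominated barrier between the surrounding aquifers.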

  6. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Well-performing companies have always been linked with effective governance, which is generally reflected through an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Diversity is now perceived as capable of influencing corporate performance, given the likelihood of meeting the variety of needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  7. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  8. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)]

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  9. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  10. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρθ,s, or 5-10 times the ion gyroradius ρs, over the radial region 0.2 < ρ < 0.8. To determine whether Δr scales as ρθ,s or as 5-10 times ρs, an experiment was performed which modified ρθ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρθ,s, which was similar to low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length in high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas
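
    For reference, the two scale lengths being compared are (standard definitions):

        \rho_s = \frac{\sqrt{m_i T_e}}{e B}, \qquad \rho_{\theta,s} = \frac{\sqrt{m_i T_e}}{e B_\theta}

    so that ρθ,s/ρs = B/Bθ. Because ρθ,s involves the poloidal field Bθ, it can be varied through the plasma current while ρs and other parameters are held approximately fixed, which is the basis of the scaling experiment described above.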

  11. Calibration of PMIS pavement performance prediction models.

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). : Ensure logical performance superiority patte...

  12. A unified tool for performance modelling and prediction

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony
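
    Whatever the front-end notation, the compiled performance model is ultimately a continuous-time Markov chain whose steady-state distribution is solved numerically; PRISM's contribution is representing that chain symbolically with MTBDDs rather than dense matrices. A minimal dense-matrix sketch of the underlying computation (rates illustrative):

        import numpy as np

        # generator matrix Q of a 3-state CTMC (each row sums to zero)
        Q = np.array([[-2.0,  1.5,  0.5],
                      [ 1.0, -1.0,  0.0],
                      [ 0.2,  0.8, -1.0]])

        # steady state: solve pi Q = 0 subject to sum(pi) = 1
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(pi)   # long-run probability of each state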

  13. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability covering the range of wave propagation required for nuclear explosion monitoring (NEM), from the buried nuclear device to the seismic sensor, and thereby to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM, including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of a UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNEs in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  14. Modeling High Pressure Micro Hollow Cathode Discharges

    Boeuf, Jean-Pierre; Pitchford, Leanne

    2004-01-01

    This report results from a contract tasking CPAT as follows: The Grantee will perform theoretical modeling of point, surface, and volume high-pressure plasmas created using Micro Hollow Cathode Discharge sources...

  15. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  16. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  17. Performance model for a CCTV-MTI

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC-sponsored LLL assessment procedure for MC and A systems, which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey-level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light-level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance
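
    For Gaussian camera noise, threshold detection of the kind modeled here has closed-form performance: with rms noise σ, threshold T and an intruder-induced grey-level change Δ, the false alarm and detection probabilities are P_fa = Q(T/σ) and P_d = Q((T - Δ)/σ), where Q is the Gaussian tail function. A sketch (numbers illustrative, not the report's design values):

        import math

        def gauss_tail(x):                 # Q(x): P(N(0,1) > x)
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        sigma = 1.0                        # rms camera noise, grey levels
        T = 4.0 * sigma                    # detection threshold
        delta = 6.0                        # intruder-induced grey-level change
        print("P_fa per pixel-frame:", gauss_tail(T / sigma))
        print("P_d:", gauss_tail((T - delta) / sigma))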

  18. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  19. Modeling of long High Voltage AC Underground

    Gudmundsdottir, Unnur Stella; Bak, Claus Leth; Wiechowski, W. T.

    2010-01-01

    cable models, perform highly accurate field measurements for validating the model and identifying possible disadvantages of the cable model. Furthermore, the project suggests and implements improvements and validates them against several field measurements. It is shown in this paper how a new method...

  20. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  1. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  2. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    A Fuzzy Inference System for predicting the performance of steam turbine

  3. Performance of several Saccharomyces strains for the alcoholic fermentation of sugar-sweetened high-strength wastewaters: Comparative analysis and kinetic modelling.

    Comelli, Raúl N; Seluy, Lisandro G; Isla, Miguel A

    2016-12-25

    This work focuses on the performance of ten commercial Saccharomyces yeast strains in the batch alcoholic fermentation of sugars contained in selected industrial wastewaters from the soft-drink industry. Fermentation has been applied successfully to treat these effluents prior to their disposal. Although many strains were investigated, similar behaviour was observed among all of the Saccharomyces strains tested. When media were inoculated with 2 g/L of yeast, all strains were able to completely consume the available sugars in less than 14 h. Thus, any of the strains studied in this work could be used in non-conventional wastewater treatment processes based on alcoholic fermentation. However, ethanol production varied between strains, and these differences could be significant from a production point of view. Saccharomyces bayanus produced the most ethanol, with a mean yield of 0.44 g ethanol per g sugar consumed and an ethanol specific production rate of 5.96 g ethanol/(L·h). As the assayed soft-drink wastewaters contain about 105 g/L of fermentable sugars, the ethanol concentration achieved after fermentation was 46.2 g/L. A rigorous kinetic modelling methodology was used to model the Saccharomyces bayanus fermentation process. The kinetic model included coupled mass balances and a minimal number of parameters. A simple unstructured model based on the Andrews equation (substrate inhibition) was developed. This model satisfactorily described biomass growth, sugar consumption and bioethanol production. In addition to providing insights into the fermentative performance of potentially relevant strains, this work can facilitate the design of large-scale ethanol production processes that use wastewaters from the sugar-sweetened beverage industry as feedstock.
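
    The Andrews (substrate-inhibition) equation referenced above is μ(S) = μ_max S / (K_s + S + S²/K_i); coupled to simple mass balances for biomass, sugar and ethanol, it gives a compact ODE model. A sketch with scipy (parameter values illustrative, not the fitted values from the paper; the yields are chosen to reproduce the reported 0.44 g/g):

        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, Ks, Ki = 0.5, 2.0, 150.0    # 1/h, g/L, g/L (illustrative)
        Yxs, Yps = 0.05, 0.44               # g biomass and g ethanol per g sugar

        def rhs(t, y):
            X, S, P = y
            mu = mu_max * S / (Ks + S + S ** 2 / Ki)   # Andrews equation
            dX = mu * X                                # biomass growth
            dS = -dX / Yxs                             # sugar consumption
            dP = Yps * (-dS)                           # ethanol production
            return [dX, dS, dP]

        # 2 g/L inoculum, 105 g/L fermentable sugars, 14 h batch
        sol = solve_ivp(rhs, (0.0, 14.0), [2.0, 105.0, 0.0], max_step=0.05)
        print("final ethanol, g/L:", round(sol.y[2, -1], 1))   # ~46 g/L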

  4. Experimental and modeling study of high performance direct carbon solid oxide fuel cell with in situ catalytic steam-carbon gasification reaction

    Xu, Haoran; Chen, Bin; Zhang, Houcheng; Tan, Peng; Yang, Guangming; Irvine, John T. S.; Ni, Meng

    2018-04-01

    In this paper, 2D models for direct carbon solid oxide fuel cells (DC-SOFCs) with an in situ catalytic steam-carbon gasification reaction are developed. The simulation results are found to be in good agreement with experimental data. The performance of DC-SOFCs with and without catalyst is compared at different operating potentials, anode inlet gas flow rates and operating temperatures. It is found that adding a suitable catalyst can significantly speed up the in situ steam-carbon gasification reaction and improve the performance of DC-SOFCs with H2O as the gasification agent. The potential for co-generation of syngas and electricity from the fuel cell is also evaluated, where the composition of H2 and CO in the syngas can be adjusted by controlling the anode inlet gas flow rate. In addition, the performance of DC-SOFCs and the percentage of fuel in the outlet gas both increase with increasing operating temperature. At a reduced temperature (below 800 °C), good performance of DC-SOFCs can still be obtained with in situ catalytic carbon gasification by steam. The results of this study form a solid foundation for understanding the important effects of the catalyst and related operating conditions on H2O-assisted DC-SOFCs.
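
    The coupling modeled here is, schematically, a two-step loop in the anode chamber:

        \mathrm{C + H_2O \rightarrow CO + H_2} \quad \text{(catalytic steam gasification)}
        \mathrm{H_2 + O^{2-} \rightarrow H_2O + 2e^-} \quad \text{(anode electrochemical oxidation)}

    The steam consumed by gasification is regenerated electrochemically at the anode, so speeding up the gasification step with a catalyst directly increases the fuel supplied to the electrode, which is why the catalyst has such a strong effect on cell performance.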

  5. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics, particularly for medical applications, have attracted much attention with the rise of smartphones, because the coupling of such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, high performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  6. Investigation the performance of 0-D and 3-d combustion simulation softwares for modelling HCCI engine with high air excess ratios

    Gökhan Coşkun

    2017-10-01

    In this study, the performance of zero- and three-dimensional simulation codes used to simulate a homogeneous charge compression ignition (HCCI) engine fueled with primary reference fuel (PRF, 85% iso-octane and 15% n-heptane) was investigated. The 0-D code, called SRM Suite (Stochastic Reactor Model), which can simulate engine combustion using a stochastic reactor model technique, was used. Ansys-Fluent, which can perform computational fluid dynamics (CFD) simulations, was used for the 3-D engine combustion simulations. Both commercial codes were evaluated in terms of combustion, heat transfer and emissions in an HCCI engine. A chemical kinetic mechanism developed by Tsurushima, including 33 species and 38 reactions for the surrogate PRF fuel, was used for the combustion simulations. The analysis showed that each code has advantages over the other.
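
    To convey the flavor of the 0-D chemistry-driven approach, here is a much-simplified single-zone, constant-volume autoignition sketch with Cantera (gri30 and methane stand in for the Tsurushima PRF mechanism, which is not bundled with Cantera; a real SRM adds stochastic mixing and a moving piston on top of this kind of reactor):

        import cantera as ct

        gas = ct.Solution("gri30.yaml")     # placeholder mechanism
        gas.TPX = 1100.0, 20e5, "CH4:1.0, O2:2.0, N2:7.52"  # stoichiometric charge
        reactor = ct.IdealGasReactor(gas)
        sim = ct.ReactorNet([reactor])

        t = 0.0
        while t < 0.1:                      # march in time, watching for ignition
            t = sim.step()
            if reactor.T > 1900.0:
                print(f"ignition at t = {t * 1e3:.2f} ms, T = {reactor.T:.0f} K")
                break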

  7. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  8. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style, with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions of vSphere 5.1.

  9. Alternative High-Performance Ceramic Waste Forms

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL from the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water, using zirconia media in a polyethylene jar, for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selective area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  10. High performance parallel I/O

    Prabhat

    2014-01-01

    Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har
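
    As a concrete taste of the programming model the book covers, the sketch below uses MPI-IO through mpi4py to let every rank write its slice of a dataset into one shared file with a collective call; the file name and sizes are arbitrary choices for illustration:

      # Minimal collective MPI-IO sketch (mpi4py): each rank writes its own
      # slice of one shared file at a non-overlapping byte offset.
      # Run with e.g.: mpiexec -n 4 python write_shared.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      n_local = 1024                              # doubles per rank (arbitrary)
      data = np.full(n_local, rank, dtype='d')    # this rank's slice

      fh = MPI.File.Open(comm, 'shared.dat',
                         MPI.MODE_WRONLY | MPI.MODE_CREATE)
      offset = rank * n_local * data.itemsize     # byte offset of the slice
      fh.Write_at_all(offset, data)               # collective write
      fh.Close()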

  11. High burnup issues and modelling strategies

    Dutta, B.K.

    2005-01-01

    The performance of high burnup fuel is affected by a number of phenomena, such as conductivity degradation, a modified radial flux profile, fission gas release from high burnup structures, PCMI, and burnup-dependent thermo-mechanical properties. Modelling strategies for some of these phenomena are available in the literature and can be readily incorporated in a fuel performance modelling code. The computer code FAIR has been developed in BARC over the years to evaluate fuel performance at extended burnup and to model fuel rods for advanced fuel cycles. The present paper deals with the high burnup issues in fuel pins, their modelling strategies, and the results of case studies specifically involving high burnup fuel. (author)
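
    As an illustration of the first phenomenon on that list, conductivity degradation is commonly modelled by adding a burnup-dependent term to the phonon-scattering resistivity. The sketch below uses a generic correlation of that form with illustrative coefficients; it is not the FAIR code's actual model:

      # Generic sketch of burnup-degraded fuel thermal conductivity:
      # k = 1 / (A + B*T + C*Bu), i.e. burnup adds phonon-scattering
      # resistivity. Coefficients are illustrative placeholders only.
      def fuel_conductivity(T, burnup):
          """T in K, burnup in MWd/kgU; returns k in W/(m K)."""
          A = 0.0375        # m K / W
          B = 2.165e-4      # m / W
          C = 0.003         # m K / W per MWd/kgU (assumed degradation term)
          return 1.0 / (A + B * T + C * burnup)

      for bu in (0, 30, 60):
          k = fuel_conductivity(1000.0, bu)
          print(f"Bu = {bu:3d} MWd/kgU -> k(1000 K) = {k:.2f} W/(m K)")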

  12. Photovoltaic array performance simulation models

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.
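
    One classic source of error in such programs is neglecting the effect of cell temperature on efficiency. A minimal flat-plate model that includes it might look like the following sketch, with generic textbook coefficients rather than the Sandia model's values:

      # Minimal PV array power sketch: irradiance scaling plus a linear
      # temperature-coefficient correction. Parameters are generic textbook
      # numbers, not the SNLA program described in the paper.
      def pv_power(g, t_amb, p_stc=5000.0, gamma=-0.004, noct=45.0):
          """g: plane-of-array irradiance [W/m^2]; t_amb: ambient temp [C];
          p_stc: rated power at 1000 W/m^2 and 25 C; gamma: power temperature
          coefficient [1/C]; noct: nominal operating cell temperature [C]."""
          t_cell = t_amb + (noct - 20.0) * g / 800.0   # standard NOCT estimate
          return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

      print(f"{pv_power(850.0, 30.0):.0f} W")  # hot, bright afternoon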

  13. Intelligent Facades for High Performance Green Buildings

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have developed a high-performance building-integrated combined heat and power concentrating photovoltaic system with high temperature thermal capture, storage and transport for multiple applications (BICPV/T). The critical contribution we are offering, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  14. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial rather than cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  15. Determinants of Students Academic Performance in Senior High ...

    A binary logit model is used to investigate the determinants of students' performance in the final high school examination. Questionnaires were administered to a sample of 1,129 final year students (614 boys and 515 girls) in ten senior high schools (SHSs) during the 2008/2009 academic year. Respondents were requested ...
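
    The abstract does not report the fitted coefficients, but the modelling step itself is compact. The sketch below fits a binary logit to simulated data with hypothetical predictors (study hours, attendance rate, sex) standing in for the survey variables:

      # Sketch of a binary logit model for exam pass/fail. Predictors and
      # data are hypothetical stand-ins for the questionnaire variables.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1129
      X = np.column_stack([
          rng.normal(3.0, 1.0, n),     # daily study hours
          rng.uniform(0.5, 1.0, n),    # attendance rate
          rng.integers(0, 2, n),       # sex (0/1)
      ])
      # Hypothetical true relationship used to simulate outcomes:
      logit = -4.0 + 0.8 * X[:, 0] + 3.0 * X[:, 1] + 0.2 * X[:, 2]
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      model = LogisticRegression().fit(X, y)
      print("coefficients:", model.coef_[0], "intercept:", model.intercept_[0])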

  16. Long-term bridge performance high priority bridge performance issues.

    2014-10-01

    Bridge performance is a multifaceted issue involving performance of materials and protective systems, : performance of individual components of the bridge, and performance of the structural system as a whole. The : Long-Term Bridge Performance (LTBP)...

  17. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  18. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

    High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  19. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  20. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductors-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  1. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  2. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to high performance school (HPS) headmasters as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  3. Validated high performance liquid chromatographic (HPLC) method ...

    2010-02-22

    ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... tional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κB Ligand ... The effect of ... be suitable for preclinical pharmacokinetic studies.

  4. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid ... response, tailing factor and resolution of six replicate injections was < 3 % ... Cefadroxil monohydrate, Human plasma, Pharmacokinetics, Bioequivalence ... Drug-free plasma was obtained from the local ... Influence of probenecid on the renal ...

  5. Project materials [Commercial High Performance Buildings Project

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  6. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide ceramic based materials, processes and products for application in the nuclear, chemical, automotive, defense and mining industries.

  7. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer is developed using a new technique based on the zero-flux detection principle. It is shown that the new current transducer offers high performance, that its magnetic core need not be selected very stringently, and that it is easy to manufacture.

  8. Maintenance Personnel Performance Simulation (MAPPS) model

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  9. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  10. Physical models for high burnup fuel

    Kanyukova, V.; Khoruzhii, O.; Likhanskii, V.; Solodovnikov, G.; Sorokin, A.

    2003-01-01

    In this paper some models of processes in high burnup fuel developed in the SRC of Russia Troitsk Institute for Innovation and Fusion Research (TRINITI) are presented. The emphasis is on the description of the degradation of the fuel heat conductivity, the radial profiles of burnup and plutonium accumulation, restructuring of the pellet rim, and mechanical pellet-cladding interaction. The results demonstrate that the behaviour of high burnup fuel can be described rather accurately by simplified models within a fuel performance code, provided the models are physically grounded. The development of such models requires detailed physical analysis to serve as a test for a correct choice of allowable simplifications. This approach was applied in the SRC of Russia TRINITI to develop a set of models for WWER fuel, resulting in highly reliable predictions in simulation of high burnup fuel.

  11. Optical Thermal Characterization Enables High-Performance Electronics Applications

    2016-02-01

    NREL developed a modeling and experimental strategy to characterize thermal performance of materials. The technique provides critical data on thermal properties with relevance for electronics packaging applications. Thermal contact resistance and bulk thermal conductivity were characterized for new high-performance materials such as thermoplastics, boron-nitride nanosheets, copper nanowires, and atomically bonded layers. The technique is an important tool for developing designs and materials that enable power electronics packaging with small footprint, high power density, and low cost for numerous applications.

  12. Cognitive performance modeling based on general systems performance theory.

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
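
    In GSPT's resource-economics view, a task is performable only if every basic performance resource capacity meets the task's demand, and NCRA estimates those demands from observed performer data. A toy rendering of the threshold logic, with invented resource names and numbers, is sketched below:

      # Toy sketch of GSPT-style resource economics: task success requires
      # every basic resource capacity to meet or exceed the task's demand.
      # Resource names and values are illustrative, not the study's data.
      demand = {"processing_speed": 6.5, "short_term_memory": 5.0}

      def can_perform(capacities, demand):
          """True if all capacities meet the demanded levels."""
          return all(capacities[r] >= d for r, d in demand.items())

      def workload(capacities, demand):
          """Single-number 'cognitive workload': worst demand/capacity ratio."""
          return max(d / capacities[r] for r, d in demand.items())

      subject = {"processing_speed": 8.0, "short_term_memory": 5.5}
      print(can_perform(subject, demand), f"CW = {workload(subject, demand):.2f}")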

  13. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  14. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides toward establishing a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  15. NINJA: Java for High Performance Numerical Computing

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  16. Assembly line performance and modeling

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-09-01

    The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this thesis, a methodology is proposed to reduce cycle time and the time lost to important factors like equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection and fatigue, so as to improve output within given cost constraints. Various relationships between these factors, the corresponding cost and output are established by a scientific approach. This methodology is validated in three different vehicle assembly plants. The proposed methodology may help practitioners to optimize the assembly line using lean techniques.
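
    The loss factors listed in the abstract map naturally onto an availability-style throughput calculation. A hypothetical rendering, with invented factor values, is sketched below:

      # Toy throughput sketch for an assembly line: planned time is eroded
      # by multiplicative loss factors, then divided by the bottleneck cycle
      # time. All numbers are hypothetical illustrations of the factors.
      shift_seconds = 8 * 3600
      cycle_time = 75.0                  # bottleneck cycle time [s/vehicle]
      losses = {                         # fraction of time lost per factor
          "equipment_failure": 0.05,
          "inventory_shortage": 0.03,
          "absenteeism": 0.02,
          "setup_and_handling": 0.04,
          "rejection_and_fatigue": 0.03,
      }

      effective = shift_seconds
      for name, frac in losses.items():
          effective *= (1.0 - frac)      # each factor removes a slice of time

      print(f"vehicles per shift: {effective / cycle_time:.0f}")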

  17. High performance computations using dynamical nucleation theory

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
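
    The proposed master-slave layout can be illustrated generically: a master distributes independent Monte Carlo work units and aggregates the results. The sketch below uses Python's multiprocessing with a stand-in sampling task rather than NWChem internals:

      # Generic master-worker Monte Carlo sketch (a stand-in for the DNTMC
      # layout, not NWChem code): workers evaluate independent sampling
      # chunks, the master aggregates. Estimates pi as a placeholder task.
      import random
      from multiprocessing import Pool

      def chunk(seed, n=100_000):
          rng = random.Random(seed)
          hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                     for _ in range(n))
          return hits, n

      if __name__ == "__main__":
          with Pool(processes=4) as pool:           # the "slaves"
              results = pool.map(chunk, range(16))  # master distributes work
          hits = sum(h for h, _ in results)
          total = sum(n for _, n in results)
          print(f"pi ~= {4.0 * hits / total:.4f}")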

  18. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  19. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try making the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  20. A PRACTICAL MODEL OF LOW-VOLUME HIGH-INTENSITY INTERVAL TRAINING INDUCES PERFORMANCE AND METABOLIC ADAPTATIONS THAT RESEMBLE 'ALL-OUT' SPRINT INTERVAL TRAINING

    Mahdi Bayati

    2011-09-01

    Recently, a novel type of high-intensity interval training known as sprint interval training has demonstrated increases in aerobic and anaerobic performance with a very low time commitment. However, this type of training program is impractical for general populations. The present study compared the impact of a low-volume high-intensity interval training program to 'all-out' sprint interval training. Twenty-four active young males were recruited and randomized into three groups: G1 (3-5 cycling bouts × 30 s all-out with 4 min recovery), G2 (6-10 cycling bouts × 125% Pmax with 2 min recovery) and a non-trained control group (CON). They all performed a VO2max test, a time to exhaustion at Pmax (Tmax) and a Wingate test before and after the intervention. Capillary blood lactate was taken at rest and at 3 and 20 min after the Wingate trial. Training was performed 3 sessions per week for 4 weeks. In G1, significant improvements (p < 0.05) following training were found in VO2max (9.6%), power at VO2max (12.8%), Tmax (48.4%), peak power output (10.3%) and mean power output (17.1%). In G2, significant improvements following training were found in VO2max (9.7%), power at VO2max (16.1%), Tmax (54.2%) and peak power output (7.4%; p < 0.05), but mean power output did not change significantly. Blood lactate recovery (20th min) significantly decreased in G1 and G2 when compared with pre-testing and the CON group (p < 0.05). In conclusion, the results of the current study agree with earlier work demonstrating the effectiveness of the 30-s all-out training program for aerobic and anaerobic adaptations. Of substantial interest is that the low-volume high-intensity training provides similar results while involving only half the intensity with double the repetitions.

  1. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve the zero energy building goal by 2025. The objective of this project was to develop high performance, energy efficient commercial

  2. Fracture toughness of ultra high performance concrete by flexural performance

    Manolova Emanuela

    2016-01-01

    This paper describes the fracture toughness of an innovative structural material, Ultra High Performance Concrete (UHPC), evaluated by flexural performance. To determine the material behaviour under static loading, adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018) are used. Fracture toughness is estimated via various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading using a servo-controlled testing system. This method is used to estimate the contribution of the embedded fiber reinforcement to the improvement of the fracture behaviour of UHPC, which changes the crack-resistant capacity, fracture toughness and energy absorption capacity through various mechanisms. The position of the first crack has been determined from the P-δ (load-deflection) response and the P-ε (load-longitudinal deformation in the tensile zone) response, which are used for calculation of the two toughness indices I5 and I10. The combination of steel fibres with different dimensions leads to a composite having, at the same time, increased crack resistance, first crack formation, ductility and post-peak residual strength.
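
    Following the ASTM C1018 definitions, the toughness indices are ratios of areas under the load-deflection curve up to multiples of the first-crack deflection. The sketch below computes I5 and I10 for a synthetic curve; the curve shape and first-crack deflection are invented for illustration:

      # Sketch of ASTM C1018-style toughness indices: I5 and I10 are areas
      # under the load-deflection curve up to 3x and 5.5x the first-crack
      # deflection, normalized by the area up to first crack. Synthetic data.
      import numpy as np

      delta = np.linspace(0.0, 2.0, 2001)          # deflection [mm]
      d_crack = 0.3                                # assumed first-crack deflection
      load = np.where(delta <= d_crack,
                      40.0 * delta / d_crack,      # linear up to first crack [kN]
                      40.0 * np.exp(-(delta - d_crack)))  # post-crack softening

      def area_to(d_limit):
          m = delta <= d_limit
          return np.trapz(load[m], delta[m])

      a1 = area_to(d_crack)
      print(f"I5  = {area_to(3.0 * d_crack) / a1:.2f}")
      print(f"I10 = {area_to(5.5 * d_crack) / a1:.2f}")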

  3. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
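
    WOMBAT's implementation is not shown in the abstract, but the MPI-RMA primitive it leans on, a one-sided Put into a remote memory window with no matching receive on the target, can be sketched with mpi4py (this is illustrative, not WOMBAT code):

      # Minimal one-sided MPI-RMA sketch (mpi4py).
      # Run with e.g.: mpiexec -n 2 python rma_put.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.zeros(4, dtype='d')              # memory exposed to peers
      win = MPI.Win.Create(local, comm=comm)

      win.Fence()                                 # open access epoch
      if rank == 0 and size > 1:
          payload = np.arange(4, dtype='d')
          win.Put(payload, target_rank=1)         # rank 1 posts no receive
      win.Fence()                                 # close epoch; data visible

      if rank == 1:
          print("rank 1 window now holds:", local)
      win.Free()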

  4. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    2014-01-01

    The invention describes a new class of highly stable mixed conducting materials based on acceptor doped cerium oxide (CeO2-δ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co-dopant, such as Nb, W and Zn, so that cerium and the co-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high temperature membrane reactor applications. The invention also relates to the manufacturing of supported

  5. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

    This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  6. Optimizing the design of very high power, high performance converters

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  7. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free ABMs on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO, the ABM achieved a water permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90

  8. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  9. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques, with a focus on technical aspects and the feasibility of auditing issues in federated cloud computing environments. In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  10. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  11. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop system is under construction.

  12. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  13. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net ...

  14. DURIP: High Performance Computing in Biomathematics Applications

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  15. High Performance Computing Operations Review Report

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  16. Planning for high performance project teams

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  17. Model for conductometric detection of carbohydrates and alcohols as complexes with boric acid and borate ion in high-performance liquid chromatography

    Bertrand, G.L.; Armstrong, D.W.

    1989-01-01

    In recent articles, Okada has demonstrated the utility of indirect conductometric detection of electrically neutral sugars and alcohols through their complexes in boric acid solution. The use of a boric acid eluent provides a highly sensitive means of detection for monosaccharides, lactose, and sugar alcohols but not for polysaccharides (other than lactose) and simple alcohols. Addition of sorbitol, mannitol, or fructose to the boric acid eluent allows detection of the polysaccharides and simple alcohols, as well as lactose, glucose, fructose, and presumably other monosaccharides and sugar alcohols. These results were interpreted in terms of the ability of an analyte to form either dissociated or undissociated complexes with boric acid. This interpretation was quantified with a mathematical description of the complexation equilibria and the conductivity due to ionic species. Unfortunately, the mathematical model contains some incorrect assumptions that severely limit the utility of the derived equations and may prevent optimization of this potentially important technique. We present here a more general mathematical model that does not suffer from these limitations
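
    In schematic form, for an analyte L forming a 1:1 anionic complex with borate, the coupled relations such a model must solve simultaneously are (the stoichiometry is shown generically; actual borate ester formation may differ):

      \mathrm{B(OH)_3} + \mathrm{H_2O} \rightleftharpoons \mathrm{B(OH)_4^-} + \mathrm{H^+},
      \qquad K_a = \frac{[\mathrm{B(OH)_4^-}]\,[\mathrm{H^+}]}{[\mathrm{B(OH)_3}]}

      \mathrm{L} + \mathrm{B(OH)_4^-} \rightleftharpoons \mathrm{LB(OH)_2^-} + 2\,\mathrm{H_2O},
      \qquad K_1 = \frac{[\mathrm{LB(OH)_2^-}]}{[\mathrm{L}]\,[\mathrm{B(OH)_4^-}]}

      \kappa = \sum_i \lambda_i\, c_i

    Detection is then indirect: complexation shifts the borate speciation, and the measured conductivity κ changes through the differing equivalent ionic conductances λ_i of the species present.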

  18. High-performance commercial building systems

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, "High Performance Commercial Building Systems" (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  19. High performance separation of lanthanides and actinides

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid, high performance separations. It is evident from the Van Deemter curve of plate height versus linear velocity that packing materials with particle sizes less than 2 μm provide better resolution for high speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, monolith-based chromatographic support materials have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. Monolith supports provide significantly higher separation efficiency than particle-packed columns. A clear advantage of monoliths is that they can be operated at higher flow rates but with lower back pressure. Higher column permeability allows operation at higher flow rates, which drastically reduces analysis time while providing high separation efficiency. The fast separation methods developed above were applied to assay lanthanides and actinides in dissolver solutions of nuclear reactor fuels.
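
    The curve referred to is the Van Deemter relation between plate height H and mobile-phase linear velocity u,

      H = A + \frac{B}{u} + C\,u,

    where the eddy-diffusion term A scales roughly with the particle diameter d_p and the mass-transfer term C with d_p^2, so smaller particles (or the thin skeleton of a monolith) flatten the curve and preserve efficiency at high flow rates.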

  1. Work domain constraints for modelling surgical performance.

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less well known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of fit of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure fit the model significantly, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.
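
    The degree-of-fit step is, in essence, a polynomial regression of observed progression through the model's ordered anatomical structures against elapsed time. The sketch below performs that fit on synthetic data and reports R^2:

      # Sketch of the model-fit step: regress a surgery's observed progression
      # through an ordered list of anatomical structures against elapsed time
      # with a polynomial, and report R^2. Data below are synthetic.
      import numpy as np

      t = np.linspace(0.0, 1.0, 40)                 # normalized surgery time
      ideal = 10.0 * t ** 2                         # model's optimal progression
      observed = ideal + np.random.default_rng(1).normal(0.0, 0.4, t.size)

      coeffs = np.polyfit(t, observed, deg=2)       # fitted polynomial
      pred = np.polyval(coeffs, t)
      ss_res = np.sum((observed - pred) ** 2)
      ss_tot = np.sum((observed - observed.mean()) ** 2)
      print(f"R^2 = {1.0 - ss_res / ss_tot:.3f}")   # ~0.9+ indicates good fit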

  2. High Performance OLED Panel and Luminaire

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  3. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try making the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  4. Constrained bayesian inference of project performance models

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...
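
    As a rough illustration of constrained Bayesian updating in this setting, the toy below computes a grid posterior over project duration and imposes a hard deadline constraint; the prior, likelihood, and numbers are invented for the example and do not reproduce the thesis's methodology.

    ```python
    # Toy grid-based Bayesian update of a project-duration estimate (hypothetical).
    import numpy as np

    durations = np.linspace(1, 24, 200)                   # candidate durations (months)
    prior = np.exp(-0.5 * ((durations - 12) / 4) ** 2)    # prior belief: about 12 months

    implied = 6 / 0.40                                    # 40% done at month 6 -> ~15 months
    likelihood = np.exp(-0.5 * ((durations - implied) / 3) ** 2)

    posterior = prior * likelihood
    posterior[durations > 18] = 0.0                       # constraint: hard 18-month deadline
    posterior /= posterior.sum()

    print("posterior mean duration:", round(float((durations * posterior).sum()), 2))
    ```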

  5. Information processing among high-performance managers

    S.C. Garcia-Santos

    2010-01-01

    The purpose of this study was to evaluate the information processing of 43 business managers with superior professional performance. The theoretical framework considers three models: Henry Mintzberg's Theory of Managerial Roles, the Theory of Information Processing, and John Exner's Rorschach Response Process Model. The participants were evaluated by the Rorschach method. The results show that these managers are able to collect data, evaluate them, and establish rankings properly. At the same time, they are capable of being objective and accurate in assessing problems. This information-processing style permits an interpretation of the surrounding world on the basis of a very personal and characteristic processing mode, or cognitive style.

  6. High temperature performance of polymer composites

    Keller, Thomas

    2014-01-01

    The authors explain the changes in the thermophysical and thermomechanical properties of polymer composites under elevated temperatures and fire conditions. Using microscale physical and chemical concepts, they enable researchers to find reliable solutions to their engineering needs on the macroscale. In a unique combination of experimental results and quantitative models, a framework is developed to realistically predict the behavior of a variety of polymer composite materials over a wide range of thermal and mechanical loads. In addition, the authors treat extreme fire scenarios of up to more than 1000°C for two hours, presenting heat-protection methods that improve the fire resistance of composite materials and full-scale structural members, and discussing their performance after fire exposure. Thanks to the microscopic approach, the developed models are valid for a variety of polymer composites and structural members, making this work applicable to a wide audience, including materials scientists, polymer chemist...

  7. A High Performance COTS Based Computer Architecture

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  8. Management issues for high performance storage systems

    Louis, S. [Lawrence Livermore National Lab., CA (United States)]; Burris, R. [Oak Ridge National Lab., TN (United States)]

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  9. Enhanced fluorescence sensitivity by coupling yttrium-analyte complexes and three-way fast high-performance liquid chromatography data modeling

    Alcaraz, Mirta R.; Culzoni, María J., E-mail: mculzoni@fbcb.unl.edu.ar; Goicoechea, Héctor C., E-mail: hgoico@fbcb.unl.edu.ar

    2016-01-01

    The present study reports a sensitive chromatographic method for the analysis of seven fluoroquinolones (FQs) in environmental water samples, coupling yttrium-analyte complexation with three-way chromatographic data modeling. The method, based on HPLC-FSFD, requires neither complex or tedious sample treatment nor enrichment before analysis, owing to the significant fluorescence enhancement of the analytes in the presence of Y³⁺. The enhancement of the FQ signals after Y³⁺ addition ranges from 103- to 1743-fold. Prediction results from applying MCR-ALS to the validation set showed relative error of prediction (REP%) values below 10% in all cases. A recovery study including the simultaneous determination of the seven FQs in three different environmental aqueous matrices was conducted; it confirms the efficiency and accuracy of the proposed method. The calculated LOD values are on the order of parts per trillion (below 0.5 ng mL⁻¹ for all FQs except enoxacin). Notably, the proposed method, which includes no pre-concentration steps, reaches LOD values of the same order of magnitude as those achieved by more sophisticated methods based on SPE and UHPLC-MS/MS.
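
    For reference, the REP% figure quoted above is conventionally computed from predicted and actual concentrations; one common chemometric definition (the paper may use a variant) is:

    ```latex
    \[
    \mathrm{REP}\% \;=\; \frac{100}{\bar{c}}
    \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(c_{\mathrm{pred},i}-c_{\mathrm{act},i}\bigr)^{2}}
    \]
    ```

    Here c̄ is the mean analyte concentration of the calibration set and N is the number of validation samples.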

  10. Transmutation Fuel Performance Code Thermal Model Verification

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, the code input, and the calculation results.
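
    For orientation, the kind of temperature calculation being verified can be illustrated with the textbook radial-conduction result for a cylindrical fuel pellet with uniform heat generation and constant conductivity; this is a generic formula with invented numbers, not FRAPCON's actual model.

    ```python
    # Hedged sketch: centerline temperature of a cylindrical fuel pellet,
    # T_center = T_surface + q' / (4 * pi * k), assuming uniform heat generation
    # and constant thermal conductivity (illustrative values only).
    import math

    def pellet_centerline_temp(q_lin, k_fuel, t_surface):
        """q_lin: linear heat rate (W/m); k_fuel: conductivity (W/m-K); t_surface: K."""
        return t_surface + q_lin / (4.0 * math.pi * k_fuel)

    print(pellet_centerline_temp(q_lin=20e3, k_fuel=3.0, t_surface=700.0))  # ~1230 K
    ```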

  11. Automatic Energy Schemes for High Performance Applications

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)]

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns on the approach to exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption can be controlled in software. Additionally, network interconnects such as InfiniBand may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system that combines both collective and point-to-point communications into phases and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic-structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
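
    The per-call idea can be sketched as follows: step the core frequency down before a blocking collective, during which the CPU mostly waits on the network, and restore it afterwards. The sketch assumes a Linux cpufreq interface with the userspace governor and sufficient privileges, and uses mpi4py for the collective; it illustrates the strategy, not the dissertation's runtime system.

    ```python
    # Hedged sketch of per-call DVFS around a communication phase (not the
    # dissertation's runtime). Assumes Linux cpufreq with the "userspace"
    # governor and permission to write scaling_setspeed.
    from pathlib import Path
    from mpi4py import MPI

    def set_frequency(khz, cpu=0):
        path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed")
        path.write_text(str(khz))

    comm = MPI.COMM_WORLD
    payload = [b"x" * 1024] * comm.size       # one message per rank

    set_frequency(1_200_000)                  # step down: CPU stalls during all-to-all
    received = comm.alltoall(payload)
    set_frequency(2_400_000)                  # step back up for the compute phase
    assert len(received) == comm.size
    ```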

  12. High-performance computing in seismology

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  13. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  14. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost.

  15. High performance computing in linear control

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and the applications of all major areas of control. The theory is rich and very sophisticated, and some beautiful applications of control theory are currently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, and other fields. Unfortunately, the same assessment does not hold in general for computations in control theory; control theory lags behind other areas of science and engineering in this respect. Nowadays a revolution is under way in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years, offering very high computational speed. Highly efficient software, based on powerful algorithms, has been developed for these advanced computers and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers have unfortunately not been able to take much advantage of them.

  16. Model performance analysis and model validation in logistic regression

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different model validation techniques. Next, we define a number of properties required for a model to be considered "good", along with a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
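
    A minimal sketch of such quantitative performance measures, using holdout validation on synthetic data (the paper's own measures and data are not reproduced here):

    ```python
    # Hedged sketch: holdout performance measures for a logistic regression model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                              # synthetic predictors
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)

    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```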

  17. Intel Xeon Phi coprocessor high performance programming

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  18. Development of high-performance blended cements

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early-age strength of blended cements, several chemicals were studied as activators for cement hydration; sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements, containing up to 80% fly ash, performed better than Type I cement in strength development and durability. At a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between the added SO₄²⁻ and the Ca²⁺ dissolved from cement hydration products.

  19. The COD Model: Simulating Workgroup Performance

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. To fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from new in organization science, but it now has the strength of an algorithmic demonstration rather than resting on the authority of a scholar or on episodic empirical findings. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, that is, teamwork.
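
    The complexity ordering can be made concrete with a toy count of the coordination links each interdependence mode implies for n agents (a back-of-the-envelope illustration, not the COD model itself):

    ```python
    # Toy illustration: coordination links implied by three interdependence modes.
    def coordination_links(n, mode):
        if mode == "parallel":       # agents work independently
            return 0
        if mode == "sequential":     # each agent feeds the next
            return n - 1
        if mode == "reciprocal":     # every ordered pair mutually adjusts
            return n * (n - 1)
        raise ValueError(mode)

    for mode in ("parallel", "sequential", "reciprocal"):
        print(mode, coordination_links(8, mode))   # 0, 7, 56 -> increasing complexity
    ```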

  20. High Performance Computing Software Applications for Space Situational Awareness

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  1. Development and Performance of a Highly Sensitive Model Formulation Based on Torasemide to Enhance Hot-Melt Extrusion Process Understanding and Process Development.

    Evans, Rachel C; Kyeremateng, Samuel O; Asmus, Lutz; Degenhardt, Matthias; Rosenberg, Joerg; Wagner, Karl G

    2018-02-27

    The aim of this work was to investigate the use of torasemide as a highly sensitive indicator substance and to develop a formulation thereof for establishing quantitative relationships between hot-melt extrusion process conditions and critical quality attributes (CQAs). Using solid-state characterization techniques and a 10 mm lab-scale co-rotating twin-screw extruder, we studied torasemide in a Soluplus® (SOL)-polyethylene glycol 1500 (PEG 1500) matrix, and developed and characterized a formulation which was used as a process indicator to study thermal- and hydrolysis-induced degradation, as well as residual crystallinity. We found that torasemide first dissolved into the matrix and then degraded. Based on this mechanism, extrudates with measurable levels of degradation and residual crystallinity were produced, depending strongly on the main barrel and die temperature and residence time applied. In addition, we found that 10% w/w PEG 1500 as plasticizer resulted in the widest operating space with the widest range of measurable residual crystallinity and degradant levels. Torasemide as an indicator substance behaves like a challenging-to-process API, only with higher sensitivity and more pronounced effects, e.g., degradation and residual crystallinity. Application of a model formulation containing torasemide will enhance the understanding of the dynamic environment inside an extruder and elucidate the cumulative thermal and hydrolysis effects of the extrusion process. The use of such a formulation will also facilitate rational process development and scaling by establishing clear links between process conditions and CQAs.

  2. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution-time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples containing three fluoroquinolones and uncalibrated interferences, was evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant run-to-run changes in the chromatographic profiles. The relative errors of prediction obtained were ca. 10% for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit compared with previous reports. The results are compared with those furnished by MCR-ALS.
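
    The trilinear decomposition at the heart of PARAFAC can be sketched with the tensorly library on a synthetic elution × excitation × emission array; this illustrates plain PARAFAC only, not the augmented variant or the paper's data.

    ```python
    # Hedged sketch: PARAFAC on a synthetic three-way array (tensorly assumed).
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(1)
    # Rank-3 synthetic data: elution (40) x excitation (20) x emission (30).
    true_factors = [np.abs(rng.normal(size=(dim, 3))) for dim in (40, 20, 30)]
    data = tl.cp_to_tensor((np.ones(3), true_factors))

    weights, factors = parafac(tl.tensor(data), rank=3)
    print([f.shape for f in factors])   # [(40, 3), (20, 3), (30, 3)]
    ```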

  3. An integrated high performance fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  4. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display is of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize HENP detectors with maximal performance, we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicts with the many graphics development groups associated with specific detectors such as STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  5. High-Performance Vertical Organic Electrochemical Transistors.

    Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G

    2018-02-01

    Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting transistor performance and resulting in low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, an increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m^-1. The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as implantable devices.

  6. High Performance Data Distribution for Scientific Community

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA, and JAXA need solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy that helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several data sources can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the above features. HIDDRA was cited by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009

  7. Photovoltaic performance models - A report card

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance, have proliferated while keeping pace with rapid changes in basic PV technology and extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  8. Quantum Accelerators for High-Performance Computing Systems

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  9. High-performance laboratories and cleanrooms

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. The roadmap's key objective is to present a multi-year agenda for prioritizing and coordinating research efforts; it also addresses delivery mechanisms to get the research products into the market. Because of their importance to the California economy, it is appropriate for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition, energy demand in this market segment is large and growing (estimated at 9400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations (primarily safety-driven) that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions operate cleanrooms in California, including semiconductor manufacturing, semiconductor suppliers, pharmaceuticals, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  10. High-performance computing for airborne applications

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. A number of commercially available computing devices are well suited to high-performance computing, ranging from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices can be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment, including a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we find that all of these devices are suitable for many airplane environments without reliability problems.

  11. High-performance vertical organic transistors.

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing remarkable transistor operation limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors.

  12. Performance of the CMS High Level Trigger

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  13. Low cost high performance uncertainty quantification

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turn to stochastic estimation of the diagonal. This allows us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we develop a novel, mixed-precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much-needed quadratic cost but also offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance, achieving a peak performance of 730 TFlops on 72 BG/P racks with a sustained performance of 73% of the theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications.
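
    The two ingredients, stochastic diagonal estimation and iterative solution of the resulting linear systems, can be sketched as follows; this is an illustration with a dense SPD test matrix and SciPy's conjugate-gradient solver, not the paper's mixed-precision implementation.

    ```python
    # Hedged sketch: estimate diag(A^{-1}) with Rademacher probes, solving
    # A x = z iteratively instead of factorizing A (illustrative only).
    import numpy as np
    from scipy.sparse.linalg import cg

    def estimate_inv_diagonal(A, n_probes=64, seed=0):
        rng = np.random.default_rng(seed)
        num = np.zeros(A.shape[0])
        den = np.zeros(A.shape[0])
        for _ in range(n_probes):
            z = rng.choice([-1.0, 1.0], size=A.shape[0])   # Rademacher probe
            x, _ = cg(A, z)                                # iterative solve of A x = z
            num += z * x
            den += z * z
        return num / den                                   # elementwise estimator

    n = 200
    M = np.random.default_rng(1).normal(size=(n, n))
    A = M @ M.T + n * np.eye(n)                            # SPD test matrix
    err = np.abs(estimate_inv_diagonal(A) - np.diag(np.linalg.inv(A))).max()
    print(f"max abs error: {err:.2e}")
    ```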

  14. Energy Efficient Graphene Based High Performance Capacitors.

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nanostructured electronic materials for various cutting-edge applications. To date, extensive research has been performed on the diverse properties of GRP, and incorporating this elegant material can be very attractive for practical applications in energy storage and conversion systems. Among these systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. This article therefore succinctly describes the application of GRP to capacitors and summarizes previous research activities on GRP-based capacitors. That work shows that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance, and that diverse devices have been combined with capacitors for better use. Recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study.

  15. Ultra high performance concrete dematerialization study

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-products such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart™ Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving cost, performance and constructability. This paper describes the use of Ductal® Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examines the relationship between the selection of UHPC and the overall environmental performance, cost, constructability, maintenance and operational efficiency as they relate to the EcoSmart Program. The advantages and challenges of using UHPC are outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to reduced material requirements, reduced transport costs and increased SCM content.

  16. JT-60U high performance regimes

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented, with emphasis on results obtained using a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi-steady states has been significantly improved in the reversed shear and high-β_p regimes. The reversed shear regime transiently elevated the equivalent fusion gain Q_DT^eq up to 1.25 (n_D(0)·τ_E·T_i(0) = 8.6×10^20 m^-3·s·keV) in a reactor-relevant, thermonuclear-dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) under fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high-β_p regime with high triangularity, achieving long sustainment of plasma conditions equivalent to Q_DT^eq ~ 0.16 (n_D(0)·τ_E·T_i(0) ~ 1.4×10^20 m^-3·s·keV) for ~4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show a significant reduction of thermal and particle diffusivities around the ITB, resulting in a strong E_r shear in the ITB region. The W-shaped divertor is effective for He ash exhaust, demonstrating steady exhaust capability of τ_He*/τ_E ~ 3-10 in support of ITER. Suppression of neutral back-flow and of the chemical sputtering effect has been observed, while the MARFE onset density is somewhat decreased. Negative-ion-based neutral beam injection (N-NBI) experiments have produced a clear H-mode transition. Enhanced ionization cross-sections due to multi-step ionization processes were confirmed, as theoretically predicted. The current density profile driven by N-NBI was measured and found in good agreement with theoretical prediction. N-NBI-induced TAE modes, characterized as persistent and bursting oscillations, have been observed from a low hot-ion beta of β_h ~ 0.1-0.2% without significant loss of fast ions. (author)

  17. Performance of different radiotherapy workload models

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models, with a mean |observed - predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would have uses for patient scheduling, and describing workloads and case complexity

  18. Statistical and Machine Learning Models to Predict Programming Performance

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  19. Development of high performance ODS alloys

    Shao, Lin [Texas A & M Univ., College Station, TX (United States)]; Gao, Fei [Univ. of Michigan, Ann Arbor, MI (United States)]; Garner, Frank [Texas A & M Univ., College Station, TX (United States)]

    2018-01-29

    This project aims to capitalize on insights developed from recent high-dose self-ion irradiation experiments in order to develop and test the next generation of optimized ODS alloys needed to meet the nuclear community's need for high-strength, radiation-tolerant cladding and core components, especially with enhanced resistance to void swelling. Two of these insights are that ferrite grains swell earlier than tempered martensite grains, and that the oxide dispersions currently produced only in ferrite grains require a high level of uniformity and stability to be successful. An additional insight is that ODS particle stability depends on as-yet unidentified compositional combinations of dispersoid and alloy matrix: dispersoids are stable in MA957 to doses greater than 200 dpa but dissolve in MA956 at doses less than 200 dpa. These findings focus attention on candidate next-generation alloys that address these concerns. Collaboration with two Japanese groups provides this project with two sets of first-round candidate alloys that have already undergone extensive development and testing of their unirradiated properties but have not yet been evaluated for irradiation performance. The first set of candidate alloys are dual-phase (ferrite + martensite) ODS alloys with oxide particles uniformly distributed in both the ferrite and martensite phases. The second set are ODS alloys containing non-standard dispersoid compositions with controllable oxide particle sizes, phases and interfaces.

  20. Low-Cost High-Performance MRI

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1M per tesla of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm³ imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that such practical ultra-low magnetic field implementations of MRI can set new standards for affordable (<$50,000) and robust portable devices.

  1. Iowa calibration of MEPDG performance prediction models.

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  2. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States)]; Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)]

    2015-04-15

    To improve the efficiency of resource utilization and the scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks, as well as in sharing resources with predictable performance for scientific collaborations, especially in data-intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources through active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.
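
    As a toy stand-in for such a predictor, the sketch below forecasts the next transfer's throughput from a hypothetical history with an exponentially weighted moving average; the project's actual estimation model is not reproduced here.

    ```python
    # Toy throughput predictor from historical log entries (hypothetical data).
    def ewma_forecast(samples, alpha=0.3):
        estimate = samples[0]
        for s in samples[1:]:
            estimate = alpha * s + (1 - alpha) * estimate  # weight recent samples more
        return estimate

    history_mbps = [812, 790, 845, 760, 805, 830]          # past transfer throughputs
    pred = ewma_forecast(history_mbps)
    print(f"predicted throughput: {pred:.1f} Mb/s")
    print(f"predicted time for 10 GB: {10 * 8000 / pred:.1f} s")   # 10 GB = 80,000 Mb
    ```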

  3. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  4. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. A high performance totally ordered multicast protocol

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed, so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per-packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid to implement total ordering; RMP shows otherwise. On SparcStation 10s on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput with 3.1 ms packet latency, and the performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
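
    The delivery guarantee itself is easy to picture with a toy central-sequencer scheme: every message gets a global sequence number, and each receiver delivers strictly in stamp order, so all receivers see the same total order. RMP achieves this with a rotating-token design rather than a fixed sequencer; the sketch below only illustrates the guarantee.

    ```python
    # Toy totally ordered delivery via a central sequencer (not RMP's token scheme).
    import heapq
    import itertools

    class Sequencer:
        def __init__(self):
            self._counter = itertools.count()
        def stamp(self, msg):
            return (next(self._counter), msg)      # attach a global sequence number

    class Receiver:
        def __init__(self):
            self._pending, self._next, self.delivered = [], 0, []
        def receive(self, stamped):
            heapq.heappush(self._pending, stamped)
            while self._pending and self._pending[0][0] == self._next:
                self.delivered.append(heapq.heappop(self._pending)[1])
                self._next += 1                    # deliver strictly in stamp order

    seq = Sequencer()
    r1, r2 = Receiver(), Receiver()
    msgs = [seq.stamp(m) for m in ("a", "b", "c")]
    for m in (msgs[2], msgs[0], msgs[1]):          # out-of-order arrival at r1
        r1.receive(m)
    for m in msgs:                                 # in-order arrival at r2
        r2.receive(m)
    assert r1.delivered == r2.delivered == ["a", "b", "c"]   # identical total order
    ```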

  6. High Performance, Three-Dimensional Bilateral Filtering

    Bethel, E. Wes

    2008-01-01

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and a quad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures.
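
    For concreteness, a brute-force (unoptimized, serial) 3D bilateral filter in the Tomasi-style formulation the paper extends to volumes might look like the following; parameters and data are illustrative, and this is not the authors' parallel implementation.

    ```python
    # Hedged sketch: brute-force 3D bilateral filter on a volume (illustrative).
    import numpy as np

    def bilateral3d(vol, radius=2, sigma_s=1.5, sigma_r=0.1):
        pad = np.pad(vol, radius, mode="edge")
        out = np.empty_like(vol)
        offs = np.arange(-radius, radius + 1)
        zz, yy, xx = np.meshgrid(offs, offs, offs, indexing="ij")
        w_spatial = np.exp(-(zz**2 + yy**2 + xx**2) / (2 * sigma_s**2))  # domain weights
        for z, y, x in np.ndindex(vol.shape):
            win = pad[z:z + 2*radius + 1, y:y + 2*radius + 1, x:x + 2*radius + 1]
            w_range = np.exp(-((win - vol[z, y, x])**2) / (2 * sigma_r**2))  # range weights
            w = w_spatial * w_range
            out[z, y, x] = (w * win).sum() / w.sum()
        return out

    noisy = np.random.default_rng(0).random((16, 16, 16))
    print(bilateral3d(noisy).shape)   # (16, 16, 16)
    ```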

  8. Thermal interface pastes nanostructured for high performance

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink in a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 µm roughness), with a bond line thickness less than 1 µm, as enabled by a low storage modulus G', a low loss modulus G'' and a high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of a carbon black is an important parameter governing its effectiveness for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste; at the same pressure, GNP gives a higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited by its high bond line thickness.
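
    The bond-line versus conductivity tradeoff described above follows from the conduction term of the specific thermal resistance of the joint, R = BLT/k, to which contact resistances add. A back-of-envelope sketch with assumed, illustrative numbers (not measurements from the dissertation):

        # Conduction part of the specific thermal resistance: R = BLT / k, in m^2*K/W.
        # The material values below are illustrative assumptions only.
        def interface_resistance(blt_m, k_w_per_mk):
            return blt_m / k_w_per_mk

        nanoclay = interface_resistance(1e-6, 0.5)       # ~1 um bond line, low k
        carbon_black = interface_resistance(10e-6, 2.0)  # thicker bond line, higher k
        print(f"nanoclay: {nanoclay:.1e} m^2K/W, carbon black: {carbon_black:.1e} m^2K/W")
        # Despite 4x lower conductivity, the thinner bond line wins on smooth surfaces.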

  9. High performance liquid chromatography in pharmaceutical analyses

    Branko Nikolin

    2004-05-01

    Full Text Available In pre-marketing testing of drugs and in their quality control over the last ten years, high performance liquid chromatography (HPLC) has replaced numerous spectroscopic methods and gas chromatography for quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The wide choice of stationary phases is a further factor enabling good separation. The separation column is connected to specific and sensitive detector systems (spectrofluorimeter, diode-array detector, electrochemical detector) and hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of HPLC analysis of any drug is to confirm its identity, provide quantitative results, and monitor the progress of therapy. Figure 1 shows a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during pre-registration investigation of drugs. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but most common uses of high performance liquid chromatography. Blood, plasma or

  10. Quantum Accelerators for High-performance Computing Systems

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  11. Integrating advanced facades into high performance buildings

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  12. The need for high performance breeder reactors

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can be easily demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that nuclear power might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to keep pace with demand; the bottleneck will essentially be related to the rate at which the ore can be discovered and extracted, and not to the existing quantities nor their grade. Figures as high as 150,000 t/annum and more would be quickly reached, and it is necessary to ask already now whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is the reason why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor (GBR) can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, starting from its predecessors in the gas-cooled reactor line, particularly the AGR. It shows how the GBR fuel benefits greatly from the LMFBR fuel irradiation experience. It compares GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors respectively. The conclusion, based on a cost-benefit study, is that the GBR must be quickly developed in order

  13. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. Nanocomposite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, this study establishes a systematic plan for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author)

  14. How to create high-performing teams.

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading", with reference to Geoff Smart, and "getting the right people on the bus", referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be.

  17. High Performance with Prescriptive Optimization and Debugging

    Jensen, Nicklas Bo

    Automatic parallelization and automatic vectorization are attractive as they transparently optimize programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback enlightens the programmer as to why a given optimization was not applied, and suggests how to change the source code to make it more amenable to optimization. We show how this can yield significant speedups and achieve 2.4x faster execution on a real industrial use case. To aid in parallel debugging we propose...

  18. Numerical Model of High Strength Concrete

    Wang, R. Z.; Wang, C. Y.; Lin, Y. L.

    2018-03-01

    The purpose of this paper is to present a three-dimensional constitutive model based on the concept of equivalent uniaxial strain. Closed Menetrey-Willam (CMW) failure surfaces, which combine the Menetrey-Willam meridian with a cap model, are introduced. The Saenz stress-strain model is applied and adjusted by the ultimate strength parameters from the CMW failure surface to reflect the current stress or strain condition. High strength concrete (HSC) under tri-axial non-proportional loading is considered, and the model gives good predictions.

  19. Cost and Performance Model for Photovoltaic Systems

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    The Lifetime Cost and Performance (LCP) model assists in the assessment of design options for photovoltaic systems. LCP is a simulation of the performance, cost, and revenue streams associated with photovoltaic power systems connected to an electric-utility grid. LCP provides the user with substantial flexibility in specifying the technical and economic environment of the application.

  20. Identifying the connective strength between model parameters and performance criteria

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity of the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on Latin hypercube sampling. Ten performance criteria, including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
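
    For reference, the two headline criteria can be computed directly from simulated and observed discharge series. A minimal sketch using the standard definitions (not the authors' code):

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than mean(obs)."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

        def kge(sim, obs):
            """Kling-Gupta efficiency plus its components r, alpha, beta."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            r = np.corrcoef(sim, obs)[0, 1]     # linear correlation
            alpha = sim.std() / obs.std()       # variability ratio
            beta = sim.mean() / obs.mean()      # bias ratio
            return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2), (r, alpha, beta)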

  1. Optimizing High Performance Self Compacting Concrete

    Raymond A Yonathan

    2017-01-01

    Full Text Available This paper's objectives are to study the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the composition of each factor in making high-performance SCC. The Taguchi method is proposed as the best solution to minimize the number of specimens, which would otherwise exceed 80 variations. Taguchi data analysis is applied to provide the composition, the optimization, and the contribution of each material for the nine specimen variables. The concrete's workability was analyzed using the slump flow test, V-funnel test, and L-box test. Compressive and porosity tests were performed for the hardened state. Cylindrical specimens with dimensions of 100×200 mm were cast for the compressive test at ages of 3, 7, 14, 21, and 28 days. The porosity test was conducted at 28 days. It is revealed that silica fume contributes greatly to slump flow and porosity. Coarse aggregate shows the greatest contribution to the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.
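
    In a Taguchi analysis, each factor level is ranked by a signal-to-noise ratio computed over replicate measurements. A sketch of the standard larger-is-better form (appropriate for compressive strength) and smaller-is-better form (for porosity), with placeholder data; this illustrates the method, not the paper's exact computation:

        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi S/N ratio when larger responses are better (e.g., strength)."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        def sn_smaller_is_better(y):
            """Taguchi S/N ratio when smaller responses are better (e.g., porosity)."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y**2))

        # Placeholder 28-day strengths (MPa) for three replicates of one trial mix:
        print(sn_larger_is_better([62.0, 65.5, 63.8]))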

  2. Predicting High-Power Performance in Professional Cyclists.

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
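
    For reference, the APR power-duration relationship described above can be written as P(t) = MAP + (SPP - MAP) * exp(-k*t), where MAP is maximal aerobic power, SPP is sprint peak power, and k is the fitted exponential decay constant. A sketch with assumed, illustrative values (the study fits these per rider):

        import math

        def apr_power(t_s, map_w, spp_w, k=0.026):
            """Anaerobic power reserve model: predicted maximal mean power (W)
            for an all-out effort lasting t_s seconds. The decay constant k
            here is an assumed placeholder, not a value from the study."""
            return map_w + (spp_w - map_w) * math.exp(-k * t_s)

        # Illustrative professional-cyclist numbers (assumed):
        for t in (5, 30, 120, 300):
            print(t, "s:", round(apr_power(t, map_w=430, spp_w=1450)), "W")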

  3. Crystal and molecular simulation of high-performance polymers.

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals, even at very large ring sizes, and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  4. Driver Performance Model: 1. Conceptual Framework

    Heimerl, Joseph

    2001-01-01

    At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  5. Performance engineering in the community atmosphere model

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-01-01

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years

  6. Performance of hedging strategies in interval models

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  7. Performability assessment by model checking of Markov reward models

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  8. Analysing the temporal dynamics of model performance for hydrological models

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  9. Development of an Integrated Process, Modeling and Simulation Platform for Performance-Based Design of Low-Energy and High IEQ Buildings

    Chen, Yixing

    2013-01-01

    The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. The VDS is intended to assist collaborating architects,…

  10. High Performance Circularly Polarized Microstrip Antenna

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  11. Analytical performance modeling for computer systems

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.
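
    In that spirit, the simplest example of "writing equations to describe performance behavior" is the M/M/1 queue, where utilization alone fixes the mean number in system and the mean response time. A minimal sketch (a textbook-style illustration, not taken from the book):

        def mm1_metrics(lam, mu):
            """Mean metrics of an M/M/1 queue: arrival rate lam, service rate mu."""
            assert lam < mu, "queue is unstable unless lam < mu"
            rho = lam / mu           # utilization
            n = rho / (1 - rho)      # mean number in system
            r = 1.0 / (mu - lam)     # mean response time (consistent with Little's law)
            return rho, n, r

        # Example: 80 requests/s against a server completing 100 requests/s.
        print(mm1_metrics(80.0, 100.0))   # rho = 0.8, N = 4 jobs, R = 0.05 s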

  12. NCI's Transdisciplinary High Performance Scientific Data Platform

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), and agile enough to incorporate new technological advances and

  13. High Power Flex-Propellant Arcjet Performance

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  14. Silicon Photomultiplier Performance in High Electric Field

    Montoya, J.; Morad, J.

    2016-12-01

    Roughly 27% of the universe is thought to be composed of dark matter. The Large Underground Xenon (LUX) experiment relies on the emission of light from xenon atoms after a collision with a dark matter particle. After a particle interaction in the detector, the xenon emits both light and charge. The charge (electrons) in the liquid xenon needs to be pulled into the gas region so that it can interact with the gas and emit light; this allows LUX to convert a single electron into many photons. This is done by applying a high voltage across the liquid and gas regions, effectively ripping electrons out of the liquid xenon and into the gas. The current device used to detect photons is the photomultiplier tube (PMT). These devices are large and costly. In recent years, a new technology capable of detecting single photons has emerged: the silicon photomultiplier (SiPM). These devices are cheaper and smaller than PMTs, but their performance in high electric fields, such as those found in LUX, is unknown. It is possible that a large electric field could introduce noise on the SiPM signal, drowning out the single-photon detection capability. My hypothesis is that SiPMs will not show a significant increase in noise at an electric field of roughly 10 kV/cm (an electric field within the range used in detectors like LUX). I plan to test this hypothesis by first rotating the SiPMs, with no applied electric field, between two metal plates roughly 2 cm apart, providing a control data set, and then testing the dark counts at the same angles with a constant electric field applied. The photon detector is possibly the most important aspect of LUX, because it is what detects the signals. Dark matter is detected in the experiment by looking at the ratio of photons to electrons emitted for a given interaction in the detector; interactions with a low electron-to-photon ratio are more likely to be dark matter events than those with a high electron-to-photon ratio. The ability to

  15. Protective design of critical infrastructure with high performance concretes

    Riedel, W.; Nöldgen, M.; Stolz, A.; Roller, C.

    2012-01-01

    Conclusions: High performance concrete constructions will allow innovative design solutions for critical infrastructures. Validation of engineering methods can rest on large- and model-scale experiments conducted on conventional concrete structures. New consistent impact experiments show extreme protection potential for UHPC. Modern FEM with concrete models and explicit rebar can model HPC and UHPC penetration resistance. SDOF and TDOF approaches are valuable design tools at the local and global levels. Combining at least two of the three design methods (FEM, XDOF, EXP) allows reliable prediction and efficient innovative designs

  16. High-performance heat pipes for heat recovery applications

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  17. Performance modeling, loss networks, and statistical multiplexing

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large-buffer and many-sources asymptotics that play an important role in understanding statistical multiplexing.

  18. Shock circle model for ejector performance evaluation

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for the prediction of ejector performance at critical mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant-area chamber, a 2D exponential expression for the velocity distribution is adopted to approximate the viscous flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant-area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and predicts the performance of critical-mode ejectors much more accurately. The effectiveness of the method is validated against two experimental results reported earlier. The proposed modeling method, using two coefficients, is shown to produce entrainment ratio, efficiency and coefficient of performance (COP) values accurately and much closer to experimental results than those of 1D analysis methods

  19. Advances in HTGR fuel performance models

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up a fraction of about 5 × 10⁻⁴ of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  20. Performance Evaluation Model for Application Layer Firewalls.

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
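
    As a worked illustration of the Erlangian queueing analysis mentioned above, the classical Erlang-C formula gives the probability that an arriving request must wait in an M/M/c station. This is a simplified, single-layer sketch; the paper's three-layer model is more involved:

        from math import factorial

        def erlang_c(lam, mu, c):
            """Probability of queueing in an M/M/c system.
            lam: arrival rate, mu: per-server service rate, c: number of servers."""
            a = lam / mu  # offered load in Erlangs
            assert a < c, "system is unstable unless lam < c * mu"
            top = (a**c / factorial(c)) * (c / (c - a))
            bottom = sum(a**k / factorial(k) for k in range(c)) + top
            return top / bottom

        # Illustrative numbers: 90 packets/s, 4 inspection threads at 30 packets/s each.
        print(erlang_c(90.0, 30.0, 4))   # ~0.51 probability of queueing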

  2. The Role of Performance Management in the High Performance Organisation

    de Waal, André A.; van der Heijden, Beatrice I.J.M.

    2014-01-01

    The allegiance of partnering organisations and their employees to an Extended Enterprise's performance is its proverbial sword of Damocles. Literature on Extended Enterprises focuses on collaboration, inter-organisational integration and learning to avoid diminishing or missing allegiance becoming an

  3. Tailored model abstraction in performance assessments

    Kessler, J.H.

    1995-01-01

    Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach for repository performance, based on limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision

  4. Sustaining Team Performance: A Systems Model

    1979-07-31

    ..."member performance of specific behaviors" (Nivea et al., 1978, p. 59). They have identified four major performance categories and several performance measures. Within the fire direction center, several artillerymen work additively; the number of men in the fire direction center does not add steps to the sequence.

  5. Performance of high-rate gravel-packed oil wells

    Unneland, Trond

    2001-05-01

    Improved methods for the prediction, evaluation, and monitoring of performance in high-rate cased-hole gravel-packed oil wells are presented in this thesis. The ability to predict well performance prior to the gravel-pack operations, evaluate the results after the operation, and monitor well performance over time has been improved. This lifetime approach to performance analysis of gravel-packed oil wells contributes to increased oil production and field profitability. First, analytical models available for prediction of performance in gravel-packed oil wells are reviewed, with particular emphasis on high-velocity flow effects. From the analysis of field data from three North Sea oil fields, improved and calibrated cased-hole gravel-pack performance prediction models are presented. The recommended model is based on serial flow through formation sand and gravel in the perforation tunnels. In addition, new correlations for high-velocity flow in high-rate gravel-packed oil wells are introduced. Combined, this improves the performance prediction for gravel-packed oil wells, and specific areas can be targeted for optimized well design. Next, limitations in the current methods and alternative methods for evaluation and comparison of well performance are presented. The most widely used parameter, the skin factor, remains a convenient and important parameter. However, using the skin concept in direct comparisons between wells with different reservoir properties may result in misleading or even invalid conclusions. A discussion of the parameters affecting the skin value, with a clarification of limitations, is included. A methodology for evaluation and comparison of gravel-packed well performance is presented, and this includes the use of results from production logs and the use of effective perforation tunnel permeability as a parameter. This contributes to optimized operational procedures from well to well and from field to field. Finally, the data sources available for

  6. Modeling attacking of high skills volleyball players

    Vladimir Gamaliy

    2014-12-01

    Full Text Available Purpose: to determine model indicators of technical and tactical attacking actions of highly skilled volleyball players. Material and Methods: the study used statistical data from major international competitions: Olympic Games 2012, World Championship 2010, World League 2010–2014, and European Championship 2010–2014. A total of 130 games were analyzed. Methods used: analysis and generalization of the scientific and methodological literature, analysis of the competitive activity of highly skilled volleyball players, teacher observation, and modeling of technical and tactical attacking actions of highly skilled volleyball players. Results: it was found that the largest share of technical and tactical attacking actions belongs to the group tactic «supple movement», with an indicator of 21.3%. The smallest share belongs to the group tactic «flight level», with a model indicator of 5.4% and an efficiency of 3.4%. It was found that the power serve in the jump is used in 51.6% of cases according to the model parameters, the targeted float serve in 21.7%, and the shortened float serve in 4.4%. Attacks performed from the back line are used in 20.8% of cases according to the model parameters, with an efficiency of 13.7%. Conclusions: we show that the performance of technical and tactical attacking actions can be used as a model in the control system of the training and competitive process of highly skilled volleyball players

  7. Evaluating performance of high efficiency mist eliminators

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off-gas streams with high humidity and liquid aerosols. Droplet-laden air streams can be produced by tank mixing or sparging and by processes such as reforming or evaporative volume reduction. Unfortunately these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units, and the data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under the use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and supermicron particles produced from a hollow-cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing of HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow determining filter efficiency using dioctyl phthalate and then dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid

  8. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  9. Video performance for high security applications

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.

  10. GPU-based high-performance computing for radiation therapy

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computation powers to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  11. High Accuracy Transistor Compact Model Calibrations

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in those margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  12. Performance modeling, stochastic networks, and statistical multiplexing

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  13. A statistical model for predicting muscle performance

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
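
    A minimal sketch of the AR-pole feature described above: fit a 5th-order autoregressive model to an SEMG window by least squares and take the mean magnitude of the model's poles. Windowing and the regression against Rmax are omitted; this is an illustration, not the study's code:

        import numpy as np

        def ar_pole_mean_magnitude(x, order=5):
            """Fit an AR(order) model x[n] = a1*x[n-1] + ... + ap*x[n-p] by least
            squares and return the mean magnitude of the model's poles."""
            x = np.asarray(x, dtype=float)
            X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            y = x[order:]
            a, *_ = np.linalg.lstsq(X, y, rcond=None)
            # Poles are the roots of z^p - a1*z^(p-1) - ... - ap.
            poles = np.roots(np.concatenate(([1.0], -a)))
            return np.abs(poles).mean()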

  14. An integrated high performance Fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1993-01-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960-1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock-synchronous processor/memory bus. It can work stand-alone or together with a 32-bit microprocessor. The FASIC is a programmable device, enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy-back sub-card interface, including level conversion between ECL and TTL signal levels, has been implemented using surface-mount components and the 208-pin FASIC chip

  15. High Performance Graphene Oxide Based Rubber Composites

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing aggregation of the GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% carbon black (CB), while also offering low mass density and good gas barrier ability. The present work also showed that the GO-silica/SBR composite exhibited outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  16. Initial rheological description of high performance concretes

    Alessandra Lorenzetti de Castro

    2006-12-01

    Concrete is defined as a composite material and, in rheological terms, it can be understood as a concentrated suspension of solid particles (aggregates) in a viscous liquid (cement paste). On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of concrete is close to that of a Bingham fluid, so two rheological parameters are needed to describe it: yield stress and plastic viscosity. The aim of this paper is to present the initial rheological description of high performance concretes using the modified slump test. According to the results, an increase of yield stress was observed over time, while a slight variation in plastic viscosity was noticed. The incorporation of silica fume changed the rheological properties of the fresh concrete. The behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer produced a large reduction in the mixture's yield stress, while plastic viscosity remained practically constant.
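
    The two parameters named above define the Bingham model; in standard textbook notation (not taken from the paper itself):

      % No flow below the yield stress \tau_0; linear viscous response with
      % plastic viscosity \mu_p above it (\dot{\gamma} is the shear rate).
      \tau = \tau_0 + \mu_p \,\dot{\gamma} \quad (\tau > \tau_0), \qquad
      \dot{\gamma} = 0 \quad (\tau \le \tau_0)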

  17. High thermoelectric performance of graphite nanofibers.

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2018-02-22

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications, thanks to the weak interlayer van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are the necessary ingredients for high thermoelectric performance. This study reveals that the platelet form of GNFs, in which graphite layers are perpendicular to the fiber axis, can exhibit outstanding thermoelectric properties, with a figure of merit ZT reaching 3.55 in a 0.5 nm diameter fiber and 1.1 in a 1.1 nm diameter one. Interestingly, by introducing ¹⁴C isotope doping, ZT can be enhanced to more than 5, and to more than 8 if the effect of a finite phonon mean free path is included, which demonstrates the remarkable thermoelectric potential of GNFs.
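
    For reference, the figure of merit quoted above is the standard dimensionless ZT (a textbook definition, not specific to this paper):

      % S: Seebeck coefficient, \sigma: electrical conductance, T: temperature,
      % \kappa_e, \kappa_{ph}: electronic and phonon thermal conductances.
      ZT = \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{ph}}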

  18. Durability of high performance concrete in seawater

    Amjad Hussain Memon; Salihuddin Radin Sumadi; Rabitah Handan

    2000-01-01

    This paper reports on the effects of blended cements on the durability of high performance concrete (HPC) in seawater. The specimens were initially water-cured for seven days inside the laboratory at room temperature, followed by seawater curing in the tidal zone until testing. Three levels of cement replacement (0%, 30% and 70%) were used. The combined use of chemical and mineral admixtures has resulted in a new generation of concrete called HPC, which has been identified as one of the most important advanced materials in the effort to build a nation's infrastructure. HPC opens new opportunities for utilizing industrial by-products (mineral admixtures) in the construction industry. Permeability is considered one of the fundamental properties governing the durability of concrete in the marine environment. Results of this investigation indicated that the oxygen permeability values for the blended cement concretes at the age of one year are reduced by a factor of about 2 compared to the OPC control mix concrete. Both blended cement concretes are therefore expected to withstand seawater exposure in the tidal zone without serious deterioration. (Author)

  19. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Chun-Yu Hsiao

    2014-11-01

    This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PMSGs). A systematic and sequential design methodology is proposed, with a high performance wind generator as the design model. Aiming at high induced voltage, low harmonic distortion and high generator efficiency, optimal generator parameters such as the pole-arc to pole-pitch ratio and the stator-slot-shoe dimensions are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current and accommodate a range of practical voltage and current demands. Specifically, the windings are connected in series to increase the output voltage at low wind speed, and in parallel at high wind speed so that electricity is still generated even if one winding fails, thereby also enhancing reliability. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output powers for the double three-phase winding and the six-phase winding are 10.64 and 11.13 kW, respectively. In addition, 24 Ω load experiments show that the efficiencies of the double three-phase winding and the six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation.

  20. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM) and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and transpose (SpMM^T) operations by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedups on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of the LOBPCG kernels (inner products and linear combinations on multiple vectors) and show up to 15× speedups over high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedups over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
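
    The block-iterative setting described above can be reproduced in miniature with SciPy's generic LOBPCG solver; this sketch uses a random sparse symmetric test matrix rather than a CI Hamiltonian and none of the paper's CSB/SpMM optimizations.

      # Minimal LOBPCG example: a few extreme eigenpairs of a sparse symmetric
      # matrix; the dominant kernel per iteration is the SpMM product A @ X.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lobpcg

      rng = np.random.default_rng(0)
      n, k = 10_000, 8                              # matrix size, block width
      A = sp.random(n, n, density=1e-4, random_state=0)
      A = (A + A.T) * 0.5 + sp.eye(n)               # symmetrize, shift diagonal
      X = rng.standard_normal((n, k))               # tall-skinny starting block
      eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=200)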

  1. Low Cost High Performance Nanostructured Spectrally Selective Coating

    Jin, Sungho [Univ. of California, San Diego, CA (United States)

    2017-04-05

    A sunlight-absorbing coating is a key enabling technology for high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet three stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate black oxide nanoparticles and coat them onto the solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto the absorber metal surface. The material composition, size distribution and morphology of the nanoparticles are guided by numerical modeling. Optical and thermal properties have been both modeled and measured. High-temperature durability has been achieved by using nanocomposites and high-temperature annealing. Mechanical durability under thermal cycling has also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentrating solar power (CSP) plants.
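
    A commonly used form of the thermal-efficiency figure of merit for a selective solar absorber is sketched below; the report's exact metric is not given here, so this expression is an assumption:

      % \alpha: solar absorptance, \epsilon(T): thermal emittance at operating
      % temperature T, C: concentration ratio, I: incident solar flux,
      % \sigma: Stefan-Boltzmann constant.
      \eta_{\mathrm{abs}} = \alpha - \frac{\epsilon(T)\,\sigma T^{4}}{C\,I}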

  2. Three-Dimensional Smoothed Particle Hydrodynamics Modeling of Preferential Flow Dynamics at Fracture Intersections on a High-Performance Computing Platform

    Kordilla, J.; Bresinsky, L. T.

    2017-12-01

    The physical mechanisms that govern preferential flow dynamics in unsaturated fractured rock formations are complex and not well understood. Fracture intersections may act as integrators of unsaturated flow, leading to temporal delay, intermittent flow and partitioning dynamics. In this work, a three-dimensional Pairwise-Force Smoothed Particle Hydrodynamics (PF-SPH) model is applied to simulate gravity-driven multiphase flow at synthetic fracture intersections. SPH, as a meshless Lagrangian method, is particularly suitable for modeling deformable interfaces, such as the three-phase contact dynamics of droplets, rivulets and free-surface films. The static and dynamic contact angles can be recognized as the most important parameters of gravity-driven free-surface flow. In SPH, surface tension and adhesion emerge naturally from the implemented pairwise fluid-fluid (s_ff) and solid-fluid (s_sf) interaction forces. The model was calibrated to a contact angle of 65°, which corresponds to the wetting properties of water on poly(methyl methacrylate). The accuracy of the SPH simulations was validated against an analytical solution for Poiseuille flow between two parallel plates and against laboratory experiments. Using the SPH model, the complex flow-mode transitions from droplet to rivulet flow observed in an experimental study were reproduced. Additionally, laboratory dimensionless-scaling experiments on water droplets were successfully replicated in SPH. Finally, SPH simulations were used to investigate the partitioning dynamics of single droplets at synthetic horizontal fractures with various apertures (Δd_f = 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0 mm) and offsets (Δd_off = -1.5, -1.0, -0.5, 0, 1.0, 2.0, 3.0 mm). Fluid masses were measured in the domains R1, R2 and R3. The ideally smooth surfaces and the SPH-inherent advantage of particle tracking allow the recognition of small-scale partitioning mechanisms and their importance for bulk flow...
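
    For context, the analytical benchmark mentioned above is the plane Poiseuille profile for steady flow between parallel no-slip plates (a standard result; the symbols below are chosen for this note, not taken from the abstract):

      % u: velocity, y: wall-normal coordinate, h: plate separation,
      % G: constant driving pressure gradient or body force per unit volume,
      % \mu: dynamic viscosity.
      u(y) = \frac{G}{2\mu}\, y\,(h - y), \qquad 0 \le y \le h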

  3. High energy model for irregular absorbing particles

    Chiappetta, Pierre.

    1979-05-01

    In the framework of a high-energy formulation of relativistic quantum scattering, a model is presented which describes the scattering functions and polarization of irregular absorbing particles whose dimensions are greater than the incident wavelength. More precisely, in the forward direction an amplitude parametrization of eikonal type is defined which generalizes the usual diffraction theory, and in the backward direction a reflective model including a shadow function is used. The model predictions are in good agreement with the scattering measurements off irregular compact and fluffy particles performed by Zerull, Giese and Weiss (1977).

  4. High-performance computing in accelerating structure design and analysis

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent efficiency and beam-stability requirements on the accelerating structures. While numerical modeling has long been standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and the level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids that utilize high-performance computing, providing an advanced tool for modeling structures at accuracies and scales previously not possible. This paper discusses the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed to meet the computational challenges posed by the NLC as well as projects such as PEP-II and RIA. Numerical results are presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, whether at the component level (single-cell optimization) or on the scale of an entire structure (beam heating and long-range wakefields).

  5. High-brightness injector modeling

    Lewellen, J.W.

    2004-01-01

    There are many aspects to the successful conception, design, fabrication, and operation of high-brightness electron beam sources. Accurate and efficient modeling of the injector is critical to all phases of the process, from evaluating initial ideas to successfully diagnosing problems during routine operation. The basic modeling tasks vary from design to design according to the basic nature of the injector (dc, rf, hybrid, etc.), the type of cathode used (thermionic, photo, field emitter, etc.), and 'macro' factors such as average beam current and duty factor, as well as the usual list of desired beam properties. The injector designer must be at least aware of, if not proficient at addressing, the multitude of issues that arise from these considerations; and, as high-brightness injectors continue to move out of the laboratory, the number of such issues will continue to expand.

  6. Wave and Wind Model Performance Metrics Tools

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to assuring battlespace-environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Constant verification and validation of model performance is thus equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. As observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can provide global and regional coverage. Using data and model runs from previous trials, such as the planned experiment Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of altimeter wind and wave data accumulated over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided a detailed picture of wind and wave model performance using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with netCDF standards applicable to all model output, makes the fusion of these data and direct model verification possible. Procedures were also developed for the accumulation of match-ups of modelled and observed parameters to form a database.
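
    The cell-averaged statistics named above (slope, correlation, scatter index) are easy to state concretely; the following Python sketch assumes `model` and `obs` are collocated match-up arrays for one grid cell, and the normalization choices are common conventions rather than details taken from the abstract.

      # Match-up statistics for one cell of the evaluation maps.
      import numpy as np

      def matchup_stats(model, obs):
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          bias = (model - obs).mean()
          corr = np.corrcoef(model, obs)[0, 1]
          rmse = np.sqrt(((model - obs) ** 2).mean())
          si = rmse / obs.mean()              # scatter index: RMSE over mean obs
          slope = (model @ obs) / (obs @ obs) # regression slope through origin
          return {"bias": bias, "corr": corr, "si": si, "slope": slope}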

  7. Generating Performance Models for Irregular Applications

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior (non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches) that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  8. Measurement-based reliability/performability models

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built from real error data collected on a multiprocessor system are described, along with model development from the raw error data to the estimation of cumulative reward. A workload/reliability model is developed based on low-level error and resource-usage data collected on an IBM 3081 system during normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system; thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
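
    The distinction drawn above is that a semi-Markov process allows non-exponential holding times in each state; the toy Python sketch below makes the point with Weibull holding times (states, transition matrix and parameters are illustrative, not the measured IBM 3081 values).

      # Semi-Markov toy model: an embedded Markov chain chooses the next state,
      # while holding times come from state-specific Weibull distributions
      # (a Markov chain would force exponential holding times instead).
      import numpy as np

      rng = np.random.default_rng(1)
      states = ["operational", "error", "recovery"]
      P = np.array([[0.0, 1.0, 0.0],        # embedded-chain transition matrix
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])
      shape = [0.7, 1.5, 2.0]               # Weibull shape parameter per state

      def simulate(n_steps, s=0):
          t, trace = 0.0, []
          for _ in range(n_steps):
              hold = rng.weibull(shape[s])  # non-exponential holding time
              trace.append((states[s], t, hold))
              t += hold
              s = rng.choice(3, p=P[s])
          return trace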

  9. High-performance HR practices, positive affect and employee outcomes

    Mostafa, Ahmed

    2017-01-01

    Purpose – The purpose of this paper is to provide insight into the affective or emotional mechanisms that underlie the relationship between high-performance HR practices (HPHRP) and employee attitudes and behaviours. Drawing on affective events theory (AET), this paper examines a mediation model in which HPHRP influence positive affect, which in turn affects job satisfaction and organizational citizenship behaviours (OCBs). Design/methodology/approach – Two-wave data were collected from a sample...

  10. High Performance Object-Oriented Scientific Programming in Fortran 90

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts, using plasma particle computations on the IBM SP as an example. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  11. Performance Measurement Model A TarBase model with ...

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32-negative dataset of MiRTif to our testing set (MiRecords) ...

  12. A Procurement Performance Model for Construction Frameworks

    Terence Y M Lam

    2015-07-01

    Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer-term relationships between clients and suppliers in order to improve project outcomes. Research into highways maintenance set within a major county council has confirmed that such collaborative procurement methods can improve the time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers across the whole project delivery process, including the pre- and post-contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central-tendency statistics from the questionnaires, as well as content analysis of the interview transcripts, were conducted. It was confirmed that long-term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively manage contractor performance collaboratively through procurement measures, including the use of longer contract terms and KPIs, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering; it is suggested that building-based projects or other projects sharing these characteristics are grouped together and used for further research of the phenomena discovered.

  13. Stutter-Step Models of Performance in School

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  14. Performance Evaluation and Modelling of Container Terminals

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals in South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is used to identify efficient decision-making units (DMUs) and to rank DMUs in a peer-appraisal mode. PCA, a multivariate statistical method, is used to evaluate the performance of the container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of the container terminals are carried out through response surface methodology (RSM).
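
    The PCA leg of the hybrid method can be illustrated with a short Python sketch; the indicator matrix, component count and variance-weighted scoring below are illustrative assumptions, not the paper's data or exact procedure.

      # Composite PCA scoring of container terminals.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      X = np.random.default_rng(2).normal(size=(28, 6))  # 28 terminals x 6 indicators
      Z = StandardScaler().fit_transform(X)              # standardize indicators
      pca = PCA(n_components=3).fit(Z)
      # Weight each component score by its explained-variance share.
      scores = pca.transform(Z) @ pca.explained_variance_ratio_
      ranking = np.argsort(-scores)                      # best terminal first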

  15. Simulations of KSTAR high performance steady state operation scenarios

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities for steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to KSTAR, is investigated using experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: β_N above 3, H_98(y,2) up to 2.0, f_BS up to 0.76 and f_NI equal to 1.0. In this work, a realistic density profile is newly introduced for the predictive simulations by employing a scaling law for the density peaking factor. The influence of the current ramp-up scenario and of the transport model on the fusion performance and the non-inductive current drive fraction is discussed. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase strongly affect the q-profile, the fusion performance and the non-inductive current drive fraction in the current flattop phase. A criterion in terms of q_min is found for establishing ITER-relevant steady state operation scenarios; this will provide a guideline for designing the current ramp-up phase in KSTAR. The transport model also affects the predicted fusion performance and non-inductive current drive fraction: the Weiland transport model predicts the highest fusion performance and non-inductive current drive fraction in KSTAR, whereas the GLF23 model predicts the lowest, and ITER-relevant advanced scenarios cannot be obtained with the GLF23 model under the conditions considered in this work.

  16. High performance APCS conceptual design and evaluation scoping study

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis verifies that it meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except, possibly, for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed waste streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities, or when determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, and performing facility design and permitting activities.

  17. High burnup models in computer code FAIR

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code, FAIR, has been developed for analyzing the behavior of fuel rods of water-cooled reactors under severe power transients and at high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented: a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled using the RADAR model. Necessary routines are provided in FAIR for modelling pellet-clad interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath. The validation of FAIR is based on the analysis of fuel rods from the EPRI project 'Light water reactor fuel rod modelling code evaluation' and on the analytical simulation of threshold power ramp criteria for fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs.

  18. Improving high performance concrete (HPC) behaviour at high temperatures

    Cattelan Antocheves De Lima, R.

    2003-12-01

    High performance concrete (HPC) is an interesting material that has long attracted interest from the scientific and technical community, due to the clear advantages it offers in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal-strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subjected to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there is still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave pathways through which heated gas can escape the concrete matrix, allowing the outward migration of water vapor and reducing internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

  19. A practical model for sustainable operational performance

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system are presented and discussed.