WorldWideScience

Sample records for high performance vector

  1. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  2. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  3. Command vector memory systems: high performance at low cost

    OpenAIRE

    Corbal San Adrián, Jesús; Espasa Sans, Roger; Valero Cortés, Mateo

    1998-01-01

    The focus of this paper is on designing both a low cost and high performance, high bandwidth vector memory system that takes advantage of modern commodity SDRAM memory chips. To successfully extract the full bandwidth from SDRAM parts, we propose a new memory system organization based on sending commands to the memory system as opposed to sending individual addresses. A command specifies, in a few bytes, a request for multiple independent memory words. A command is similar to a burst found in...

  4. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Ö.; Krücker, D.; Melzer-Pellmann, I.-A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.

  5. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, M.Ö., E-mail: ozgur.sahin@desy.de; Krücker, D., E-mail: dirk.kruecker@desy.de; Melzer-Pellmann, I.-A., E-mail: isabell.melzer@desy.de

    2016-12-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.

  6. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-15

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.

  7. Performance and optimization of support vector machines in high-energy physics classification problems

    International Nuclear Information System (INIS)

    Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.

    2016-01-01

    In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.

  8. Performance and optimization of support vector machines in high-energy physics classification problems

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Mehmet Oezguer; Kruecker, Dirk; Melzer-Pellmann, Isabell [DESY, Hamburg (Germany)]

    2016-07-01

    In this talk, the use of Support Vector Machines (SVM) is promoted for new-physics searches in high-energy physics. We developed an interface, called SVM HEP Interface (SVM-HINT), for a popular SVM library, LibSVM, and introduced a statistical-significance based hyper-parameter optimization algorithm for new-physics searches. As an example case study, a search for Supersymmetry at the Large Hadron Collider is given to demonstrate the capabilities of SVM using SVM-HINT.
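
    These records describe an automated, discovery-significance based optimization of the SVM hyper-parameters, implemented in the authors' C++ LIBSVM interface SVM-HINT. The sketch below is not SVM-HINT: it is a minimal Python illustration of the same idea, assuming scikit-learn, a toy dataset, and a simple s/sqrt(b) significance proxy; the helper name significance and the parameter grid are my own choices.

    ```python
    # Minimal sketch (not the authors' SVM-HINT): grid-search SVM hyper-parameters
    # using a simple discovery-significance proxy s / sqrt(b) as the objective.
    # Dataset, weights and the exact significance definition are placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Toy "signal" (y=1) and "background" (y=0) samples with a few features.
    X = rng.normal(size=(2000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=2000) > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def significance(y_true, y_pred):
        """Approximate discovery significance s / sqrt(b) after the SVM selection."""
        s = np.sum((y_pred == 1) & (y_true == 1))
        b = np.sum((y_pred == 1) & (y_true == 0))
        return s / np.sqrt(b) if b > 0 else 0.0

    best = None
    for C in (0.1, 1.0, 10.0):
        for gamma in (0.01, 0.1, 1.0):
            clf = SVC(C=C, gamma=gamma).fit(X_train, y_train)
            z = significance(y_test, clf.predict(X_test))
            if best is None or z > best[0]:
                best = (z, C, gamma)

    print("best significance %.2f at C=%g, gamma=%g" % best)
    ```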

  9. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    Science.gov (United States)

    Bechstein, S.; Petsche, F.; Scheiner, M.; Drung, D.; Thiel, F.; Schnabel, A.; Schurig, Th

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  10. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    International Nuclear Information System (INIS)

    Bechstein, S; Petsche, F; Scheiner, M; Drung, D; Thiel, F; Schnabel, A; Schurig, Th

    2006-01-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  11. Digitally controlled high-performance dc SQUID readout electronics for a 304-channel vector magnetometer

    Energy Technology Data Exchange (ETDEWEB)

    Bechstein, S; Petsche, F; Scheiner, M; Drung, D; Thiel, F; Schnabel, A; Schurig, Th [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany)]

    2006-06-01

    Recently, we have developed a family of dc superconducting quantum interference device (SQUID) readout electronics for several applications. These electronics comprise a low-noise preamplifier followed by an integrator, and an analog SQUID bias circuit. A highly-compact low-power version with a flux-locked loop bandwidth of 0.3 MHz and a white noise level of 1 nV/√Hz was specially designed for a 304-channel low-Tc dc SQUID vector magnetometer, intended to operate in the new Berlin Magnetically Shielded Room (BMSR-2). In order to minimize the space needed to mount the electronics on top of the dewar and to minimize the power consumption, we have integrated four electronics channels on one 3 cm × 10 cm sized board. Furthermore we embedded the analog components of these four channels into a digitally controlled system including an in-system programmable microcontroller. Four of these integrated boards were combined to one module with a size of 4 cm × 4 cm × 16 cm. 19 of these modules were implemented, resulting in a total power consumption of about 61 W. To initialize the 304 channels and to service the system we have developed software tools running on a laptop computer. By means of these software tools the microcontrollers are fed with all required data such as the working points, the characteristic parameters of the sensors (noise, voltage swing), or the sensor position inside of the vector magnetometer system. In this paper, the developed electronics including the software tools are described, and first results are presented.

  12. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). Adoption of computing techniques for parallelization and a hybrid simulation model to the δf Monte-Carlo method transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamism of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques in order to achieve both high vectorization and parallelization efficiency, adoption of a parallel random number generator, and also benchmark results, are shown. (author)

  13. Vector Boson Scattering at High Mass

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate $WW$ scalar and vector resonances, $WZ$ vector resonances and a $ZZ$ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons. The performances of different jet algorithms are compared. We find that resonances in vector boson scattering can be discovered with a few tens of inverse femtobarns of integrated luminosity.

  14. Radar target classification method with high accuracy and decision speed performance using MUSIC spectrum vectors and PCA projection

    Science.gov (United States)

    Secmen, Mustafa

    2011-10-01

    This paper introduces the performance of an electromagnetic target recognition method in the resonance scattering region, which combines the pseudo-spectrum Multiple Signal Classification (MUSIC) algorithm with principal component analysis (PCA). The aim of the method is to classify an "unknown" target as one of the "known" targets in an aspect-independent manner. The suggested method initially collects the late-time portion of noise-free time-scattered signals obtained from different reference aspect angles of the known targets. Afterward, these signals are used to obtain MUSIC spectra in the real frequency domain, which have super-resolution ability and are resistant to noise. In the final step, PCA is applied to these spectra in order to reduce dimensionality and obtain only one feature vector per known target. In the decision stage, the noise-free or noisy scattered signal of an unknown (test) target from an unknown aspect angle is obtained. Subsequently, the MUSIC algorithm is applied to this test signal and the resulting test vector is compared with the feature vectors of the known targets one by one. Finally, the highest correlation gives the type of the test target. The method is applied to wire models of airplane targets, and it is shown that it can tolerate considerable noise levels even though only a few reference aspect angles are used. Besides, the runtime of the method for a test target is sufficiently low, which makes the method suitable for real-time applications.
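
    A compact numpy sketch of the pipeline the abstract describes (MUSIC pseudo-spectra of late-time signals, PCA-style reduction to one feature vector per known target, correlation-based decision). This is illustrative only: the snapshot construction, dimensions, synthetic signals and helper names are my assumptions, not the paper's implementation.

    ```python
    # Illustrative sketch of the described pipeline (not the paper's code).
    import numpy as np

    def music_spectrum(x, n_sources=4, m=20, n_freq=256):
        """Pseudo-spectrum MUSIC of a 1-D late-time signal via snapshot covariance."""
        snaps = np.array([x[i:i + m] for i in range(len(x) - m)])   # Hankel-style snapshots
        R = snaps.T @ snaps / len(snaps)                             # sample covariance
        _, v = np.linalg.eigh(R)                                     # ascending eigenvalues
        En = v[:, :m - n_sources]                                    # noise subspace
        freqs = np.linspace(0.0, 0.5, n_freq)
        a = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))      # steering vectors
        return 1.0 / np.sum(np.abs(En.T @ a) ** 2, axis=0)           # pseudo-spectrum

    def feature_vector(aspect_signals):
        """One feature vector per target: dominant SVD direction of its MUSIC spectra."""
        S = np.array([music_spectrum(x) for x in aspect_signals])
        return np.linalg.svd(S, full_matrices=False)[2][0]

    def classify(test_signal, features):
        s = music_spectrum(test_signal)
        # Absolute correlation (the SVD direction has an arbitrary sign).
        scores = {k: abs(np.corrcoef(s, f)[0, 1]) for k, f in features.items()}
        return max(scores, key=scores.get)

    # Tiny demo with synthetic damped-sinusoid "late-time" signals from two targets.
    t = np.arange(200)
    make = lambda f: np.cos(f * np.pi * t) * np.exp(-0.01 * t) + 0.05 * np.random.randn(200)
    targets = {"A": [make(0.20) for _ in range(4)], "B": [make(0.32) for _ in range(4)]}
    features = {name: feature_vector(sigs) for name, sigs in targets.items()}
    print(classify(make(0.20), features))   # expected to report "A"
    ```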

  15. A Simple and High Performing Rate Control Initialization Method for H.264 AVC Coding Based on Motion Vector Map and Spatial Complexity at Low Bitrate

    Directory of Open Access Journals (Sweden)

    Yalin Wu

    2014-01-01

    The temporal complexity of video sequences can be characterized by the motion vector map, which consists of the motion vectors of each macroblock (MB). In order to obtain the optimal initial QP (quantization parameter) for video sequences with different spatial and temporal complexities, this paper proposes a simple, high-performance method that determines the initial QP for a given target bit rate based on the motion vector map and temporal complexity. The proposed algorithm produces reconstructed video sequences with outstanding and stable quality. For any video sequence, the initial QP can be easily determined from matrices by target bit rate and mapped spatial complexity using the proposed mapping method. Experimental results show that the proposed algorithm delivers better objective and subjective performance than other conventional methods.

  16. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    Science.gov (United States)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. 50 youth archers with an average age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models with linear, quadratic and cubic kernel functions were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected bio-physiological variables examined.
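
    A short scikit-learn sketch of the described workflow on synthetic data: k-means forms the HPA/LPA labels from the bio-physiological variables, and linear, quadratic and cubic SVM kernels are then compared by cross-validation. The feature distributions and all numbers are invented.

    ```python
    # Sketch of the described workflow (synthetic data, not the study's dataset).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    # Columns: resting HR, respiratory rate, diastolic BP, systolic BP, calorie intake.
    X = rng.normal(loc=[70, 16, 75, 118, 2200], scale=[8, 2, 6, 9, 300], size=(50, 5))

    labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)  # HPA / LPA

    for name, kernel, degree in [("linear", "linear", 3),
                                 ("quadratic", "poly", 2),
                                 ("cubic", "poly", 3)]:
        svm = SVC(kernel=kernel, degree=degree)
        acc = cross_val_score(svm, X, labels, cv=5).mean()
        print(f"{name} SVM accuracy: {acc:.2f}")
    ```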

  17. High Accuracy Vector Helium Magnetometer

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed HAVHM instrument is a laser-pumped helium magnetometer with both triaxial vector and omnidirectional scalar measurement capabilities in a single...

  18. Axisymmetric thrust-vectoring nozzle performance prediction

    International Nuclear Information System (INIS)

    Wilson, E. A.; Adler, D.; Bar-Yoseph, P.Z

    1998-01-01

    Throat-hinged geometrically variable converging-diverging thrust-vectoring nozzles directly affect the jet flow geometry and rotation angle at the nozzle exit as a function of the nozzle geometry, the nozzle pressure ratio and flight velocity. The consideration of nozzle divergence in the effective-geometric nozzle relation is theoretically considered here for the first time. In this study, an explicit calculation procedure is presented as a function of nozzle geometry at constant nozzle pressure ratio, zero velocity and altitude, and compared with experimental results in a civil thrust-vectoring scenario. This procedure may be used in dynamic thrust-vectoring nozzle design performance predictions or analysis for civil and military nozzles as well as in the definition of initial jet flow conditions in future numerical VSTOL/TV jet performance studies

  19. Performance of a vector velocity estimator

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    1998-01-01

    ... tracking can be found in the literature, but no method with a satisfactory performance has been found that can be used in a commercial implementation. A method for estimation of the velocity vector is presented. Here an oscillation transverse to the ultrasound beam is generated, so that a transverse motion ... in an autocorrelation approach that yields both the axial and the lateral velocity, and thus the velocity vector. The method has the advantage that a standard array transducer and a modified digital beamformer, like those used in modern ultrasound scanners, are sufficient to obtain the information needed. The signal ...

  20. Vector Boson Scattering at High Mass

    CERN Document Server

    Sherwood, P

    2009-01-01

    In the absence of a light Higgs boson, the mechanism of electroweak symmetry breaking will be best studied in processes of vector boson scattering at high mass. Various models predict resonances in this channel. Here, we investigate WW scalar and vector resonances, WZ vector resonances and a ZZ scalar resonance over a range of diboson centre-of-mass energies. Particular attention is paid to the application of forward jet tagging and to the reconstruction of dijet pairs with low opening angle resulting from the decay of highly boosted vector bosons.

  1. Design and performance of an ultra-high vacuum spin-polarized scanning tunneling microscope operating at 30 mK and in a vector magnetic field.

    Science.gov (United States)

    von Allwörden, Henning; Eich, Andreas; Knol, Elze J; Hermenau, Jan; Sonntag, Andreas; Gerritsen, Jan W; Wegner, Daniel; Khajetoorians, Alexander A

    2018-03-01

    We describe the design and performance of a scanning tunneling microscope (STM) that operates at a base temperature of 30 mK in a vector magnetic field. The cryogenics is based on an ultra-high vacuum (UHV) top-loading wet dilution refrigerator that contains a vector magnet allowing for fields up to 9 T perpendicular and 4 T parallel to the sample. The STM is placed in a multi-chamber UHV system, which allows in situ preparation and exchange of samples and tips. The entire system rests on a 150-ton concrete block suspended by pneumatic isolators, which is housed in an acoustically isolated and electromagnetically shielded laboratory optimized for extremely low noise scanning probe measurements. We demonstrate the overall performance by illustrating atomic resolution and quasiparticle interference imaging and detail the vibrational noise of both the laboratory and microscope. We also determine the electron temperature via measurement of the superconducting gap of Re(0001) and illustrate magnetic field-dependent measurements of the spin excitations of individual Fe atoms on Pt(111). Finally, we demonstrate spin resolution by imaging the magnetic structure of the Fe double layer on W(110).

  2. Design and performance of an ultra-high vacuum spin-polarized scanning tunneling microscope operating at 30 mK and in a vector magnetic field

    Science.gov (United States)

    von Allwörden, Henning; Eich, Andreas; Knol, Elze J.; Hermenau, Jan; Sonntag, Andreas; Gerritsen, Jan W.; Wegner, Daniel; Khajetoorians, Alexander A.

    2018-03-01

    We describe the design and performance of a scanning tunneling microscope (STM) that operates at a base temperature of 30 mK in a vector magnetic field. The cryogenics is based on an ultra-high vacuum (UHV) top-loading wet dilution refrigerator that contains a vector magnet allowing for fields up to 9 T perpendicular and 4 T parallel to the sample. The STM is placed in a multi-chamber UHV system, which allows in situ preparation and exchange of samples and tips. The entire system rests on a 150-ton concrete block suspended by pneumatic isolators, which is housed in an acoustically isolated and electromagnetically shielded laboratory optimized for extremely low noise scanning probe measurements. We demonstrate the overall performance by illustrating atomic resolution and quasiparticle interference imaging and detail the vibrational noise of both the laboratory and microscope. We also determine the electron temperature via measurement of the superconducting gap of Re(0001) and illustrate magnetic field-dependent measurements of the spin excitations of individual Fe atoms on Pt(111). Finally, we demonstrate spin resolution by imaging the magnetic structure of the Fe double layer on W(110).

  3. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.

  4. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. High-speed vector-processing system of the MELCOM-COSMO 900II

    Energy Technology Data Exchange (ETDEWEB)

    Masuda, K; Mori, H; Fujikake, J; Sasaki, Y

    1983-01-01

    Progress in scientific and technical calculations has led to a growing demand for high-speed vector calculations. Mitsubishi Electric has developed an integrated array processor and automatic-vectorizing FORTRAN compiler as an option for the MELCOM-COSMO 900II computer system. This facilitates the performance of vector calculations and matrix calculations, achieving significant gains in cost-effectiveness. The article outlines the high-speed vector system, includes discussion of compiler structuring, and cites examples of effective system application. 1 reference.

  6. Oracle Inequalities for High Dimensional Vector Autoregressions

    DEFF Research Database (Denmark)

    Callot, Laurent; Kock, Anders Bredahl

    This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order...
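
    For context, the estimator analyzed in this record is the LASSO applied to a stationary vector autoregression. A minimal sketch, assuming scikit-learn and a toy sparse VAR(1), of how such an estimate is formed equation by equation (not the paper's code):

    ```python
    # Minimal sketch of LASSO estimation of a stationary VAR(1):
    # stack lagged observations into a design matrix and fit each equation with Lasso.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    k, T = 20, 200                          # number of series, sample length
    A = np.zeros((k, k)); A[np.arange(k), np.arange(k)] = 0.5   # sparse true coefficients
    Y = np.zeros((T, k))
    for t in range(1, T):
        Y[t] = Y[t - 1] @ A.T + rng.normal(scale=0.5, size=k)

    X, Z = Y[:-1], Y[1:]                    # regressors and targets
    A_hat = np.vstack([Lasso(alpha=0.05).fit(X, Z[:, i]).coef_ for i in range(k)])
    print("estimation error (Frobenius):", np.linalg.norm(A_hat - A))
    ```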

  7. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
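
    A schematic Python sketch of the two algorithmic ideas highlighted in the abstract: independent tracks handled as dynamically scheduled tasks, and an inner loop over energy groups written as a wide, vectorizable array operation. The transport arithmetic, sizes and data are placeholders, not the proxy applications' code.

    ```python
    # Schematic sketch of the two ideas in the abstract (not the proxy apps' code):
    # independent MOC tracks are processed as dynamically scheduled tasks, and the
    # inner loop over energy groups is expressed as a vectorizable array operation.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    n_tracks, n_segments, n_groups = 64, 200, 100
    rng = np.random.default_rng(3)
    sigma_t = rng.uniform(0.1, 1.0, n_groups)          # total cross sections per group
    source = rng.uniform(size=(n_segments, n_groups))  # segment sources
    lengths = rng.uniform(0.1, 1.0, n_segments)

    def sweep_track(track_id):
        """Transport sweep along one track; the group loop is a vector operation."""
        psi = np.zeros(n_groups)
        scalar_flux = np.zeros(n_groups)
        for s in range(n_segments):
            expo = np.exp(-sigma_t * lengths[s])         # vectorized over all groups
            psi = psi * expo + source[s] * (1.0 - expo) / sigma_t
            scalar_flux += psi * lengths[s]
        return scalar_flux

    with ThreadPoolExecutor() as pool:                   # tasks pulled dynamically
        total_flux = sum(pool.map(sweep_track, range(n_tracks)))
    print(total_flux[:4])
    ```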

  8. Center type performance of differentiable vector fields in R2

    International Nuclear Information System (INIS)

    Rabanal, Roland

    2007-08-01

    Let X : ℝ²∖D → ℝ² be a differentiable vector field, where D is compact. If the eigenvalues of the Jacobian matrix DX_z are (nonzero) purely imaginary for all z ∈ ℝ²∖D, then X + v has a center type performance at infinity for some v ∈ ℝ². More precisely, X + v has a periodic trajectory Γ ⊂ ℝ²∖D surrounding D such that in the unbounded component of (ℝ²∖D)∖Γ all the trajectories of X + v are nontrivial cycles. In the case of global vector fields Y : ℝ² → ℝ² with Y(0) = 0, we prove that this eigenvalue condition implies the topological equivalence of Y with the linear vector field (x, y) ↦ (−y, x). (author)

  9. Vectors

    DEFF Research Database (Denmark)

    Boeriis, Morten; van Leeuwen, Theo

    2017-01-01

    This article revisits the concept of vectors, which, in Kress and van Leeuwen's Reading Images (2006), plays a crucial role in distinguishing between 'narrative', action-oriented processes and 'conceptual', state-oriented processes. The use of this concept in image analysis has usually focused ... should be taken into account in discussing 'reactions', which Kress and van Leeuwen link only to eyeline vectors. Finally, the question can be raised as to whether actions are always realized by vectors. Drawing on a re-reading of Rudolf Arnheim's account of vectors, these issues are outlined ...

  10. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for separation of ink particles from the cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory, under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model was trained on these data samples, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.
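
    A minimal scikit-learn sketch of the modelling approach (Support Vector Regression from flotation control variables to a performance measure), on synthetic data; the chosen inputs (reagent dosage, pH, residence time) follow the abstract, but the response function, units and hyper-parameters are invented.

    ```python
    # Sketch of the modelling approach (synthetic data, illustrative feature names).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    # Columns: reagent dosage, pH, flotation residence time (assumed inputs).
    X = rng.uniform([0.1, 7.0, 2.0], [1.0, 10.0, 12.0], size=(120, 3))
    y = 60 + 20 * X[:, 0] - 2 * (X[:, 1] - 8.5) ** 2 + 1.5 * X[:, 2] \
        + rng.normal(scale=2.0, size=120)               # synthetic performance measure

    model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)).fit(X, y)
    print("predicted performance:", model.predict([[0.5, 8.5, 8.0]])[0])
    ```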

  11. High energy beta rays and vectors of Bilharzia and Fasciola

    International Nuclear Information System (INIS)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnaea natalensis, the snail vector of Schistosoma haematobium, have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation within 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas. (author)

  12. High energy beta rays and vectors of Bilharzia and Fasciola

    Energy Technology Data Exchange (ETDEWEB)

    Fletcher, J.J.; Akpa, T.C.; Dim, L.A.; Ogunsusi, R.

    1988-01-01

    Preliminary investigations of the effects of high energy beta rays on Lymnaea natalensis, the snail vector of Schistosoma haematobium, have been conducted. Results show that in both stream and tap water, about 70% of the snails die when irradiated for up to 18 hours using a 15 mCi Sr-90 beta source. The rest of the snails die without further irradiation within 24 hours. It may then be possible to control the vectors of Bilharzia and Fasciola by using both the direct and indirect effects of high energy betas.

  13. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    ... current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) ...

  14. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data onto the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  15. Improved stability and performance from sigma-delta modulators using 1-bit vector quantization

    DEFF Research Database (Denmark)

    Risbo, Lars

    1993-01-01

    A novel class of sigma-delta modulators is presented. The usual scalar 1-b quantizer in a sigma-delta modulator is replaced by a 1-b vector quantizer with an N-dimensional input state-vector from the linear feedback filter. Generally, the vector quantizer changes the nonlinear dynamics of the modulator, and a proper choice of vector quantizer can improve both system stability and coding performance. It is shown how to construct the vector quantizer in order to limit the excursions in state-space. The proposed method is demonstrated graphically for a simple second-order modulator ...
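
    An illustrative second-order sigma-delta loop in Python, contrasting the usual scalar 1-bit quantizer with a simple "1-bit vector quantizer" that picks the output bit minimizing the norm of the predicted 2-D integrator state, which is one plausible way to limit state-space excursions; this is not the paper's construction.

    ```python
    # Illustrative second-order sigma-delta loop (not the paper's exact construction):
    # the baseline uses a scalar sign quantizer; the "vector" variant picks the
    # output bit that minimizes the norm of the predicted 2-D integrator state.
    import numpy as np

    def modulate(x, vector_quantizer=False):
        s1 = s2 = 0.0
        bits = np.empty_like(x)
        for n, u in enumerate(x):
            if vector_quantizer:
                # Choose b in {-1, +1} giving the smallest next state vector (s1, s2).
                candidates = []
                for b in (-1.0, 1.0):
                    n1 = s1 + u - b
                    n2 = s2 + n1 - b
                    candidates.append((np.hypot(n1, n2), b, n1, n2))
                _, b, s1, s2 = min(candidates)
            else:
                b = 1.0 if s2 >= 0 else -1.0             # scalar 1-bit quantizer
                s1 = s1 + u - b
                s2 = s2 + s1 - b
            bits[n] = b
        return bits

    t = np.arange(4096)
    x = 0.6 * np.sin(2 * np.pi * t / 256)
    print(np.mean(np.abs(x - modulate(x, vector_quantizer=True))))
    ```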

  16. STUDENT ACADEMIC PERFORMANCE PREDICTION USING SUPPORT VECTOR MACHINE

    OpenAIRE

    Oloruntoba, S.A.; Akinode, J.L.

    2017-01-01

    This paper investigates the relationship between students' preadmission academic profile and final academic performance. A data sample of students at one of the Federal Polytechnics in the south-west part of Nigeria was used. The preadmission academic profile used for this study is the 'O' level grades (terminal high school results). The academic performance is defined using the student's Grade Point Average (GPA). This research focused on using data mining techniques to develop a model for predicting stude...

  17. In-Vivo High Dynamic Range Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    ... example with a high dynamic velocity range. Velocities an order of magnitude apart are detected in the femoral artery of a 41-year-old healthy individual. Three distinct heart cycles are captured during a 3 s acquisition. The estimated vector velocities are compared against each other within the heart cycle. The relative standard deviation of the measured velocity magnitude between the three peak systoles was found to be 5.11%, with a standard deviation on the detected angle of 1.06°. In the diastole, it was 1.46% and 6.18°, respectively. Results prove that the method is able to estimate flow...

  18. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  19. A Performance Management Initiative for Local Health Department Vector Control Programs.

    Science.gov (United States)

    Gerding, Justin; Kirshy, Micaela; Moran, John W; Bialek, Ron; Lamers, Vanessa; Sarisky, John

    2016-01-01

    Local health department (LHD) vector control programs have experienced reductions in funding and capacity. Acknowledging this situation and its potential effect on the ability to respond to vector-borne diseases, the U.S. Centers for Disease Control and Prevention and the Public Health Foundation partnered on a performance management initiative for LHD vector control programs. The initiative involved 14 programs that conducted a performance assessment using the Environmental Public Health Performance Standards. The programs, assisted by quality improvement (QI) experts, used the assessment results to prioritize improvement areas that were addressed with QI projects intended to increase effectiveness and efficiency in the delivery of services such as responding to mosquito complaints and educating the public about vector-borne disease prevention. This article describes the initiative as a process LHD vector control programs may adapt to meet their performance management needs. This study also reviews aggregate performance assessment results and QI projects, which may reveal common aspects of LHD vector control program performance and priority improvement areas. LHD vector control programs interested in performance assessment and improvement may benefit from engaging in an approach similar to this performance management initiative.

  20. High frequency vibration analysis by the complex envelope vectorization.

    Science.gov (United States)

    Giannini, O; Carcaterra, A; Sestieri, A

    2007-06-01

    The complex envelope displacement analysis (CEDA) is a procedure to solve high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties so that a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, underlying merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity are presented.

  1. Data access performance through parallelization and vectored access. Some results

    International Nuclear Information System (INIS)

    Furano, F; Hanushevsky, A

    2008-01-01

    High Energy Physics data processing and analysis applications typically deal with the problem of accessing and processing data at high speed. Recent studies, development and test work have shown that the latencies due to data access can often be hidden by parallelizing them with the data processing, thus making it possible for applications to process remote data with a high level of efficiency. Techniques and algorithms able to reach this result have been implemented in the client side of the Scalla/xrootd system, and in this contribution we describe the results of some tests done in order to compare their performance and characteristics. These techniques, if used together with multiple-stream data access, can also be effective in allowing applications to efficiently and transparently deal with data repositories accessible via a Wide Area Network.

  2. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from fast ferries to the latest high speed Navy craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMV craft and the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface; Covers the full range of high performance marine vessel concepts; Explains the historical development of various HPMVs; Discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for student...

  3. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t − 1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t − 1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t − 1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The

  4. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing given at the High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  5. High-energy vector boson scattering after the Higgs discovery

    International Nuclear Information System (INIS)

    Kilian, Wolfgang; Sekulla, Marco; Ohl, Thorsten; Reuter, Juergen

    2014-08-01

    Weak vector-boson W,Z scattering at high energy probes the Higgs sector and is most sensitive to any new physics associated with electroweak symmetry breaking. We show that in the presence of the 125 GeV Higgs boson, a conventional effective-theory analysis fails for this class of processes. We propose to extrapolate the effective-theory ansatz by an extension of the parameter-free K-matrix unitarization prescription, which we denote as direct T-matrix unitarization. We generalize this prescription to arbitrary non-perturbative models and describe the implementation, as an asymptotically consistent reference model matched to the low-energy effective theory. We present exemplary numerical results for full six-fermion processes at the LHC.
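
    The abstract builds on the parameter-free K-matrix unitarization prescription and extends it to a "direct T-matrix" scheme. As a reference point only, the standard K-matrix construction for a real tree-level partial-wave amplitude a(s) can be written as below (my restatement of the textbook prescription, not the paper's generalized scheme):

    ```latex
    % Standard (parameter-free) K-matrix unitarization of a real tree-level
    % partial-wave amplitude a(s); the paper's "direct T-matrix" scheme
    % generalizes this prescription.
    \[
      a_K(s) \,=\, \frac{a(s)}{1 - i\,a(s)},
      \qquad
      \operatorname{Im} a_K(s) \,=\, \lvert a_K(s) \rvert^{2}
      \quad\text{(elastic unitarity for real } a(s)\text{).}
    \]
    ```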

  6. Responsive design high performance

    CERN Document Server

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  7. High Performance Macromolecular Material

    National Research Council Canada - National Science Library

    Forest, M

    2002-01-01

    ... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  8. Performance Improvement of Sensorless Vector Control for Induction Motor Drives Fed by Matrix Converter Using Nonlinear Model and Disturbance Observer

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2004-01-01

    This paper presents a new sensorless vector control system for high performance induction motor drives fed by a matrix converter with non-linearity compensation and a disturbance observer. The nonlinear voltage distortion that is caused by commutation delay and on-state voltage drop in switching...

  9. Availability of thermodynamic system with multiple performance parameters based on vector-universal generating function

    International Nuclear Information System (INIS)

    Cai Qi; Shang Yanlong; Chen Lisheng; Zhao Yuguang

    2013-01-01

    A vector-universal generating function was presented to analyze the availability of a thermodynamic system with multiple performance parameters. The vector-universal generating function of a component's performance was defined, an arithmetic model based on the vector-universal generating function was derived for the thermodynamic system, and a calculation method was given for the state probabilities of multi-state components. With stochastic simulation of the degradation trend of the multiple factors, the system availability with multiple performance parameters was obtained under composite factors. It is shown by an example that the availability results obtained by the binary availability analysis method are somewhat conservative, and that the results considering parameter failure based on the vector-universal generating function better reflect the operating characteristics of the thermodynamic system. (authors)
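
    As a rough illustration of how a universal generating function with vector-valued performance can be composed (a generic textbook-style construction, not the authors' model), the Python sketch below represents each component as a list of (performance vector, probability) states, combines components in series by taking element-wise minima, and computes availability as the probability that every performance parameter meets its demand; all component names and numbers are made up.

    ```python
    # Toy sketch of a universal generating function with vector-valued performance
    # (generic construction, not the paper's exact model).
    import numpy as np
    from itertools import product

    def combine_series(*components):
        system = {}
        for states in product(*components):
            perf = tuple(np.minimum.reduce([np.array(p) for p, _ in states]))
            prob = np.prod([pr for _, pr in states])
            system[perf] = system.get(perf, 0.0) + prob
        return system

    def availability(system, demand):
        """Probability that every performance parameter meets its demand."""
        return sum(pr for perf, pr in system.items()
                   if all(p >= d for p, d in zip(perf, demand)))

    # Two components, performance = (flow rate, thermal efficiency); invented numbers.
    pump = [((1.0, 0.90), 0.95), ((0.6, 0.85), 0.04), ((0.0, 0.0), 0.01)]
    heat_exchanger = [((1.0, 0.92), 0.97), ((1.0, 0.80), 0.03)]

    sys_ugf = combine_series(pump, heat_exchanger)
    print("availability:", availability(sys_ugf, demand=(0.8, 0.85)))
    ```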

  10. High Performance with Prescriptive Optimization and Debugging

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo

    ... parallelization and automatic vectorization is attractive as it transparently optimizes programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback enlightens the programmer as to why a given optimization was not applied, and suggests how to change the source code to make it more amenable to optimizations. We show how this can yield significant speedups and achieve 2.4 times faster execution on a real industrial use case. To aid in parallel debugging we propose ...

  11. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  12. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to raid upon the advantages and inconveniences when a particular concrete type is used. Two concrete recipes are presented, namely one for the concrete used in rigid pavement for roads and another one for self-compacting concrete.

  13. High performance polymeric foams

    International Nuclear Information System (INIS)

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy

  14. High performance conductometry

    International Nuclear Information System (INIS)

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  15. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University ... concretes, workability, ductility, and confinement problems.

  16. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    ... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  17. A static investigation of the thrust vectoring system of the F/A-18 high-alpha research vehicle

    Science.gov (United States)

    Mason, Mary L.; Capone, Francis J.; Asbury, Scott C.

    1992-01-01

    A static (wind-off) test was conducted in the static test facility of the Langley 16-foot Transonic Tunnel to evaluate the vectoring capability and isolated nozzle performance of the proposed thrust vectoring system of the F/A-18 high alpha research vehicle (HARV). The thrust vectoring system consisted of three asymmetrically spaced vanes installed externally on a single test nozzle. Two nozzle configurations were tested: A maximum afterburner-power nozzle and a military-power nozzle. Vane size and vane actuation geometry were investigated, and an extensive matrix of vane deflection angles was tested. The nozzle pressure ratios ranged from two to six. The results indicate that the three vane system can successfully generate multiaxis (pitch and yaw) thrust vectoring. However, large resultant vector angles incurred large thrust losses. Resultant vector angles were always lower than the vane deflection angles. The maximum thrust vectoring angles achieved for the military-power nozzle were larger than the angles achieved for the maximum afterburner-power nozzle.

  18. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. If necessary for a good understanding, some sidesteps will be included to explain important protocols as well as some necessary details of the concerned Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  19. Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform

    NARCIS (Netherlands)

    Xu, S.; Xue, W.; Lin, H.X.

    2011-01-01

    In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-data ratio and its performance is mainly bound by the memory bandwidth. We propose optimization of SpMV based on ELLPACK from
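
    For reference, the sketch below shows what SpMV over the ELLPACK layout (the storage format the article's GPU optimization is based on) looks like as a plain numpy CPU computation; the conversion helper and the matrix are illustrative, and none of the article's CUDA-specific tuning is reproduced.

    ```python
    # CPU reference sketch of SpMV in ELLPACK format (illustrative, not the article's kernel).
    import numpy as np
    import scipy.sparse as sp

    def to_ellpack(A_csr):
        """Pad every row to the same length; returns (values, column indices)."""
        n_rows = A_csr.shape[0]
        width = np.diff(A_csr.indptr).max()
        vals = np.zeros((n_rows, width))
        cols = np.zeros((n_rows, width), dtype=np.int64)
        for i in range(n_rows):
            start, end = A_csr.indptr[i], A_csr.indptr[i + 1]
            vals[i, :end - start] = A_csr.data[start:end]
            cols[i, :end - start] = A_csr.indices[start:end]
        return vals, cols

    def ellpack_spmv(vals, cols, x):
        # Each row is a fixed-width gather + multiply + sum: the regular access
        # pattern that maps naturally onto one GPU thread per row.
        return np.sum(vals * x[cols], axis=1)

    A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
    x = np.ones(1000)
    vals, cols = to_ellpack(A)
    print(np.allclose(ellpack_spmv(vals, cols, x), A @ x))
    ```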

  20. Highly evolvable malaria vectors : The genomes of 16 Anopheles mosquitoes

    NARCIS (Netherlands)

    Neafsey, D. E.; Waterhouse, R. M.; Abai, M. R.; Aganezov, S. S.; Alekseyev, M. A.; Allen, J. E.; Amon, J.; Arca, B.; Arensburger, P.; Artemov, G.; Assour, L. A.; Basseri, H.; Berlin, A.; Birren, B. W.; Blandin, S. A.; Brockman, A. I.; Burkot, T. R.; Burt, A.; Chan, C. S.; Chauve, C.; Chiu, J. C.; Christensen, M.; Costantini, C.; Davidson, V. L. M.; Deligianni, E.; Dottorini, T.; Dritsou, V.; Gabriel, S. B.; Guelbeogo, W. M.; Hall, A. B.; Han, M. V.; Hlaing, T.; Hughes, D. S. T.; Jenkins, A. M.; Jiang, X.; Jungreis, I.; Kakani, E. G.; Kamali, M.; Kemppainen, P.; Kennedy, R. C.; Kirmitzoglou, I. K.; Koekemoer, L. L.; Laban, N.; Langridge, N.; Lawniczak, M. K. N.; Lirakis, M.; Lobo, N. F.; Lowy, E.; Maccallum, R. M.; Mao, C.; Maslen, G.; Mbogo, C.; Mccarthy, J.; Michel, K.; Mitchell, S. N.; Moore, W.; Murphy, K. A.; Naumenko, A. N.; Nolan, T.; Novoa, E. M.; O'loughlin, S.; Oringanje, C.; Oshaghi, M. A.; Pakpour, N.; Papathanos, P. A.; Peery, A. N.; Povelones, M.; Prakash, A.; Price, D. P.; Rajaraman, A.; Reimer, L. J.; Rinker, D. C.; Rokas, A.; Russell, T. L.; Sagnon, N.; Sharakhova, M. V.; Shea, T.; Simao, F. A.; Simard, F.; Slotman, M. A.; Somboon, P.; Stegniy, V.; Struchiner, C. J.; Thomas, G. W. C.; Tojo, M.; Topalis, P.; Tubio, J. M. C.; Unger, M. F.; Vontas, J.; Walton, C.; Wilding, C. S.; Willis, J. H.; Wu, Y.-c.; Yan, G.; Zdobnov, E. M.; Zhou, X.; Catteruccia, F.; Christophides, G. K.; Collins, F. H.; Cornman, R. S.; Crisanti, A.; Donnelly, M. J.; Emrich, S. J.; Fontaine, M. C.; Gelbart, W.; Hahn, M. W.; Hansen, I. A.; Howell, P. I.; Kafatos, F. C.; Kellis, M.; Lawson, D.; Louis, C.; Luckhart, S.; Muskavitch, M. A. T.; Ribeiro, J. M.; Riehle, M. A.; Sharakhov, I. V.; Tu, Z.; Zwiebel, L. J.; Besansky, N. J.

    2015-01-01

    Variation in vectorial capacity for human malaria among Anopheles mosquito species is determined by many factors, including behavior, immunity, and life history. To investigate the genomic basis of vectorial capacity and explore new avenues for vector control, we sequenced the genomes of 16

  1. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved almost 200 Gbps memory to memory between clusters over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  2. A concurrent visualization system for large-scale unsteady simulations. Parallel vector performance on an NEC SX-4

    International Nuclear Information System (INIS)

    Takei, Toshifumi; Doi, Shun; Matsumoto, Hideki; Muramatsu, Kazuhiro

    2000-01-01

    We have developed a concurrent visualization system RVSLIB (Real-time Visual Simulation Library). This paper shows the effectiveness of the system when it is applied to large-scale unsteady simulations, for which the conventional post-processing approach may no longer work, on high-performance parallel vector supercomputers. The system performs almost all of the visualization tasks on a computation server and uses compressed visualized image data for efficient communication between the server and the user terminal. We have introduced several techniques, including vectorization and parallelization, into the system to minimize the computational costs of the visualization tools. The performance of RVSLIB was evaluated by using an actual CFD code on an NEC SX-4. The computational time increase due to the concurrent visualization was at most 3% for a smaller (1.6 million) grid and less than 1% for a larger (6.2 million) one. (author)

  3. High performance sapphire windows

    Science.gov (United States)

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environment. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  4. Insect cell transformation vectors that support high level expression and promoter assessment in insect cell culture

    Science.gov (United States)

    A somatic transformation vector, pDP9, was constructed that provides a simplified means of producing permanently transformed cultured insect cells that support high levels of protein expression of foreign genes. The pDP9 plasmid vector incorporates DNA sequences from the Junonia coenia densovirus th...

  5. Design of a mixer for the thrust-vectoring system on the high-alpha research vehicle

    Science.gov (United States)

    Pahle, Joseph W.; Bundick, W. Thomas; Yeager, Jessie C.; Beissner, Fred L., Jr.

    1996-01-01

    One of the advanced control concepts being investigated on the High-Alpha Research Vehicle (HARV) is multi-axis thrust vectoring using an experimental thrust-vectoring (TV) system consisting of three hydraulically actuated vanes per engine. A mixer is used to translate the pitch-, roll-, and yaw-TV commands into the appropriate TV-vane commands for distribution to the vane actuators. A computer-aided optimization process was developed to perform the inversion of the thrust-vectoring effectiveness data for use by the mixer in performing this command translation. Using this process a new mixer was designed for the HARV and evaluated in simulation and flight. An important element of the Mixer is the priority logic, which determines priority among the pitch-, roll-, and yaw-TV commands.
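
    The abstract does not reproduce the inversion procedure itself. A common textbook way to pose this kind of control allocation is a least-squares pseudo-inverse of a thrust-vectoring effectiveness matrix, sketched below; the effectiveness numbers are invented for illustration, and the priority logic of the actual HARV mixer is not modeled.

      import numpy as np

      # Hypothetical 3x6 effectiveness matrix B: rows are (pitch, roll, yaw) moments,
      # columns are six vane deflections (three vanes per engine, two engines).
      # The numbers are invented; the HARV mixer used measured effectiveness data
      # and priority logic that this sketch does not model.
      B = np.array([[ 0.8,  0.8,  0.8, -0.8, -0.8, -0.8],   # pitch
                    [ 0.3, -0.3,  0.0,  0.3, -0.3,  0.0],   # roll
                    [ 0.1,  0.1, -0.5,  0.1,  0.1, -0.5]])  # yaw

      def mix(tv_command):
          """Least-squares allocation of a (pitch, roll, yaw) TV command to vanes."""
          return np.linalg.pinv(B) @ tv_command

      vanes = mix(np.array([1.0, 0.2, -0.1]))
      print(vanes)        # six vane deflection commands
      print(B @ vanes)    # reproduces the commanded moments (B has full row rank)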

  6. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

    This paper summarizes two improvements of a real production code by using vectorization and multitasking techniques. After a short description of Monte Carlo algorithms employed in neutron transport problems, the authors briefly describe the work done in order to get a vector code. Vectorization principles are presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP are compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples are presented. The goal of the first one is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say a minimum size for a task. With the second example they propose a method that is very X-MP oriented in order to get the best speedup factor on such a computer. In conclusion they prove that Monte Carlo algorithms are very well suited to future vector and parallel computers.

  7. Measured performances on vectorization and multitasking with a Monte Carlo code for neutron transport problems

    International Nuclear Information System (INIS)

    Chauvet, Y.

    1985-01-01

    This paper summarizes two improvements of a real production code by using vectorization and multitasking techniques. After a short description of Monte Carlo algorithms employed in our neutron transport problems, we briefly describe the work we have done in order to get a vector code. Vectorization principles will be presented, and measured performances on the CRAY 1S, CYBER 205 and CRAY X-MP will be compared in terms of vector lengths. The second part of this work is an adaptation to multitasking on the CRAY X-MP using exclusively standard multitasking tools available with FORTRAN under the COS 1.13 system. Two examples will be presented. The goal of the first one is to measure the overhead inherent in multitasking when tasks become too small and to define a granularity threshold, that is to say a minimum size for a task. With the second example we propose a method that is very X-MP oriented in order to get the best speedup factor on such a computer. In conclusion we prove that Monte Carlo algorithms are very well suited to future vector and parallel computers. (orig.)

  8. R high performance programming

    CERN Document Server

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  9. Manipulation of dielectric Rayleigh particles using highly focused elliptically polarized vector fields.

    Science.gov (United States)

    Gu, Bing; Xu, Danfeng; Rui, Guanghao; Lian, Meng; Cui, Yiping; Zhan, Qiwen

    2015-09-20

    Generation of vectorial optical fields with arbitrary polarization distribution is of great interest in areas where exotic optical fields are desired. In this work, we experimentally demonstrate the versatile generation of linearly polarized vector fields, elliptically polarized vector fields, and circularly polarized vortex beams through introducing attenuators in a common-path interferometer. By means of Richards-Wolf vectorial diffraction method, the characteristics of the highly focused elliptically polarized vector fields are studied. The optical force and torque on a dielectric Rayleigh particle produced by these tightly focused vector fields are calculated and exploited for the stable trapping of dielectric Rayleigh particles. It is shown that the additional degree of freedom provided by the elliptically polarized vector field allows one to control the spatial structure of polarization, to engineer the focusing field, and to tailor the optical force and torque on a dielectric Rayleigh particle.

  10. A family of E. coli expression vectors for laboratory scale and high throughput soluble protein production

    Directory of Open Access Journals (Sweden)

    Bottomley Stephen P

    2006-03-01

    Full Text Available Abstract Background In the past few years, both automated and manual high-throughput protein expression and purification have become an accessible means to rapidly screen and produce soluble proteins for structural and functional studies. However, many of the commercial vectors encoding different solubility tags require different cloning and purification steps for each vector, considerably slowing down expression screening. We have developed a set of E. coli expression vectors with different solubility tags that allow for parallel cloning from a single PCR product and can be purified using the same protocol. Results The set of E. coli expression vectors encode either a hexa-histidine tag or one of the three most commonly used solubility tags (GST, MBP, NusA), all with an N-terminal hexa-histidine sequence. The result is two-fold: the His-tag facilitates purification by immobilised metal affinity chromatography, whilst the fusion domains act primarily as solubility aids during expression, in addition to providing an optional purification step. We have also incorporated a TEV recognition sequence following the solubility tag domain, which allows for highly specific cleavage (using TEV protease) of the fusion protein to yield native protein. These vectors are also designed for ligation-independent cloning and they possess a high-level expressing T7 promoter, which is suitable for auto-induction. To validate our vector system, we have cloned four different genes and also one gene into all four vectors and used small-scale expression and purification techniques. We demonstrate that the vectors are capable of high levels of expression and that efficient screening of new proteins can be readily achieved at the laboratory level. Conclusion The result is a set of four rationally designed vectors, which can be used for streamlined cloning, expression and purification of target proteins in the laboratory and have the potential for being adaptable to a high

  11. Matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2013-11-05

    Mechanisms for performing matrix multiplication operations with data pre-conditioning in a high performance computing architecture are provided. A vector load operation is performed to load a first vector operand of the matrix multiplication operation to a first target vector register. A load and splat operation is performed to load an element of a second vector operand and replicate the element to each of a plurality of elements of a second target vector register. A multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the matrix multiplication operation. The partial product of the matrix multiplication operation is accumulated with other partial products of the matrix multiplication operation.
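
    The register-level pattern described above can be emulated in plain NumPy to make the data movement concrete: one operand is loaded as a short vector, a single element of the other operand is splatted (replicated) across a register-width vector, and a multiply-add accumulates the partial product. The width-4 "registers" below are an assumption for illustration only; the record describes hardware instructions, not this Python code.

      import numpy as np

      # Emulation of the load / load-and-splat / multiply-add pattern with NumPy
      # vectors of width 4 standing in for vector registers.
      W = 4                                    # assumed vector register width
      A = np.arange(16.0).reshape(4, 4)        # one operand matrix
      B = np.arange(16.0).reshape(4, 4) + 1.0  # the other operand matrix
      C = np.zeros((4, 4))

      for i in range(4):
          acc = np.zeros(W)                    # accumulator register
          for k in range(4):
              vb = B[k, :]                     # vector load of a row slice
              va = A[i, k] * np.ones(W)        # "load and splat" a single element
              acc += va * vb                   # multiply-add into the accumulator
          C[i, :] = acc

      print(np.allclose(C, A @ B))             # True: the partial products sum to A @ B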

  12. Vector dark energy and high-z massive clusters

    Science.gov (United States)

    Carlesi, Edoardo; Knebe, Alexander; Yepes, Gustavo; Gottlöber, Stefan; Jiménez, Jose Beltrán.; Maroto, Antonio L.

    2011-12-01

    The detection of extremely massive clusters at z > 1 such as SPT-CL J0546-5345, SPT-CL J2106-5844 and XMMU J2235.3-2557 has been considered by some authors as a challenge to the standard Λ cold dark matter cosmology. In fact, assuming Gaussian initial conditions, the theoretical expectation of detecting such objects is as low as ≤1 per cent. In this paper we discuss the probability of the existence of such objects in the light of the vector dark energy paradigm, showing by means of a series of N-body simulations that chances of detection are substantially enhanced in this non-standard framework.

  13. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  14. Geminivirus vectors for high-level expression of foreign proteins in plant cells.

    Science.gov (United States)

    Mor, Tsafrir S; Moon, Yong-Sun; Palmer, Kenneth E; Mason, Hugh S

    2003-02-20

    Bean yellow dwarf virus (BeYDV) is a monopartite geminivirus that can infect dicotyledonous plants. We have developed a high-level expression system that utilizes elements of the replication machinery of this single-stranded DNA virus. The replication initiator protein (Rep) mediates release and replication of a replicon from a DNA construct ("LSL vector") that contains an expression cassette for a gene of interest flanked by cis-acting elements of the virus. We used tobacco NT1 cells and biolistic delivery of plasmid DNA for evaluation of replication and expression of reporter genes contained within an LSL vector. By codelivery of a GUS reporter-LSL vector and a Rep-supplying vector, we obtained up to 40-fold increase in expression levels compared to delivery of the reporter-LSL vectors alone. High-copy replication of the LSL vector was correlated with enhanced expression of GUS. Rep expression using a whole BeYDV clone, a cauliflower mosaic virus 35S promoter driving either genomic rep or an intron-deleted rep gene, or 35S-rep contained in the LSL vector all achieved efficient replication and enhancement of GUS expression. We anticipate that this system can be adapted for use in transgenic plants or plant cell cultures with appropriately regulated expression of Rep, with the potential to greatly increase yield of recombinant proteins. Copyright 2003 Wiley Periodicals, Inc. Biotechnol Bioeng 81: 430-437, 2003.

  15. Python high performance programming

    CERN Document Server

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating the techniques to boost the performance of Python code, and their applications with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book will cover basic and advanced topics, so it will be great for you whether you are a new or a seasoned Python developer.

  16. High performance germanium MOSFETs

    Energy Technology Data Exchange (ETDEWEB)

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  17. High performance germanium MOSFETs

    International Nuclear Information System (INIS)

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO x N y ) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices

  18. High energy photoproduction of the rho and rho' vector mesons

    International Nuclear Information System (INIS)

    Bronstein, J.M.

    1977-01-01

    In an experiment in the broad band photon beam at Fermilab, diffractive production of 2π and 4π states from Be, Al, Cu, and Pb targets was observed. The 2π data are dominated by the ρ(770) and the 4π data by the ρ′(1500). The energy dependence of ρ photoproduction from Be was measured, and no evidence was seen for energy variation of the forward cross section in the range 30 to 160 GeV. The forward cross section is consistent with its average value dσ/dt|t=0 = 3.42 ± 0.28 μb/GeV² over the entire range. For the ρ′, a mass of 1487 ± 20 MeV and a width of 675 ± 60 MeV are obtained. All quoted errors are statistical. A standard optical model analysis of the A dependence of the ρ and ρ′ photoproduction yields the following results: fρ′²/fρ² = 3.7 ± 0.7, σρ′/σρ = 1.05 ± 0.18. Results for the photon coupling constants are in good agreement with GVMD and with the e⁺e⁻ storage ring results. The approximate equality of the ρ-nucleon and ρ′-nucleon total cross sections is inconsistent with the diagonal version of GVMD and provides strong motivation for including transitions between different vector mesons in GVMD.

  19. How illustrations influence performance and eye movement behaviour when solving problems in vector calculus

    DEFF Research Database (Denmark)

    Ögren, Magnus; Nyström, Marcus

    2012-01-01

    Mathematical formulas in vector calculus often have direct visual representations, which, in the form of illustrations, are used extensively during teaching and when assessing students’ levels of understanding. However, there is very little, if any, empirical evidence of how the illustrations...... are utilized during problem solving and whether they are beneficial to comprehension. In this paper we collect eye movements and performance scores (true or false answers) from students while solving eight problems in vector calculus; 20 students solve illustrated problems whereas 16 students solve the same...... problems, but without the illustrations. Results show no overall performance benefit for illustrated problems even though they are clearly visually attended. Surprisingly, we found a significant effect of whether the answer to the problem was true or false; students were more likely to answer

  20. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds...Performance Computing IP / IPv4 Internet Protocol (version 4.0) IPMC Internet Protocol MultiCast LAN Local Area Network MCMD Dr. Multicast MPI

  1. NGINX high performance

    CERN Document Server

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  2. Highly efficient retrograde gene transfer into motor neurons by a lentiviral vector pseudotyped with fusion glycoprotein.

    Directory of Open Access Journals (Sweden)

    Miyabi Hirano

    Full Text Available The development of gene therapy techniques to introduce transgenes that promote neuronal survival and protection provides effective therapeutic approaches for neurological and neurodegenerative diseases. Intramuscular injection of adenoviral and adeno-associated viral vectors, as well as lentiviral vectors pseudotyped with rabies virus glycoprotein (RV-G), permits gene delivery into motor neurons in animal models for motor neuron diseases. Recently, we developed a vector with highly efficient retrograde gene transfer (HiRet) by pseudotyping a human immunodeficiency virus type 1 (HIV-1)-based vector with fusion glycoprotein B type (FuG-B) or a variant of FuG-B (FuG-B2), in which the cytoplasmic domain of RV-G was replaced by the corresponding part of vesicular stomatitis virus glycoprotein (VSV-G). We have also developed another vector showing neuron-specific retrograde gene transfer (NeuRet) with fusion glycoprotein C type, in which the short C-terminal segment of the extracellular domain and transmembrane/cytoplasmic domains of RV-G was substituted with the corresponding regions of VSV-G. These two vectors afford the high efficiency of retrograde gene transfer into different neuronal populations in the brain. Here we investigated the efficiency of the HiRet (with FuG-B2) and NeuRet vectors for retrograde gene transfer into motor neurons in the spinal cord and hindbrain in mice after intramuscular injection and compared it with the efficiency of the RV-G pseudotype of the HIV-1-based vector. The main highlight of our results is that the HiRet vector shows the most efficient retrograde gene transfer into both spinal cord and hindbrain motor neurons, offering its promising use as a gene therapeutic approach for the treatment of motor neuron diseases.

  3. Performance monitoring for coherent DP-QPSK systems based on stokes vectors analysis

    Science.gov (United States)

    Louchet, Hadrien; Koltchanov, Igor; Richter, André

    2010-12-01

    We show how to estimate accurately the Jones matrix of the transmission line by analyzing the Stokes vectors of DP-QPSK signals. This method can be used to perform in-situ PMD measurement in dual-polarization QPSK systems, and in addition to the constant modulus algorithm (CMA) to mitigate polarization-induced impairments. The applicability of this method to other modulation formats is discussed.
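
    The analysis relies on the standard mapping from the complex field (Jones) components to Stokes parameters. A minimal sketch of that mapping is given below; the sign convention for S3 varies between texts, and the paper's actual Jones-matrix estimation from DP-QPSK signals is not reproduced here.

      import numpy as np

      def stokes(ex, ey):
          """Stokes vector of a fully polarized field with complex components ex, ey.

          Uses the common convention S3 = 2*Im(conj(ex)*ey); some texts flip the sign.
          """
          s0 = abs(ex)**2 + abs(ey)**2
          s1 = abs(ex)**2 - abs(ey)**2
          s2 = 2.0 * np.real(np.conj(ex) * ey)
          s3 = 2.0 * np.imag(np.conj(ex) * ey)
          return np.array([s0, s1, s2, s3])

      # 45-degree linear polarization -> S = [1, 0, 1, 0]
      print(stokes(1/np.sqrt(2), 1/np.sqrt(2)))
      # Circular polarization (handedness depends on convention) -> S = [1, 0, 0, 1]
      print(stokes(1/np.sqrt(2), 1j/np.sqrt(2)))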

  4. High-energy manifestations of heavy quarks in axial-vector neutral currents

    International Nuclear Information System (INIS)

    Kizukuri, Y.; Ohba, I.; Okano, K.; Yamanaka, Y.

    1981-01-01

    A recent work by Collins, Wilczek, and Zee has attempted to manifest the incompleteness of the decoupling theorem in the axial-vector neutral currents at low energies. In the spirit of their work, we calculate corrections of the axial-vector neutral currents by virtual-heavy-quark exchange in the high-energy e + e - processes and estimate some observable quantities sensitive to virtual-heavy-quark masses which may be compared with experimental data at LEP energies

  5. Temporal and spatial performance of vector velocity imaging in the human fetal heart.

    Science.gov (United States)

    Matsui, H; Germanakis, I; Kulinskaya, E; Gardiner, H M

    2011-02-01

    To assess the spatial and temporal performance of fetal myocardial speckle tracking, using high-frame-rate (HFR) storing and Lagrangian strain analysis. Dummy electrocardiographic signaling permitted DICOM HFR in 124 normal fetuses and paired low-frame-rate (LFR) video storing at 25 Hz in 93 of them. Vector velocity imaging (VVI) tracking co-ordinates were used to compare time and spatial domain measures. We compared tracking success, Lagrangian strain, peak diastolic velocity and positive strain rate values in HFR vs. LFR video storing. Further comparisons within an HFR subset included Lagrangian vs. natural strain, VVI vs. M-mode annular displacement, and VVI vs. pulsed-wave tissue Doppler imaging (TDI) peak velocities. HFR (average 79.4 Hz) tracking was more successful than LFR (86 vs. 76%, P = 0.024). Lagrangian and natural HFR strain correlated highly (left ventricle (LV): r = 0.883, P < 0.001; right ventricle (RV): r = 0.792, P < 0.001) but natural strain gave 20% lower values, suggesting reduced reliability of measurement. Lagrangian HFR strain was similar in LV and RV and decreased with gestation (P = 0.015 and P < 0.001, respectively). LV Lagrangian LFR strain was significantly lower than the values for the RV (P < 0.001) and those using paired LV-HFR recordings (P = 0.007). Annular displacement methods correlated highly (LV = 1.046, r = 0.90, P < 0.001; RV = 1.170, r = 0.88, P < 0.001). Early diastolic waves were visible in 95% of TDI, but in only 26% of HFR and 0% of LFR recordings, and HFR-VVI velocities were significantly lower than those for TDI (P < 0.001). Doppler estimation of velocities remains superior to VVI but image gating and use of original co-ordinates should improve offline VVI assessment of fetal myocardial function. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

  6. Combining high productivity with high performance on commodity hardware

    DEFF Research Database (Denmark)

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilize the vector programming model from Num......CIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups.

  7. Effects of Cavity on the Performance of Dual Throat Nozzle During the Thrust-Vectoring Starting Transient Process.

    Science.gov (United States)

    Gu, Rui; Xu, Jinglei

    2014-01-01

    The dual throat nozzle (DTN) technique is capable of achieving higher thrust-vectoring efficiencies than other fluidic techniques, without significantly compromising thrust efficiency during vectoring operation. The excellent performance of the DTN is mainly due to the concave cavity. In this paper, two DTNs of different scales are investigated by unsteady numerical simulations to compare the parameter variations and to study the effects of the cavity during the vector starting process. The results indicate that dynamic loads may be generated during the vector starting process, which is a potentially challenging problem for aircraft trim and control.

  8. A simple vector system to improve performance and utilisation of recombinant antibodies

    Directory of Open Access Journals (Sweden)

    Vincent Karen J

    2006-12-01

    Full Text Available Abstract Background Isolation of recombinant antibody fragments from antibody libraries is well established using technologies such as phage display. Phage display vectors are ideal for efficient display of antibody fragments on the surface of bacteriophage particles. However, they are often inefficient for expression of soluble antibody fragments, and sub-cloning of selected antibody populations into dedicated soluble antibody fragment expression vectors can enhance expression. Results We have developed a simple vector system for expression, dimerisation and detection of recombinant antibody fragments in the form of single chain Fvs (scFvs). Expression is driven by the T7 RNA polymerase promoter in conjunction with the inducible lysogen strain BL21 (DE3). The system is compatible with a simple auto-induction culture system for scFv production. As an alternative to periplasmic expression, expression directly in the cytoplasm of a mutant strain with a more oxidising cytoplasmic environment (Origami 2™ (DE3)) was investigated and found to be inferior to periplasmic expression in BL21 (DE3) cells. The effect on yield and binding activity of fusing scFvs to the N terminus of maltose binding protein (a solubility enhancing partner), bacterial alkaline phosphatase (a naturally dimeric enzymatic reporter molecule), or the addition of a free C-terminal cysteine was determined. Fusion of scFvs to the N-terminus of maltose binding protein increased scFv yield but binding activity of the scFv was compromised. In contrast, fusion to the N-terminus of bacterial alkaline phosphatase led to an improved performance. Alkaline phosphatase provides a convenient tag allowing direct enzymatic detection of scFv fusions within crude extracts without the need for secondary reagents. Alkaline phosphatase also drives dimerisation of the scFv leading to an improvement in performance compared to monovalent constructs. This is illustrated by ELISA, western blot and

  9. High Frame-Rate Blood Vector Velocity Imaging Using Plane Waves: Simulations and Preliminary Experiments

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov

    2008-01-01

    ) The ultrasound is not focused during the transmissions of the ultrasound signals; 2) A 13-bit Barker code is transmitted simultaneously from each transducer element; and 3) The 2-D vector velocity of the blood is estimated using 2-D cross-correlation. A parameter study was performed using the Field II program......, and performance of the method was investigated when a virtual blood vessel was scanned by a linear array transducer. An improved parameter set for the method was identified from the parameter study, and a flow rig measurement was performed using the same improved setup as in the simulations. Finally, the common...... carotid artery of a healthy male was scanned with a scan sequence that satisfies the limits set by the Food and Drug Administration. Vector velocity images were obtained with a frame-rate of 100 Hz where 40 speckle images are used for each vector velocity image. It was found that the blood flow...
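
    The core estimator named in point 3 is a 2-D cross-correlation between consecutive speckle images: the location of the correlation peak gives the inter-frame displacement, which the frame rate converts into a velocity. The sketch below shows that principle on a synthetic speckle pattern with an assumed frame rate and pixel pitch; it omits the Barker-coded plane-wave emissions and all beamforming described in the record.

      import numpy as np
      from scipy.signal import correlate2d

      def displacement_2d(frame0, frame1):
          """Estimate the (dy, dx) shift of the pattern from frame0 to frame1 (pixels)
          from the peak of the 2-D cross-correlation of the mean-removed frames."""
          c = correlate2d(frame1 - frame1.mean(), frame0 - frame0.mean(), mode="full")
          peak = np.unravel_index(np.argmax(c), c.shape)
          dy = peak[0] - (frame0.shape[0] - 1)
          dx = peak[1] - (frame0.shape[1] - 1)
          return dy, dx

      # Synthetic speckle pattern that moves by (dy, dx) = (-2, +1) pixels between frames.
      rng = np.random.default_rng(0)
      big = rng.standard_normal((80, 80))
      f0 = big[10:50, 10:50]
      f1 = big[12:52, 9:49]
      dy, dx = displacement_2d(f0, f1)

      frame_rate_hz, pixel_mm = 100.0, 0.1   # assumed frame rate and pixel pitch
      print(dy, dx, "-> velocity [mm/s]:",
            dy * pixel_mm * frame_rate_hz, dx * pixel_mm * frame_rate_hz)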

  10. High performance proton accelerators

    International Nuclear Information System (INIS)

    Favale, A.J.

    1989-01-01

    In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from the Los Alamos National Laboratory (LANL) physics and specifications to a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD Program, Grumman has the responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. LANL scientists have reviewed the physics design, as has a USA SDC review board. 9 figs

  11. Transient gene transfer to neurons and glia : analysis of adenoviral vector performance in the CNS and PNS

    NARCIS (Netherlands)

    Hermens, W.T.J.M.C.; Giger, Roman J; Holtmaat, Anthony J D G; Dijkhuizen, Paul A; Houweling, D A; Verhaagen, J

    In this paper a detailed protocol is presented for neuroscientists planning to start work on first generation recombinant adenoviral vectors as gene transfer agents for the nervous system. The performance of a prototype adenoviral vector encoding the bacterial lacZ gene as a reporter was studied,

  12. VISPA2: a scalable pipeline for high-throughput identification and annotation of vector integration sites.

    Science.gov (United States)

    Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio

    2017-11-25

    Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires both highly accurate and efficient computational software able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis with the following features: (1) the sequence analysis for the integration site processing is fully compliant with paired-end reads and includes a sequence quality filter before and after the alignment on the target genome; (2) a heuristic algorithm to reduce false positive integration sites at the nucleotide level and thus the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as the researcher front-end to perform integration site analyses without computational skills; (5) the time speedup of all steps through parallelization (Hadoop free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a > 6-fold speedup and improved precision and recall metrics (1 and 0.97 respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for
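
    The precision and recall figures quoted above follow the usual definition for integration-site calling: a reported site counts as correct if a ground-truth site lies nearby on the same chromosome. The sketch below shows one such evaluation with an assumed matching tolerance; it is not VISPA2's own benchmarking code, and the tolerance rule is an illustrative assumption.

      def precision_recall(found, truth, tol=3):
          """Precision/recall of integration-site calls.

          found, truth: iterables of (chromosome, position) tuples.  A call matches
          if a site on the same chromosome lies within `tol` bases.  The tolerance
          and matching rule are illustrative assumptions, not VISPA2's criteria.
          """
          def matches(site, pool):
              chrom, pos = site
              return any(c == chrom and abs(p - pos) <= tol for c, p in pool)

          tp_found = sum(matches(s, truth) for s in found)
          tp_truth = sum(matches(s, found) for s in truth)
          precision = tp_found / len(found) if found else 0.0
          recall = tp_truth / len(truth) if truth else 0.0
          return precision, recall

      truth = [("chr1", 1000), ("chr2", 5000), ("chr3", 250)]
      found = [("chr1", 1002), ("chr2", 5000)]
      print(precision_recall(found, truth))   # (1.0, 0.666...)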

  13. Performance evaluation of spatial vector routing protocol for wireless sensor networks

    International Nuclear Information System (INIS)

    Baloch, J.; Jokhio, I.

    2012-01-01

    WSNs (Wireless Sensor Networks) are an emerging area of research. Researchers worldwide are working on the issues faced by sensor nodes. Communication has been a major issue in wireless networks and the problem is manifold in WSNs because of the limited resources. The routing protocol in such networks plays a pivotal role, as an effective routing protocol could significantly reduce the energy consumed in transmitting and receiving data packets throughout a network. In this paper the performance of SVR (Spatial Vector Routing), an energy-efficient, location-aware routing protocol, is compared with the existing location-aware protocols. The results from the simulation trials show the performance of SVR. (author)

  14. Impact of Health Care Employees’ Job Satisfaction on Organizational Performance Support Vector Machine Approach

    Directory of Open Access Journals (Sweden)

    CEMIL KUZEY

    2018-01-01

    Full Text Available This study is undertaken to search for key factors that contribute to job satisfaction among health care workers, and also to determine the impact of these underlying dimensions of employee satisfaction on organizational performance. Exploratory Factor Analysis (EFA) is applied to initially uncover the key factors, and then, in the next stage of analysis, a popular data mining technique, Support Vector Machine (SVM), is employed on a sample of 249 to determine the impact of job satisfaction factors on organizational performance. According to the proposed model, the main factors are revealed to be management’s attitude, pay/reward, job security and colleagues.
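
    As a rough illustration of the second analysis stage, the sketch below trains a linear-kernel SVM on synthetic stand-in data (four factor scores per respondent and a binary performance label) and reads the absolute weights as a crude indicator of factor importance. The data, kernel and hyper-parameters are assumptions for the example, not the study's actual setup.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for the survey: four factor scores per respondent
      # (management attitude, pay/reward, job security, colleagues) and a binary
      # "high organizational performance" label.  Entirely illustrative data.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(249, 4))
      w_true = np.array([0.9, 0.6, 0.4, 0.2])          # assumed factor weights
      y = (X @ w_true + 0.3 * rng.normal(size=249) > 0).astype(int)

      clf = SVC(kernel="linear", C=1.0)
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
      clf.fit(X, y)
      print("factor weights (|coef| as rough importance):", np.abs(clf.coef_[0]))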

  15. Performance Evaluation of Spatial Vector Routing Protocol for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Javed Ali Baloch

    2012-10-01

    Full Text Available WSNs (Wireless Sensor Networks) are an emerging area of research. Researchers worldwide are working on the issues faced by sensor nodes. Communication has been a major issue in wireless networks and the problem is manifold in WSNs because of the limited resources. The routing protocol in such networks plays a pivotal role, as an effective routing protocol could significantly reduce the energy consumed in transmitting and receiving data packets throughout a network. In this paper the performance of SVR (Spatial Vector Routing), an energy-efficient, location-aware routing protocol, is compared with the existing location-aware protocols. The results from the simulation trials show the performance of SVR.

  16. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-23

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
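
    The dense-block structure the paper exploits can be pictured with a block-sparse (BSR) matrix: small dense b-by-b blocks scattered on a sparse block pattern, multiplied block-by-block against the corresponding slices of the vector. The CPU sketch below uses SciPy's BSR container only to illustrate that layout and to check the product; it does not reproduce the paper's CUDA/KBLAS kernel or its load-balancing scheme.

      import numpy as np
      import scipy.sparse as sp

      # Multi-species Jacobians have dense b-by-b blocks on a sparse block pattern.
      # A block-tridiagonal pattern is assumed here purely for illustration.
      b, n_blocks = 4, 6                       # block size and number of block rows
      rng = np.random.default_rng(0)

      dense = np.zeros((n_blocks * b, n_blocks * b))
      for i in range(n_blocks):
          for j in (i - 1, i, i + 1):
              if 0 <= j < n_blocks:
                  dense[i*b:(i+1)*b, j*b:(j+1)*b] = rng.standard_normal((b, b))

      A = sp.bsr_matrix(dense, blocksize=(b, b))    # keeps the dense-block layout
      x = rng.standard_normal(n_blocks * b)
      y = A @ x                                     # block-sparse SpMV
      print(A.blocksize, np.allclose(y, dense @ x)) # (4, 4) True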

  17. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.; Dongarra, Jack

    2016-01-01

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.

  18. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  19. A structural modification of the two dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo.

    1985-02-01

    Although the two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI as an optimized scalar computer code, the demand for more efficient code usage arising from recent trends such as high burn-up and load-follow operation calls for a further stage of modification. A principal aim of the modification is to transform the already implemented scalar subroutines into vectorized forms so that the programme runs efficiently on high-speed vector computers. This structural modification has been completed successfully. Two benchmark tests subsequently performed to examine the effect of the modification led us to the following conclusions: (1) In the first benchmark test, three comparatively high-burnup fuel rods irradiated under HBWR, BWR, and PWR conditions were prepared. For all cases, the net computing time consumed by the vectorized FEMAXI-III was approximately 50 % less than that consumed by the original code. (2) In the second benchmark test, a total of 26 PWR fuel rods irradiated to burn-ups of 13-30 MWd/kgU and subsequently power ramped in the R2 reactor, Sweden, was prepared. Here the code was used to construct an envelope of the PCI-failure threshold through 26 code runs. To reach the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min, while the original FEMAXI-III consumed 36 min. (3) The gains from this structural modification are attributed mainly to savings in the net computing time of the mechanical calculation in the vectorized FEMAXI-III code. (author)

  20. Towards artificial intelligence based diesel engine performance control under varying operating conditions using support vector regression

    Directory of Open Access Journals (Sweden)

    Naradasu Kumar Ravi

    2013-01-01

    Full Text Available Diesel engine designers are constantly on the look-out for performance enhancement through efficient control of operating parameters. In this paper, the concept of an intelligent engine control system is proposed that seeks to ensure optimized performance under varying operating conditions. The concept is based on arriving at the optimum engine operating parameters to ensure the desired output in terms of efficiency. In addition, a Support Vector Machines based prediction model has been developed to predict the engine performance under varying operating conditions. Experiments were carried out at varying loads, compression ratios and amounts of exhaust gas recirculation using a variable compression ratio diesel engine for data acquisition. It was observed that the SVM model was able to predict the engine performance accurately.
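
    A minimal example of the kind of regression model described above is sketched below: a support vector regressor mapping load, compression ratio and EGR fraction to an efficiency figure. The training data are synthetic and the kernel and hyper-parameters are assumptions; the paper's experimental data and chosen settings are not reproduced.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Synthetic stand-in for the experiments: predict an efficiency figure from
      # load, compression ratio and EGR fraction.  All numbers are invented.
      rng = np.random.default_rng(2)
      load = rng.uniform(0, 100, 200)          # load, %
      cr   = rng.uniform(14, 18, 200)          # compression ratio
      egr  = rng.uniform(0, 0.2, 200)          # EGR fraction
      X = np.column_stack([load, cr, egr])
      bte = (0.20 + 0.0015 * load + 0.01 * (cr - 14) - 0.15 * egr
             + 0.005 * rng.standard_normal(200))   # synthetic efficiency

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
      model.fit(X, bte)
      print("predicted efficiency at 75% load, CR 17, 5% EGR:",
            model.predict([[75.0, 17.0, 0.05]])[0])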

  1. An Underwater Acoustic Vector Sensor with High Sensitivity and Broad Band

    Directory of Open Access Journals (Sweden)

    Hu Zhang

    2014-05-01

    Full Text Available Recently, acoustic vector sensors that use accelerometers as sensing elements have been widely used in underwater acoustic engineering, but their sensitivity in the low frequency band is usually lower than -220 dB. In this paper, using an optimized piezoelectric trilaminar low-frequency sensing element, we designed a high-sensitivity, internally placed ICP piezoelectric accelerometer as the sensing element. Through structure optimization, we made a high-sensitivity, broadband, small-scale vector sensor. The working band is 10-2000 Hz, the sound pressure sensitivity is -185 dB (at 100 Hz), the outer diameter is 42 mm, and the length is 80 mm.

  2. Strategies to generate high-titer, high-potency recombinant AAV3 serotype vectors

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2016-01-01

    Full Text Available Although recombinant adeno-associated virus serotype 3 (AAV3) vectors were largely ignored previously, owing to their poor transduction efficiency in most cells and tissues examined, our initial observation of the selective tropism of AAV3 serotype vectors for human liver cancer cell lines and primary human hepatocytes has led to renewed interest in this serotype. AAV3 vectors and their variants have recently proven to be extremely efficient in targeting human and nonhuman primate hepatocytes in vitro as well as in vivo. In the present studies, we wished to evaluate the relative contributions of the cis-acting inverted terminal repeats (ITRs) from AAV3 (ITR3), as well as the trans-acting Rep proteins from AAV3 (Rep3), in the AAV3 vector production and transduction. To this end, we utilized two helper plasmids: pAAVr2c3, which carries rep2 and cap3 genes, and pAAVr3c3, which carries rep3 and cap3 genes. The combined use of AAV3 ITRs, AAV3 Rep proteins, and AAV3 capsids led to the production of recombinant vectors, AAV3-Rep3/ITR3, with up to approximately two to fourfold higher titers than AAV3-Rep2/ITR2 vectors produced using AAV2 ITRs, AAV2 Rep proteins, and AAV3 capsids. We also observed that the transduction efficiency of Rep3/ITR3 AAV3 vectors was approximately fourfold higher than that of Rep2/ITR2 AAV3 vectors in human hepatocellular carcinoma cell lines in vitro. The transduction efficiency of Rep3/ITR3 vectors was increased by ∼10-fold, when AAV3 capsids containing mutations in two surface-exposed residues (serine 663 and threonine 492) were used to generate a S663V+T492V double-mutant AAV3 vector. The Rep3/ITR3 AAV3 vectors also transduced human liver tumors in vivo approximately twofold more efficiently than those generated with Rep2/ITR2. Our data suggest that the transduction efficiency of AAV3 vectors can be significantly improved both using homologous Rep proteins and ITRs as well as by capsid optimization. Thus, the combined use of

  3. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  4. Improving matrix-vector product performance and multi-level preconditioning for the parallel PCG package

    Energy Technology Data Exchange (ETDEWEB)

    McLay, R.T.; Carey, G.F.

    1996-12-31

    In this study we consider parallel solution of sparse linear systems arising from discretized PDEs. As part of our continuing work on our parallel PCG Solver package, we have made improvements in two areas. The first is improving the performance of the matrix-vector product. Here on regular finite-difference grids, we are able to use the cache memory more efficiently for smaller domains or where there are multiple degrees of freedom. The second problem of interest in the present work is the construction of preconditioners in the context of the parallel PCG solver we are developing. Here the problem is partitioned over a set of processor subdomains and the matrix-vector product for PCG is carried out in parallel for overlapping grid subblocks. For problems of scaled speedup, the actual rate of convergence of the unpreconditioned system deteriorates as the mesh is refined. Multigrid and subdomain strategies provide a logical approach to resolving the problem. We consider the parallel trade-offs between communication and computation and provide a complexity analysis of a representative algorithm. Some preliminary calculations using the parallel package and comparisons with other preconditioners are provided together with parallel performance results.
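
    The two kernels discussed above, the matrix-vector product and the preconditioner application, are the only operations inside the PCG iteration that touch the matrix, which is why they dominate the parallel cost. The serial sketch below marks where they enter the iteration; it uses a 1-D Poisson matrix and a Jacobi preconditioner purely as a stand-in, not the package's multi-level preconditioner or its overlapping-subdomain product.

      import numpy as np

      def pcg(A, b, M_inv, tol=1e-8, maxit=200):
          """Preconditioned conjugate gradients (serial sketch)."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for _ in range(maxit):
              Ap = A @ p                      # the matrix-vector product to optimize
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv(r)                    # apply the preconditioner
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # 1-D Poisson test problem with a Jacobi (diagonal) preconditioner.
      n = 100
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = pcg(A, b, lambda r: r / np.diag(A))
      print(np.linalg.norm(A @ x - b))        # small residual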

  5. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  6. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Bechsgaard, Thor

    2016-01-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis vie...

  7. Alterations to the orientation of the ground reaction force vector affect sprint acceleration performance in team sports athletes.

    Science.gov (United States)

    Bezodis, Neil E; North, Jamie S; Razavet, Jane L

    2017-09-01

    A more horizontally oriented ground reaction force vector is related to higher levels of sprint acceleration performance across a range of athletes. However, the effects of acute experimental alterations to the force vector orientation within athletes are unknown. Fifteen male team sports athletes completed maximal effort 10-m accelerations in three conditions following different verbal instructions intended to manipulate the force vector orientation. Ground reaction forces (GRFs) were collected from the step nearest 5-m and stance leg kinematics at touchdown were also analysed to understand specific kinematic features of touchdown technique which may influence the consequent force vector orientation. Magnitude-based inferences were used to compare findings between conditions. There was a likely more horizontally oriented ground reaction force vector and a likely lower peak vertical force in the control condition compared with the experimental conditions. 10-m sprint time was very likely quickest in the control condition which confirmed the importance of force vector orientation for acceleration performance on a within-athlete basis. The stance leg kinematics revealed that a more horizontally oriented force vector during stance was preceded at touchdown by a likely more dorsiflexed ankle, a likely more flexed knee, and a possibly or likely greater hip extension velocity.
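
    The orientation of the ground reaction force vector is typically summarized as the angle of the resultant force away from vertical, computed from the horizontal and vertical components. The sketch below shows that calculation with made-up stance-phase forces; the study's exact definition and averaging window may differ.

      import numpy as np

      def grf_orientation_deg(f_horizontal, f_vertical):
          """Angle of the resultant GRF vector forward of vertical, in degrees."""
          return np.degrees(np.arctan2(f_horizontal, f_vertical))

      # Made-up stance-phase mean forces (N) for illustration only.
      print(grf_orientation_deg(300.0, 1200.0))   # ~14.0 deg: more horizontally oriented
      print(grf_orientation_deg(200.0, 1300.0))   # ~8.7 deg: more vertically oriented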

  8. Performance of a novel micro force vector sensor and outlook into its biomedical applications

    Science.gov (United States)

    Meiss, Thorsten; Rossner, Tim; Minamisava Faria, Carlos; Völlmeke, Stefan; Opitz, Thomas; Werthschützky, Roland

    2011-05-01

    For the HapCath system, which provides haptic feedback of the forces acting on a guide wire's tip during vascular catheterization, very small piezoresistive force sensors of 200 × 200 × 640 μm³ have been developed. This paper focuses on the characterization of the measurement performance and on possible new applications. Besides the determination of the dynamic measurement performance, special focus is put on the results of the 3-component force vector calibration. The article describes particularly advantageous characteristics of the sensor as well as the limits of its applicability. Building on these characteristics, the second part of the article demonstrates new applications which the novel force sensor opens up, such as automatic navigation of medical or biological instruments without impacting surrounding tissue, surface roughness evaluation in biomedical systems, needle insertion with tactile or higher-level feedback, or even building tactile hairs for artificial organisms.

  9. Unsteady aerodynamic modeling at high angles of attack using support vector machines

    Directory of Open Access Journals (Sweden)

    Wang Qing

    2015-06-01

    Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack bears great difficulties in model structure determination and parameter estimation due to little understanding of the flow mechanism. Support vector machines (SVMs), based on statistical learning theory, provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to the field of high-angle-of-attack unsteady aerodynamic modeling. After a review of SVMs, several issues associated with unsteady aerodynamic modeling by use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables and the determination of SVM parameters. Least squares SVM (LS-SVM) models are set up from dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, which indicates the satisfactory learning and generalization performance of LS-SVMs.
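    As a rough illustration of this kind of kernel-based regression, the sketch below fits a generic support vector regression model to input-output pairs and uses it for prediction. It uses scikit-learn's SVR as a stand-in for the LS-SVM formulation in the record; the synthetic data, kernel choice and hyper-parameters are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVR

        # X: rows of model inputs (e.g. angle-of-attack and pitch-rate samples),
        # y: the aerodynamic coefficient to be learned (synthetic data here)
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(500, 2))
        y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(500)

        model = SVR(kernel="rbf", C=10.0, epsilon=0.01)   # RBF kernel for nonlinear responses
        model.fit(X, y)                                   # train on wind-tunnel-style samples
        y_pred = model.predict(X[:10])                    # predict responses for new conditions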

  10. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    This paper proposes an algorithm that improves face recognition performance by identifying mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy a similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithms also consider the plot of the summation of the similarity index versus the face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm, with an average accuracy improvement of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems and therefore additionally improves recognition scores.

  11. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    Science.gov (United States)

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated extended Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively.
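    For readers unfamiliar with iterated Kalman filtering, the sketch below shows a generic iterated EKF measurement update, the building block that the AIEKF extends with adaptive noise statistics. The function, its arguments and the fixed iteration count are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def iekf_update(x_pred, P_pred, z, h, H_jac, R, n_iter=5):
            """Generic iterated EKF measurement update (illustrative, not the AIEKF)."""
            x = x_pred.copy()
            for _ in range(n_iter):
                H = H_jac(x)                          # measurement Jacobian at the current iterate
                S = H @ P_pred @ H.T + R              # innovation covariance
                K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
                # re-linearise the measurement model around the current iterate
                x = x_pred + K @ (z - h(x) - H @ (x_pred - x))
            P = (np.eye(len(x)) - K @ H) @ P_pred     # updated state covariance
            return x, P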

  12. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial for developers. This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  13. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  14. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce their storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; discarding them using feature selection is better than compressing noisy and useful dimensions together using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
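    The sketch below illustrates the general idea of supervised importance sorting followed by 1-bit quantization on a feature matrix. The scoring rule (absolute correlation with the labels) and the sign-based binarization are illustrative assumptions, not the paper's exact algorithm.

        import numpy as np

        def select_and_binarize(X_train, y_train, X, k):
            """Rank feature dimensions by a simple supervised score, keep the top k,
            and 1-bit quantize the retained dimensions (illustrative sketch)."""
            Xc = X_train - X_train.mean(axis=0)
            yc = y_train - y_train.mean()
            scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
            keep = np.argsort(scores)[::-1][:k]        # indices of the k most important dimensions
            X_sel = X[:, keep]
            return (X_sel > 0).astype(np.uint8), keep  # sign-based 1-bit codes plus the kept indices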

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high computational speed. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  17. High-Throughput Agrobacterium-mediated Transformation of Medicago Truncatula in Comparison to Two Expression Vectors

    International Nuclear Information System (INIS)

    Sultana, T.; Deeba, F.; Naqvi, S. M. S.

    2016-01-01

    Legumes have long been recalcitrant to efficient Agrobacterium-mediated transformation. The selection of Medicago truncatula as a model legume plant for molecular analysis resulted in the development of efficient Agrobacterium-mediated transformation protocols. In the current study, M. truncatula transformed plants expressing OsRGLP1 were obtained through GATEWAY technology using pGOsRGLP1 (pH7WG2.0=OsRGLP1). The transformation efficiency of this vector was compared with an expression vector from the pCAMBIA series over-expressing the same gene (pCOsRGLP1). A lower proportion of explants generated hygromycin-resistant plantlets with the pGOsRGLP1 vector (18.3 percent) as compared to the pCOsRGLP1 vector (35.5 percent). The transformation efficiency in terms of PCR-positive plants generated was 9.4 percent for pGOsRGLP1 and 21.6 percent for pCOsRGLP1. Furthermore, 24.4 percent of explants generated antibiotic-resistant plantlets on 20 mg l⁻¹ of hygromycin, which was higher than the 12.2 percent obtained on 15 mg l⁻¹ of hygromycin. T₁ progeny analysis indicated that the transgene was inherited in a Mendelian manner. The functionally active status of the transgene was monitored by the high level of superoxide dismutase (SOD) activity in the transformed progeny. (author)

  18. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  19. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  20. Fetal muscle gene transfer is not enhanced by an RGD capsid modification to high-capacity adenoviral vectors.

    Science.gov (United States)

    Bilbao, R; Reay, D P; Hughes, T; Biermann, V; Volpers, C; Goldberg, L; Bergelson, J; Kochanek, S; Clemens, P R

    2003-10-01

    High levels of alpha(v) integrin expression by fetal muscle suggested that vector re-targeting to integrins could enhance adenoviral vector-mediated transduction, thereby increasing safety and efficacy of muscle gene transfer in utero. High-capacity adenoviral (HC-Ad) vectors modified by an Arg-Gly-Asp (RGD) peptide motif in the HI loop of the adenoviral fiber (RGD-HC-Ad) have demonstrated efficient gene transfer through binding to alpha(v) integrins. To test integrin targeting of HC-Ad vectors for fetal muscle gene transfer, we compared unmodified and RGD-modified HC-Ad vectors. In vivo, unmodified HC-Ad vector transduced fetal mouse muscle with four-fold higher efficiency compared to RGD-HC-Ad vector. Confirming that the difference was due to muscle cell autonomous factors and not mechanical barriers, transduction of primary myogenic cells isolated from murine fetal muscle in vitro demonstrated a three-fold better transduction by HC-Ad vector than by RGD-HC-Ad vector. We hypothesized that the high expression level of coxsackievirus and adenovirus receptor (CAR), demonstrated in fetal muscle cells both in vitro and in vivo, was the crucial variable influencing the relative transduction efficiencies of HC-Ad and RGD-HC-Ad vectors. To explore this further, we studied transduction by HC-Ad and RGD-HC-Ad vectors in paired cell lines that expressed alpha(v) integrins and differed only by the presence or absence of CAR expression. The results increase our understanding of factors that will be important for retargeting HC-Ad vectors to enhance gene transfer to fetal muscle.

  1. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes vectorization techniques for Monte Carlo codes on Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine, which contains heavily nested IF tests, runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that supercomputers can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)

  2. Interactive Effects of Southern Rice Black-Streaked Dwarf Virus Infection of Host Plant and Vector on Performance of the Vector, Sogatella furcifera (Homoptera: Delphacidae).

    Science.gov (United States)

    Lei, Wenbin; Liu, Danfeng; Li, Pei; Hou, Maolin

    2014-10-01

    Performance of insect vectors can be influenced by the viruses they transmit, either directly by infection of the vectors or indirectly via infection of the host plants. Southern rice black-streaked dwarf virus (SRBSDV) is a propagative virus transmitted by the white-backed planthopper, Sogatella furcifera (Horváth). To elucidate the influence of SRBSDV on the performance of the white-backed planthopper, life parameters of viruliferous and nonviruliferous white-backed planthoppers fed on rice seedlings infected or noninfected with SRBSDV were measured using a factorial design. Regardless of the infection status of the rice plant host, viruliferous white-backed planthopper nymphs took longer to develop from nymph to adult than did nonviruliferous nymphs. Viruliferous white-backed planthopper females deposited fewer eggs than nonviruliferous females, and both viruliferous and nonviruliferous white-backed planthopper females laid fewer eggs on infected than on noninfected plants. Longevity of white-backed planthopper females was also affected by the infection status of the rice plant and the white-backed planthopper. Nonviruliferous white-backed planthopper females that fed on infected rice plants lived longer than the other three treatment groups. These results indicate that the performance of the white-backed planthopper is affected by SRBSDV either directly (by infection of the white-backed planthopper) or indirectly (by infection of the rice plant). The extended development of viruliferous nymphs and the prolonged life span of nonviruliferous adults on infected plants may increase their likelihood of transmitting virus, which would increase virus spread. © 2014 Entomological Society of America.

  3. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet ○ Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials - Development of the manufacturing technology for the dual-cooled fuel cladding tubes ○ Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology ○ Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core ○ Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  4. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other materials of interest. As a result of the research, we have published 104 papers and have trained six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  5. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    Science.gov (United States)

    Villagómez-Hoyos, Carlos A.; Stuart, Matthias B.; Bechsgaard, Thor; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-04-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis views (PLAX) are obtained, one centred at the aortic valve and another centred at the left ventricle. The acquisition sequence was composed of 3 diverging waves for high frame rate synthetic aperture flow imaging. For verification, a phantom measurement is performed on a transverse straight 5 mm diameter vessel at a depth of 100 mm in a tissue-mimicking phantom. A flow pump produced a 2 ml/s constant flow with a peak velocity of 0.2 m/s. The average estimated flow angle in the ROI was 86.22° +/- 6.66° with a true flow angle of 90°. A relative velocity bias of -39% with a standard deviation of 13% was found. In vivo acquisitions show complex flow patterns in the heart. In the aortic valve view, blood is seen exiting the left ventricle cavity through the aortic valve into the aorta during the systolic phase of the cardiac cycle. In the left ventricle view, blood flow is seen entering the left ventricle cavity through the mitral valve and splitting into two streams when approaching the left ventricle wall. The work presents 2-D velocity estimates of the heart from a non-invasive transthoracic scan. The ability of the method to detect flow regardless of the beam angle could potentially reveal a more complete view of the flow patterns present in the heart.

  6. Generation of High-order Group-velocity-locked Vector Solitons

    OpenAIRE

    Jin, X. X.; Wu, Z. C.; Zhang, Q.; Li, L.; Tang, D. Y.; Shen, D. Y.; Fu, S. N.; Liu, D. M.; Zhao, L. M.

    2015-01-01

    We report numerical simulations on the generation of high-order group-velocity-locked vector solitons (GVLVS) based on the fundamental GVLVS. The high-order GVLVS generated is characterized by a two-humped pulse along one polarization and a single-humped pulse along the orthogonal polarization. The phase difference between the two humps could be 180 degrees. It is found that by appropriately setting the time separation between the two components of the fundamental GVLVS, the high-order GVLVS wit...

  7. Investigating the Magnetic Imprints of Major Solar Eruptions with SDO /HMI High-cadence Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Sun Xudong; Hoeksema, J. Todd; Liu Yang; Chen Ruizhu [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Kazachenko, Maria, E-mail: xudong@Sun.stanford.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

    2017-04-10

    The solar active region photospheric magnetic field evolves rapidly during major eruptive events, suggesting appreciable feedback from the corona. Previous studies of these “magnetic imprints” are mostly based on line of sight only or lower-cadence vector observations; a temporally resolved depiction of the vector field evolution is hitherto lacking. Here, we introduce the high-cadence (90 s or 135 s) vector magnetogram data set from the Helioseismic and Magnetic Imager, which is well suited for investigating the phenomenon. These observations allow quantitative characterization of the permanent, step-like changes that are most pronounced in the horizontal field component (B_h). A highly structured pattern emerges from analysis of an archetypical event, SOL2011-02-15T01:56, where B_h near the main polarity inversion line increases significantly during the earlier phase of the associated flare with a timescale of several minutes, while B_h in the periphery decreases at later times with smaller magnitudes and a slightly longer timescale. The data set also allows effective identification of the “magnetic transient” artifact, where enhanced flare emission alters the Stokes profiles and the inferred magnetic field becomes unreliable. Our results provide insights on the momentum processes in solar eruptions. The data set may also be useful to the study of sunquakes and data-driven modeling of the corona.

  8. Performance of Ferrite Vector Modulators in the LLRF System of the Fermilab HINS 6-Cavity Test

    Energy Technology Data Exchange (ETDEWEB)

    Varghese, Philip [Fermilab; Barnes, Barry [Fermilab; Chase, Brian [Fermilab; Cullerton, Ed [Fermilab; Tan, Cong [Fermilab

    2013-04-01

    The High Intensity Neutrino Source (HINS) 6-cavity test is a part of the Fermilab HINS Linac R&D program for a low energy, high intensity proton H⁻ linear accelerator. One of the objectives of the 6-cavity test is to demonstrate the use of high power RF Ferrite Vector Modulators (FVMs) for independent control of multiple cavities driven by a single klystron. The beamline includes an RFQ and six cavities. The LLRF system provides a primary feedback loop around the RFQ, and the distribution of the regulated klystron output is controlled by secondary learning feed-forward loops on the FVMs for each of the six cavities. The feed-forward loops provide pulse-to-pulse correction to the current waveform profiles of the FVM power supplies to compensate for beam-loading and other disturbances. The learning feed-forward loops are shown to successfully control the amplitude and phase settings for the cavities well within the 1% and 1 degree requirements specified for the system.

  9. Performance improvement of 64-QAM coherent optical communication system by optimizing symbol decision boundary based on support vector machine

    Science.gov (United States)

    Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang

    2018-03-01

    High-order modulation signals are suited for high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For degraded signals, when symbol points overlap on the constellation diagram, the original linear decision boundary can no longer classify the symbols reliably. Therefore, it is advantageous to create an optimum symbol decision boundary for the degraded signals. In this work, we experimentally demonstrated a 64-quadrature-amplitude modulation (64-QAM) coherent optical communication system using a support vector machine (SVM) decision boundary algorithm to create the optimum symbol decision boundary and thereby improve system performance. We investigated the influence of various impairments on 64-QAM coherent optical communication systems, such as those caused by modulator nonlinearity, phase skew between the in-phase (I) and quadrature-phase (Q) arms of the modulator, fiber Kerr nonlinearity and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in back-to-back and 50-km transmission configurations. By using the SVM to optimize the symbol decision boundary, the impairments caused by the I/Q phase skew of the modulator, fiber Kerr nonlinearity and ASE noise are greatly mitigated.
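    The sketch below illustrates the general technique of learning nonlinear symbol decision boundaries from known training symbols with an SVM classifier. It uses a 16-point constellation and additive Gaussian noise purely for brevity; the constellation size, noise model and hyper-parameters are illustrative assumptions, not the authors' experimental pipeline.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        ideal = np.array([[i, q] for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)], dtype=float)
        labels = np.repeat(np.arange(len(ideal)), 200)                      # known training symbols
        rx = ideal[labels] + 0.35 * rng.standard_normal((labels.size, 2))   # distorted received IQ samples

        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(rx, labels)      # RBF kernel bends the boundaries
        decided = clf.predict(rx)                                           # symbol decisions
        print("training symbol error rate:", np.mean(decided != labels))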

  10. High-quality and interactive animations of 3D time-varying vector fields.

    Science.gov (United States)

    Helgeland, Anders; Elboth, Thomas

    2006-01-01

    In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the perceptual issues that arise when visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field, such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
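    As a minimal illustration of tracking particles along their path lines in a time-varying field, the sketch below performs one fourth-order Runge-Kutta advection step for a set of seed particles. The function signature and the choice of RK4 are assumptions for illustration; the paper's renderer and seeding strategy are not reproduced here.

        import numpy as np

        def advect(particles, velocity, t, dt):
            """Advance (N, 3) particle positions one RK4 step through a
            time-varying velocity field velocity(points, t) -> (N, 3)."""
            k1 = velocity(particles, t)
            k2 = velocity(particles + 0.5 * dt * k1, t + 0.5 * dt)
            k3 = velocity(particles + 0.5 * dt * k2, t + 0.5 * dt)
            k4 = velocity(particles + dt * k3, t + dt)
            return particles + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)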

  11. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  12. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  13. Equivalent Vectors

    Science.gov (United States)

    Levine, Robert

    2004-01-01

    The cross-product is a mathematical operation that is performed between two 3-dimensional vectors. The result is a vector that is orthogonal or perpendicular to both of them. Learning about this for the first time in Calculus III, the class was taught that if A×B = A×C, it does not necessarily follow that B = C. This seemed baffling. The…

  14. Performance Comparison Between Support Vector Regression and Artificial Neural Network for Prediction of Oil Palm Production

    Directory of Open Access Journals (Sweden)

    Mustakim Mustakim

    2016-02-01

    The largest oil-palm-producing region in Indonesia has an important role in improving the welfare of society and the economy. Oil palm production in Riau Province has increased significantly in every period; to determine its development over the next few years, given the functions and benefits of oil palm, production was predicted from time series data for the last 8 years (2005-2013). The prediction was implemented by comparing the performance of the Support Vector Regression (SVR) method and an Artificial Neural Network (ANN). In the experiments, SVR produced the better model, indicated by a correlation coefficient of 95% and an MSE of 6% with the Radial Basis Function (RBF) kernel, whereas ANN achieved only 74% for R² and 9% for MSE in its 8th experiment with 20 hidden neurons and a learning rate of 0.1. The SVR model generates predictions for the next 3 years that increase by between 3% and 6% relative to the actual data and the RBF model predictions.

  15. A model for soft high-energy scattering: Tensor pomeron and vector odderon

    Energy Technology Data Exchange (ETDEWEB)

    Ewerz, Carlo, E-mail: C.Ewerz@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung, Planckstraße 1, D-64291 Darmstadt (Germany); Maniatis, Markos, E-mail: mmaniatis@ubiobio.cl [Departamento de Ciencias Básicas, Universidad del Bío-Bío, Avda. Andrés Bello s/n, Casilla 447, Chillán 3780000 (Chile); Nachtmann, Otto, E-mail: O.Nachtmann@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg (Germany)

    2014-03-15

    A model for soft high-energy scattering is developed. The model is formulated in terms of effective propagators and vertices for the exchange objects: the pomeron, the odderon, and the reggeons. The vertices are required to respect standard rules of QFT. The propagators are constructed taking into account the crossing properties of amplitudes in QFT and the power-law ansätze from the Regge model. We propose to describe the pomeron as an effective spin 2 exchange. This tensor pomeron gives, at high energies, the same results for the pp and pp̄ elastic amplitudes as the standard Donnachie–Landshoff pomeron. But with our tensor pomeron it is much more natural to write down effective vertices of all kinds which respect the rules of QFT. This is particularly clear for the coupling of the pomeron to particles carrying spin, for instance vector mesons. We describe the odderon as an effective vector exchange. We emphasise that with a tensor pomeron and a vector odderon the corresponding charge-conjugation relations are automatically fulfilled. We compare the model to some experimental data, in particular to data for the total cross sections, in order to determine the model parameters. The model should provide a starting point for a general framework for describing soft high-energy reactions. It should give to experimentalists an easily manageable tool for calculating amplitudes for such reactions and for obtaining predictions which can be compared in detail with data. -- Highlights: •A general model for soft high-energy hadron scattering is developed. •The pomeron is described as effective tensor exchange. •Explicit expressions for effective reggeon–particle vertices are given. •Reggeon–particle and particle–particle vertices are related. •All vertices respect the standard C parity and crossing rules of QFT.

  16. High performance MEAs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-15

    The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. The project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve the fundamental understanding of multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, fabricated using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, and this has been demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approaches and the progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  17. High Performance Proactive Digital Forensics

    International Nuclear Information System (INIS)

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  18. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    OpenAIRE

    Tian, Xinmin; Saito, Hideki; Preis, Serguei V.; Garcia, Eric N.; Kozhukhov, Sergey S.; Masten, Matt; Cherkasov, Aleksei G.; Panchenko, Nikolay

    2015-01-01

    Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A ...

  19. A novel and highly efficient production system for recombinant adeno-associated virus vector.

    Science.gov (United States)

    Wu, Zhijian; Wu, Xiaobing; Cao, Hui; Dong, Xiaoyan; Wang, Hong; Hou, Yunde

    2002-02-01

    Recombinant adeno-associated virus (rAAV) has proven to be a promising gene delivery vector for human gene therapy. However, its application has been limited by the difficulty of obtaining sufficient quantities of high-titer vector stocks. In this paper, a novel and highly efficient production system for rAAV is described. A recombinant herpes simplex virus type 1 (rHSV-1) designated HSV1-rc/DeltaUL2, which expressed adeno-associated virus type 2 (AAV-2) Rep and Cap proteins, was constructed previously. The data confirmed that its functions were to support rAAV replication and packaging, and that the generated rAAV was infectious. Meanwhile, an rAAV proviral cell line designated BHK/SG2, which carried the green fluorescent protein (GFP) gene expression cassette, was established by transfecting BHK-21 cells with the rAAV vector plasmid pSNAV-2-GFP. Infecting BHK/SG2 with HSV1-rc/DeltaUL2 at an MOI of 0.1 resulted in optimal yields of rAAV, reaching 250 transducing units (TU) or 4.28×10⁴ particles per cell. Therefore, compared with the conventional transfection method, the yield of rAAV using this "one proviral cell line, one helper virus" strategy was increased by two orders of magnitude. Large-scale production of rAAV can be easily achieved using this strategy and might meet the demands of clinical trials of rAAV-mediated gene therapy.

  20. High performance light water reactor

    International Nuclear Information System (INIS)

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  1. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
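    The sketch below mimics, in plain Python with NumPy, the data flow the record describes: load a vector of complex values, splat a single complex value across a second register-like array, and accumulate the real/imaginary cross products into a partial result. The record concerns hardware vector instructions; this code is only an illustrative analogue.

        import numpy as np

        a = np.array([1.0 + 2.0j, 3.0 - 1.0j, -2.0 + 0.5j, 0.5 + 0.5j])  # first vector operand
        b = np.full_like(a, 0.5 - 0.25j)       # "load and splat": replicate one complex value

        # cross multiply add: real and imaginary partial products accumulated separately
        acc = np.zeros_like(a)
        acc += (a.real * b.real - a.imag * b.imag) \
             + 1j * (a.real * b.imag + a.imag * b.real)
        # acc now holds one partial product of a complex matrix multiplication; further
        # partial products would be accumulated into it before storing the result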

  2. GPU Accelerated Vector Median Filter

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in terms of distances. General purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
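    For reference, the vector median of a window is the pixel whose summed distance to all other pixels in the window is minimal. The sketch below is a plain NumPy version of that definition for a single window; it is a CPU reference for clarity, not the CUDA kernel discussed in the record.

        import numpy as np

        def vector_median(window_pixels):
            """window_pixels: (m, 3) array of RGB vectors from one n x n window (m = n*n).
            Returns the pixel minimizing the sum of Euclidean distances to the others."""
            diff = window_pixels[:, None, :] - window_pixels[None, :, :]
            dist = np.sqrt((diff ** 2).sum(axis=2)).sum(axis=1)
            return window_pixels[np.argmin(dist)]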

  3. Comparative performance of three experimental hut designs for measuring malaria vector responses to insecticides in Tanzania.

    Science.gov (United States)

    Massue, Dennis J; Kisinza, William N; Malongo, Bernard B; Mgaya, Charles S; Bradley, John; Moore, Jason D; Tenu, Filemoni F; Moore, Sarah J

    2016-03-15

    Experimental huts are simplified, standardized representations of human habitations that provide model systems to evaluate insecticides used in indoor residual spraying (IRS) and long-lasting insecticidal nets (LLINs) to kill disease vectors. Hut volume, construction materials and the size of entry points impact mosquito entry and exposure to insecticides. The performance of three standard experimental hut designs was compared to evaluate insecticides used in LLINs. Field studies were conducted at the World Health Organization Pesticide Evaluation Scheme (WHOPES) testing site in Muheza, Tanzania. Three East African huts, three West African huts, and three Ifakara huts were compared using Olyset® and Permanet 2.0® versus untreated nets as a control. Outcomes measured were mortality, induced exophily (exit rate), blood feeding inhibition and deterrence (entry rate). Data were analysed using linear mixed effect regression and Bland-Altman comparison of paired differences. A total of 613 mosquitoes were collected in 36 nights, of which 13.5% were Anopheles gambiae sensu lato, 21% Anopheles funestus sensu stricto, 38% Mansonia species and 28% Culex species. Ifakara huts caught three times more mosquitoes than the East African and West African huts, while the West African huts caught significantly fewer mosquitoes than the other hut types. Mosquito densities were low; very little mosquito exit was measured in any of the huts, with no measurable exophily caused by the use of either Olyset or Permanet. When the huts were directly compared, the West African huts measured greater exophily than the other huts. As unholed nets were used in the experiments and few mosquitoes were captured, it was not possible to measure differences in feeding success either between treatments or hut types. In each of the hut types there was increased mortality when Permanet or Olyset were present inside the huts compared to the control; however, this did not vary between the hut types. Both East African

  4. Development of high performance cladding

    International Nuclear Information System (INIS)

    Kiuchi, Kiyoshi

    2003-01-01

    The development of superior next-generation light water reactors is required, on the basis of general viewpoints such as improved safety, economics, reduced radioactive waste and effective utilization of plutonium, by 2030, when conventional reactor plants should be renovated. At the Japan Atomic Energy Research Institute, work is carried out on improving stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, on developing manufacturing technology for the reduced moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and on researching water-materials interaction in the supercritical pressure water-cooled reactor. Stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR). Austenitic stainless steel offers superior irradiation resistance, corrosion resistance and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the reduced moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide irradiation resistance, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in the LMFBR are intended to obtain data on the irradiation damage of the cladding materials. (M. Suetake)

  5. Comparative performance of a modified change vector analysis in forest change detection

    NARCIS (Netherlands)

    Nackaerts, Kris; Vaesen, K.; Muys, Bart; Coppin, P.

    2005-01-01

    Sustainable forest management requires accurate and up-to-date information, which can nowadays be obtained using digital earth observation technology. This paper introduces a modified change vector analysis (mCVA) approach and conceptually contrasts it against traditional CVA. The results of a

  6. Vector soup: high-throughput identification of Neotropical phlebotomine sand flies using metabarcoding.

    Science.gov (United States)

    Kocher, Arthur; Gantier, Jean-Charles; Gaborit, Pascal; Zinger, Lucie; Holota, Helene; Valiere, Sophie; Dusfour, Isabelle; Girod, Romain; Bañuls, Anne-Laure; Murienne, Jerome

    2017-03-01

    Phlebotomine sand flies are haematophagous dipterans of primary medical importance. They represent the only proven vectors of leishmaniasis worldwide and are involved in the transmission of various other pathogens. Studying the ecology of sand flies is crucial to understand the epidemiology of leishmaniasis and further control this disease. A major limitation in this regard is that traditional morphological-based methods for sand fly species identifications are time-consuming and require taxonomic expertise. DNA metabarcoding holds great promise in overcoming this issue by allowing the identification of multiple species from a single bulk sample. Here, we assessed the reliability of a short insect metabarcode located in the mitochondrial 16S rRNA for the identification of Neotropical sand flies, and constructed a reference database for 40 species found in French Guiana. Then, we conducted a metabarcoding experiment on sand flies mixtures of known content and showed that the method allows an accurate identification of specimens in pools. Finally, we applied metabarcoding to field samples caught in a 1-ha forest plot in French Guiana. Besides providing reliable molecular data for species-level assignations of phlebotomine sand flies, our study proves the efficiency of metabarcoding based on the mitochondrial 16S rRNA for studying sand fly diversity from bulk samples. The application of this high-throughput identification procedure to field samples can provide great opportunities for vector monitoring and eco-epidemiological studies. © 2016 John Wiley & Sons Ltd.

  7. A new test for the mean vector in high-dimensional data

    Directory of Open Access Journals (Sweden)

    Knavoot Jiamwattanapong

    2015-08-01

    For testing the mean vector where the data are drawn from a multivariate normal population, the renowned Hotelling's T² test is no longer valid when the dimension of the data equals or exceeds the sample size. In this study, we consider the problem of testing the hypothesis H₀ : μ = μ₀ and propose a new test based on the idea of keeping more information from the sample covariance matrix. The development of the statistic is based on Hotelling's T² distribution, and the new test is invariant under a group of scalar transformations. The asymptotic distribution is derived under the null hypothesis. The simulation results show that the proposed test performs well and is more powerful as the data dimension increases for a given sample size. An analysis of DNA microarray data with the new test is demonstrated.

  8. Performance Improvement of Sensorless Vector Control for Matrix Converter Drives Using PQR Transformation

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2005-01-01

    This paper presents a new method to improve sensorless performance of matrix converter drives using PQR power transformation. The non-linearity of matrix converter drives such as commutation delay, turn-on and turn-off time of switching device, and on-state switching device voltage drop is modelled using PQR transformation and compensated using a reference current control scheme. To eliminate the input current distortion due to the input voltage unbalance, a simple method using PQR transformation is also proposed. The proposed compensation method is applied for high performance induction motor...

  9. Performance improvement of sensorless vector control for matrix converter drives using PQR power theory

    DEFF Research Database (Denmark)

    Lee, Kyo Beum; Blaabjerg, Frede

    2007-01-01

    This paper presents a new method to improve sensorless performance of matrix converter drives using PQR power transformation. The non-linearity of matrix converter drives such as commutation delay, turn-on and turn-off time of switching device, and on-state switching device voltage drop is modelled using PQR transformation and compensated using a reference current control scheme. To eliminate the input current distortion due to the input voltage unbalance, a simple method using PQR transformation is also proposed. The proposed compensation method is applied for high performance induction motor...

  10. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMMᵀ by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
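    The sketch below shows the shape of the SpMM and tall-skinny kernels at the heart of such a block eigensolver, using SciPy's CSR format as a stand-in for the CSB layout discussed above. The matrix size, density and block width are illustrative assumptions.

        import numpy as np
        import scipy.sparse as sp

        n, k = 10000, 16
        A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
        A = A + A.T                         # symmetrize, as for CI Hamiltonians
        X = np.random.rand(n, k)            # tall-skinny block of k vectors

        Y = A @ X                           # SpMM: sparse matrix times multiple vectors
        G = X.T @ Y                         # tall-skinny inner products used by LOBPCG-type iterations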

  11. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-01-01

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
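
    To make the contrast concrete (an illustrative schematic consistent with the description above, not the authors' exact derivation): with t_j denoting the coordinate-wise t-statistics, a diagonal Hotelling-type statistic is essentially \sum_j t_j^2, whereas the likelihood ratio construction leads to a statistic of the form

        \sum_{j=1}^{p} n \log\left(1 + \frac{t_j^2}{n-1}\right),

    i.e. a summation of log-transformed squared t-statistics.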

  12. Differentiation of several interstitial lung disease patterns in HRCT images using support vector machine: role of databases on performance

    Science.gov (United States)

    Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan

    2016-03-01

    Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High Resolution Computed Tomography (HRCT) is considered the best imaging technique for the analysis of different pulmonary disorders. HRCT findings can be categorised into several patterns, viz. consolidation, emphysema, ground glass opacity, nodular and normal, based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians to identify the patterns. Several approaches have been proposed for the classification of ILD patterns, including the computation of textural features and the training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by a performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of the SVM is greater than that of the ANN when the classifier is trained and tested on the same database. The investigation was continued to test the variation in classifier accuracy when training and testing are performed on alternate databases, and when the classifier is trained and tested on a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively, and improves significantly when the classifiers are trained and tested with the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that the SVM outperforms the ANN when the same database is used for training and testing.
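
    A minimal sketch of the train/test protocol described above, written with scikit-learn and with random arrays standing in for the wavelet features of the two ILD databases (all names, sizes and labels below are hypothetical):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      # Random arrays standing in for wavelet feature matrices extracted from two
      # ILD databases (e.g. MedGIFT and a private set); labels encode the patterns
      # (consolidation, emphysema, ground glass opacity, nodular, normal).
      rng = np.random.default_rng(0)
      X_a, y_a = rng.normal(size=(200, 32)), rng.integers(0, 5, 200)
      X_b, y_b = rng.normal(size=(180, 32)), rng.integers(0, 5, 180)

      svm, ann = SVC(kernel="rbf"), MLPClassifier(max_iter=1000)

      # (1) Same-database accuracy: cross-validation within database A.
      for name, clf in [("SVM", svm), ("ANN", ann)]:
          print(name, "within A:", cross_val_score(clf, X_a, y_a, cv=5).mean())

      # (2) Cross-database accuracy: train on A, test on B (accuracy typically drops).
      print("SVM A->B:", SVC(kernel="rbf").fit(X_a, y_a).score(X_b, y_b))

      # (3) Merged databases: pool samples from both sets and cross-validate.
      X_m, y_m = np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
      print("SVM merged:", cross_val_score(SVC(kernel="rbf"), X_m, y_m, cv=5).mean())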

  13. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    Full Text Available The normal vector estimation of the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the challenge posed by the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for the normal vector estimation of LSSPC. We divide the point sets into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a bi-linear interpolation of the normals of the points in each cube is performed. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors and compute normal vectors for the interpolation nodes, which are usually far fewer than the points in the cloud. The experimental results of several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, and the average deviation is less than 0.01 mm.
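
    The interpolation step itself is straightforward; a minimal sketch (only the bi-linear blending of node normals, with the cube-based neighbor search and node construction omitted, and with made-up corner values) is:

      import numpy as np

      def bilinear_normal(n00, n10, n01, n11, u, v):
          # Bi-linearly blend the unit normals at the four corners of a cell
          # (local coordinates u, v in [0, 1]) and renormalize the result.
          n = ((1 - u) * (1 - v) * n00 + u * (1 - v) * n10
               + (1 - u) * v * n01 + u * v * n11)
          return n / np.linalg.norm(n)

      # Example with hypothetical corner normals and a query point at (u, v) = (0.25, 0.5).
      n = bilinear_normal(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 0.99]),
                          np.array([0.0, 0.1, 0.99]), np.array([0.1, 0.1, 0.98]),
                          0.25, 0.5)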

  14. Asymptomatic dogs are highly competent to transmit Leishmania (Leishmania) infantum chagasi to the natural vector.

    Science.gov (United States)

    Laurenti, Márcia Dalastra; Rossi, Claudio Nazaretian; da Matta, Vânia Lúcia Ribeiro; Tomokane, Thaise Yumie; Corbett, Carlos Eduardo Pereira; Secundino, Nágila Francinete Costa; Pimenta, Paulo Filemon Paulocci; Marcondes, Mary

    2013-09-23

    We evaluated the ability of dogs naturally infected with Leishmania (Leishmania) infantum chagasi to transfer the parasite to the vector and the factors associated with transmission. Thirty-eight infected dogs were confirmed to be infected by direct observation of Leishmania in lymph node smears. Dogs were grouped according to external clinical signs and laboratory data into symptomatic (n=24) and asymptomatic (n=14) animals. All dogs were sedated and submitted to xenodiagnosis with F1-laboratory-reared Lutzomyia longipalpis. After blood digestion, sand flies were dissected and examined for the presence of promastigotes. Following canine euthanasia, fragments of skin, lymph nodes, and spleen were collected and processed using immunohistochemistry to evaluate tissue parasitism. Specific antibodies were detected using an enzyme-linked immunosorbent assay. Antibody levels were found to be higher in symptomatic dogs compared to asymptomatic dogs (p=0.0396). Both groups presented amastigotes in lymph nodes, while skin parasitism was observed in only 58.3% of symptomatic and in 35.7% of asymptomatic dogs. Parasites were visualized in the spleens of 66.7% and 71.4% of symptomatic and asymptomatic dogs, respectively. Parasite load varied from mild to intense, and was not significantly different between groups. All asymptomatic dogs except for one (93%) were competent to transmit Leishmania to the vector, including eight (61.5%) without skin parasitism. Sixteen symptomatic animals (67%) infected sand flies; six (37.5%) showed no amastigotes in the skin. Skin parasitism was not crucial for the ability to infect Lutzomyia longipalpis but the presence of Leishmania in lymph nodes was significantly related to a positive xenodiagnosis. Additionally, a higher proportion of infected vectors that fed on asymptomatic dogs was observed (p=0.0494). Clinical severity was inversely correlated with the infection rate of sand flies (p=0.027) and was directly correlated with antibody

  15. A High Performance QDWH-SVD Solver using Hardware Accelerators

    KAUST Repository

    Sukkari, Dalal E.

    2015-04-08

    This paper describes a new high performance implementation of the QR-based Dynamically Weighted Halley Singular Value Decomposition (QDWH-SVD) solver on multicore architecture enhanced with multiple GPUs. The standard QDWH-SVD algorithm was introduced by Nakatsukasa and Higham (SIAM SISC, 2013) and combines three successive computational stages: (1) the polar decomposition calculation of the original matrix using the QDWH algorithm, (2) the symmetric eigendecomposition of the resulting polar factor to obtain the singular values and the right singular vectors and (3) the matrix-matrix multiplication to get the associated left singular vectors. A comprehensive test suite highlights the numerical robustness of the QDWH-SVD solver. Although it performs up to two times more flops when computing all singular vectors compared to the standard SVD solver algorithm, our new high performance implementation on single GPU results in up to 3.8x improvements for asymptotic matrix sizes, compared to the equivalent routines from existing state-of-the-art open-source and commercial libraries. However, when only singular values are needed, QDWH-SVD is penalized by performing up to 14 times more flops. The singular value only implementation of QDWH-SVD on single GPU can still run up to 18% faster than the best existing equivalent routines. Integrating mixed precision techniques in the solver can additionally provide up to 40% improvement at the price of losing few digits of accuracy, compared to the full double precision floating point arithmetic. We further leverage the single GPU QDWH-SVD implementation by introducing the first multi-GPU SVD solver to study the scalability of the QDWH-SVD framework.
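
    For reference, the three stages combine through standard relations (not specific to this implementation): the polar decomposition gives A = U_p H, the symmetric eigendecomposition gives H = V \Sigma V^{\top}, and therefore

        A = (U_p V)\, \Sigma\, V^{\top} = U \Sigma V^{\top}, \qquad U = U_p V,

    so the left singular vectors are recovered by the final matrix-matrix multiplication U_p V.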

  16. The identification of high potential archers based on fitness and motor ability variables: A Support Vector Machine approach.

    Science.gov (United States)

    Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali

    2018-02-01

    The Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, the application of SVM to quantify and discriminate between low- and high-performance athletes in a specific sport has rarely been explored. The present study classified and predicted high- and low-potential archers from a set of fitness and motor ability variables using SVMs trained with different kernel functions. Fifty youth archers with a mean age and standard deviation of 17.0 ± 0.6 years, drawn from various archery programmes, completed a six-arrow shooting score test. Standard fitness and ability measurements, namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength, were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine RBF, medium RBF, and coarse RBF kernel functions were trained on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA), respectively. The linear, quadratic, cubic, and medium RBF kernel models demonstrated excellent classification accuracy of 97.5% with a 2.5% error rate for the prediction of the HPA and the LPA. The findings of this investigation can be valuable to coaches and sports managers for recognising high-potential athletes from a combination of a few selected fitness and motor ability variables, which would consequently save cost, time and effort during a talent identification programme. Copyright © 2017 Elsevier B.V. All rights reserved.
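
    A minimal sketch of the cluster-then-classify pipeline described above, using scikit-learn with random numbers standing in for the measured fitness and motor-ability variables (the data, sizes and kernel choices below are assumptions, not the study's):

      import numpy as np
      from sklearn.cluster import AgglomerativeClustering
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Random numbers standing in for standardized measurements of 50 archers
      # (hand grip, jumps, balance, strength, shooting score); purely illustrative.
      rng = np.random.default_rng(1)
      X = StandardScaler().fit_transform(rng.normal(size=(50, 7)))

      # Step 1: hierarchical agglomerative clustering into two groups, standing in
      # for the HACA step that defines the HPA and LPA labels.
      labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)

      # Step 2: train SVMs with several kernels on the clustered labels and compare
      # cross-validated accuracy ('poly' with degree 2/3 plays the role of the
      # quadratic/cubic kernels mentioned in the record).
      for name, clf in [("linear", SVC(kernel="linear")),
                        ("quadratic", SVC(kernel="poly", degree=2)),
                        ("cubic", SVC(kernel="poly", degree=3)),
                        ("rbf", SVC(kernel="rbf"))]:
          print(name, round(cross_val_score(clf, X, labels, cv=3).mean(), 3))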

  17. INTERIM ANALYSIS OF THE CONTRIBUTION OF HIGH-LEVEL EVIDENCE FOR DENGUE VECTOR CONTROL.

    Science.gov (United States)

    Horstick, Olaf; Ranzinger, Silvia Runge

    2015-01-01

    This interim analysis reviews the available systematic literature for dengue vector control on three levels: 1) single and combined vector control methods, with existing reviews on peridomestic space spraying and on Bacillus thuringiensis israelensis, and further reviews soon to be available on the use of Temephos, copepods and larvivorous fish; 2) vector control for a specific purpose, such as outbreak control; and 3) the strategic level, for example decentralization versus centralization, with a systematic review on vector control organization. Clear best-practice guidelines for the methodology of entomological studies are needed, and dengue transmission data should be included as an outcome measure. The following recommendations emerge: although vector control can be effective, implementation remains an issue; single interventions are probably not useful; combinations of interventions have mixed results; careful implementation of vector control measures may be most important; outbreak interventions are often applied with questionable effectiveness.

  18. Learning Apache Solr high performance

    CERN Document Server

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to the deployment of Zookeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  19. High-performance composite chocolate

    Science.gov (United States)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  20. High-Performance Composite Chocolate

    Science.gov (United States)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  1. Toward High-Performance Organizations.

    Science.gov (United States)

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  2. Functional High Performance Financial IT

    DEFF Research Database (Denmark)

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware. HIPERFIT......

  3. High performance Mo adsorbent PZC

    Energy Technology Data Exchange (ETDEWEB)

    Anon,

    1998-10-01

    We have developed Mo adsorbents for natural Mo(n,γ)99Mo-99mTc generators. Among them, the highest-performance adsorbent, which we call PZC, can adsorb about 250 mg-Mo/g. In this report, we show the structure, the Mo adsorption mechanism, and other properties of PZC that are useful when carrying out examinations of Mo adsorption and elution of 99mTc. (author)

  4. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  5. High performance inertial fusion targets

    International Nuclear Information System (INIS)

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  6. High performance inertial fusion targets

    International Nuclear Information System (INIS)

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  7. High performance nuclear fuel element

    International Nuclear Information System (INIS)

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density.

  8. The Dengue Virus Mosquito Vector Aedes aegypti at High Elevation in México

    Science.gov (United States)

    Lozano-Fuentes, Saul; Hayden, Mary H.; Welsh-Rodriguez, Carlos; Ochoa-Martinez, Carolina; Tapia-Santos, Berenice; Kobylinski, Kevin C.; Uejio, Christopher K.; Zielinski-Gutierrez, Emily; Monache, Luca Delle; Monaghan, Andrew J.; Steinhoff, Daniel F.; Eisen, Lars

    2012-01-01

    México has cities (e.g., México City and Puebla City) located at elevations > 2,000 m and above the elevation ceiling below which local climates allow the dengue virus mosquito vector Aedes aegypti to proliferate. Climate warming could raise this ceiling and place high-elevation cities at risk for dengue virus transmission. To assess the elevation ceiling for Ae. aegypti and determine the potential for using weather/climate parameters to predict mosquito abundance, we surveyed 12 communities along an elevation/climate gradient from Veracruz City (sea level) to Puebla City (∼2,100 m). Ae. aegypti was commonly encountered up to 1,700 m and present but rare from 1,700 to 2,130 m. This finding extends the known elevation range in México by > 300 m. Mosquito abundance was correlated with weather parameters, including temperature indices. Potential larval development sites were abundant in Puebla City and other high-elevation communities, suggesting that Ae. aegypti could proliferate should the climate become warmer. PMID:22987656

  9. Fine-scale mapping of vector habitats using very high resolution satellite imagery: a liver fluke case-study.

    Science.gov (United States)

    De Roeck, Els; Van Coillie, Frieke; De Wulf, Robert; Soenen, Karen; Charlier, Johannes; Vercruysse, Jozef; Hantson, Wouter; Ducheyne, Els; Hendrickx, Guy

    2014-12-01

    The visualization of vector occurrence in space and time is an important aspect of studying vector-borne diseases. Detailed maps of possible vector habitats provide valuable information for the prediction of infection risk zones but are currently lacking for most parts of the world. Nonetheless, monitoring vector habitats from the finest scales up to farm level is of key importance to refine currently existing broad-scale infection risk models. Using Fasciola hepatica, a parasitic liver fluke, as a case in point, this study illustrates the potential of very high resolution (VHR) optical satellite imagery to efficiently and semi-automatically detect detailed vector habitats. A WorldView2 satellite image was used to detect the habitats of the freshwater snails by which the parasite is transmitted. The vector thrives in small water bodies (SWBs), such as ponds, ditches and other humid areas consisting of open water, aquatic vegetation and/or inundated grass. These water bodies can be as small as a few m2 and are most often not present on existing land cover maps because of their small size. We present a classification procedure based on object-based image analysis (OBIA) that proved valuable to detect SWBs at a fine scale in an operational and semi-automated way. The classification results were compared to field and other reference data such as existing broad-scale maps and expert knowledge. Overall, the SWB detection accuracy reached up to 87%. The resulting fine-scale SWB map can be used as input for spatial distribution modelling of the liver fluke snail vector to enable development of improved infection risk mapping and management advice adapted to specific, local farm situations.

  10. High Performance JavaScript

    CERN Document Server

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  11. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  12. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case-study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  13. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
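
    The core two-level idea (smooth on the fine level, solve a restricted residual equation on the coarse level, prolongate the correction back) can be sketched generically; the snippet below is a plain NumPy illustration for a small 1D model problem, not the authors' vector-finite-element implementation, and a dense direct solve stands in for the ICCG lowest-level solver:

      import numpy as np

      def gauss_seidel(A, x, b, sweeps=3):
          # Plain Gauss-Seidel smoothing sweeps for A x = b (dense, illustrative only).
          n = len(b)
          for _ in range(sweeps):
              for i in range(n):
                  x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
          return x

      def two_grid(A_h, b_h, P, n_cycles=10):
          # Two-level correction: pre-smooth, solve the restricted residual equation
          # on the coarse level (a dense solve stands in for ICCG), prolongate the
          # correction, then post-smooth.
          A_H = P.T @ A_h @ P                       # Galerkin coarse-level operator
          x = np.zeros_like(b_h)
          for _ in range(n_cycles):
              x = gauss_seidel(A_h, x, b_h)         # pre-smoothing
              r_H = P.T @ (b_h - A_h @ x)           # restrict the residual
              e_H = np.linalg.solve(A_H, r_H)       # lowest-level solve
              x = x + P @ e_H                       # prolongate the correction
              x = gauss_seidel(A_h, x, b_h)         # post-smoothing
          return x

      # Tiny 1D Poisson model problem (hypothetical stand-in for the FE system).
      n = 15
      A_h = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b_h = np.full(n, 1.0 / (n + 1) ** 2)
      nc = (n - 1) // 2
      P = np.zeros((n, nc))                         # linear-interpolation prolongation
      for j in range(nc):
          P[2 * j, j] += 0.5
          P[2 * j + 1, j] += 1.0
          P[2 * j + 2, j] += 0.5
      x = two_grid(A_h, b_h, P)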

  14. High performance electromagnetic simulation tools

    Science.gov (United States)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concerns including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm, and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research of the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  15. High-Performance Data Converters

    DEFF Research Database (Denmark)

    Steensgaard-Madsen, Jesper

    -resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented...... in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential......-order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers...

  16. High performance soft magnetic materials

    CERN Document Server

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  17. High performance polyethylene nanocomposite fibers

    Directory of Open Access Journals (Sweden)

    A. Dorigato

    2012-12-01

    Full Text Available A high density polyethylene (HDPE) matrix was melt compounded with 2 vol% of dimethyldichlorosilane-treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin screw extruder and drawing at 125°C in air. Thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing the achievement of higher draw ratios with respect to the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, with a retention of the pristine tensile properties at break. Transmission electron microscope (TEM) images evidenced that the original morphology of the silica aggregates was disrupted by the applied drawing.

  18. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks, the elementary particles, which interact through the four fundamental forces. In the study of the structure of matter at this level, one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  19. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.

  20. Effective SIMD Vectorization for Intel Xeon Phi Coprocessors

    Directory of Open Access Journals (Sweden)

    Xinmin Tian

    2015-01-01

    Full Text Available Efficiently exploiting SIMD vector units is one of the most important aspects in achieving high performance of the application code running on Intel Xeon Phi coprocessors. In this paper, we present several effective SIMD vectorization techniques such as less-than-full-vector loop vectorization, Intel MIC specific alignment optimization, and small matrix transpose/multiplication 2D vectorization implemented in the Intel C/C++ and Fortran production compilers for Intel Xeon Phi coprocessors. A set of workloads from several application domains is employed to conduct the performance study of our SIMD vectorization techniques. The performance results show that we achieved up to 12.5x performance gain on the Intel Xeon Phi coprocessor. We also demonstrate a 2000x performance speedup from the seamless integration of SIMD vectorization and parallelization.

  1. Highly predictive support vector machine (SVM) models for anthrax toxin lethal factor (LF) inhibitors.

    Science.gov (United States)

    Zhang, Xia; Amin, Elizabeth Ambrose

    2016-01-01

    Anthrax is a highly lethal, acute infectious disease caused by the rod-shaped, Gram-positive bacterium Bacillus anthracis. The anthrax toxin lethal factor (LF), a zinc metalloprotease secreted by the bacilli, plays a key role in anthrax pathogenesis and is chiefly responsible for anthrax-related toxemia and host death, partly via inactivation of mitogen-activated protein kinase kinase (MAPKK) enzymes and consequent disruption of key cellular signaling pathways. Antibiotics such as fluoroquinolones are capable of clearing the bacilli but have no effect on LF-mediated toxemia; LF itself therefore remains the preferred target for toxin inactivation. However, currently no LF inhibitor is available on the market as a therapeutic, partly due to the insufficiency of existing LF inhibitor scaffolds in terms of efficacy, selectivity, and toxicity. In the current work, we present novel support vector machine (SVM) models with high prediction accuracy that are designed to rapidly identify potential novel, structurally diverse LF inhibitor chemical matter from compound libraries. These SVM models were trained and validated using 508 compounds with published LF biological activity data and 847 inactive compounds deposited in the PubChem BioAssay database. One model, M1, demonstrated particularly favorable selectivity toward highly active compounds by correctly predicting 39 (95.12%) out of 41 nanomolar-level LF inhibitors, 46 (93.88%) out of 49 inactives, and 844 (99.65%) out of 847 PubChem inactives in external, unbiased test sets. These models are expected to facilitate the prediction of LF inhibitory activity for existing molecules, as well as identification of novel potential LF inhibitors from large datasets. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. HIGH-PERFORMANCE COATING MATERIALS

    Energy Technology Data Exchange (ETDEWEB)

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel commonly are employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales, and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation essential for reusing the components is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective high-hydrothermal temperature stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  3. BEGA Starter/Alternator - Vector Control Implementation and Performance for Wide Speed Range at Unity Power Factor Operation

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Boldea, Ion; Coroban-Schramel, Vasile

    2008-01-01

    Biaxial Excitation Generator for Automobile (BEGA) is proposed as a solution for integrated starter/alternator systems used in hybrid electric vehicles (HEVs). This paper demonstrates through experiments and simulations that BEGA has a very large constant power speed range (CPSR), theoretically...... to infinite. A vector control structure is proposed for BEGA operation during motoring and generating, at unity power factor with zero d-axis current (id) and zero q-axis flux (Ψq) control. In such conditions BEGA behaves like a true d.c. brush machine (with zero reactance in steady state!). A high iq...

  4. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-25

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
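
    A minimal sketch of the two-step search described above, using scikit-learn on a synthetic load series; the coarse grid stage stands in for the GTA, and a simple random local refinement stands in for PSO (the data, lag structure and parameter ranges below are assumptions):

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for a historical load series, turned into a supervised
      # problem with lagged values as features.
      rng = np.random.default_rng(2)
      load = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.normal(size=600)
      lags = 4
      X = np.column_stack([load[i:len(load) - lags + i] for i in range(lags)])
      y = load[lags:]

      def cv_error(C, gamma):
          model = SVR(C=C, gamma=gamma, epsilon=0.01)
          scores = cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error")
          return -scores.mean()

      # Step 1: coarse grid traverse over (C, gamma) to locate a promising region.
      grid = [(C, g) for C in 10.0 ** np.arange(-1, 3)
                     for g in 10.0 ** np.arange(-3, 1)]
      C0, g0 = min(grid, key=lambda p: cv_error(*p))

      # Step 2: local refinement around that point; a simple random local search is
      # used here purely for illustration, in place of particle swarm optimization.
      best = (C0, g0, cv_error(C0, g0))
      for _ in range(30):
          C = C0 * 10 ** rng.uniform(-0.5, 0.5)
          g = g0 * 10 ** rng.uniform(-0.5, 0.5)
          e = cv_error(C, g)
          if e < best[2]:
              best = (C, g, e)
      print("selected C, gamma:", best[:2])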

  5. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  6. Exploiting the behaviour of wild malaria vectors to achieve high infection with fungal biocontrol agents

    Science.gov (United States)

    2012-01-01

    Background Control of mosquitoes that transmit malaria has been the mainstay in the fight against the disease, but alternative methods are required in view of emerging insecticide resistance. Entomopathogenic fungi are candidate alternatives, but to date, few trials have translated the use of these agents to field-based evaluations of their actual impact on mosquito survival and malaria risk. Mineral oil-formulations of the entomopathogenic fungi Metarhizium anisopliae and Beauveria bassiana were applied using five different techniques that each exploited the behaviour of malaria mosquitoes when entering, host-seeking or resting in experimental huts in a malaria endemic area of rural Tanzania. Results Survival of mosquitoes was reduced by 39-57% relative to controls after forcing upward house-entry of mosquitoes through fungus treated baffles attached to the eaves or after application of fungus-treated surfaces around an occupied bed net (bed net strip design). Moreover, 68 to 76% of the treatment mosquitoes showed fungal growth and thus had sufficient contact with fungus treated surfaces. A population dynamic model of malaria-mosquito interactions shows that these infection rates reduce malaria transmission by 75-80% due to the effect of fungal infection on adult mortality alone. The model also demonstrated that even if a high proportion of the mosquitoes exhibits outdoor biting behaviour, malaria transmission was still significantly reduced. Conclusions Entomopathogenic fungi strongly affect mosquito survival and have a high predicted impact on malaria transmission. These entomopathogens represent a viable alternative for malaria control, especially if they are used as part of an integrated vector management strategy. PMID:22449130

  7. Design and optimization of stress centralized MEMS vector hydrophone with high sensitivity at low frequency

    Science.gov (United States)

    Zhang, Guojun; Ding, Junwen; Xu, Wei; Liu, Yuan; Wang, Renxin; Han, Janjun; Bai, Bing; Xue, Chenyang; Liu, Jun; Zhang, Wendong

    2018-05-01

    A micro hydrophone based on the piezoresistive effect, the "MEMS vector hydrophone", was developed for acoustic detection applications. To improve the sensitivity of the MEMS vector hydrophone at low frequency, we report a stress-centralized MEMS vector hydrophone (SCVH) mainly used in the 20-500 Hz band. A stress concentration area was realized in the sensitive unit of the hydrophone by silicon micromachining technology, and piezoresistors were placed in this area for a better mechanical response, thereby obtaining higher sensitivity. A static analysis was performed to compare the mechanical response of three different sensitive microstructures: the SCVH, the conventional micro-silicon four-beam vector hydrophone (CFVH) and the Lollipop-shaped vector hydrophone (LVH). Fluid-structure interaction (FSI) analysis was used to determine the natural frequency of the SCVH and thus ensure the measurable bandwidth. Finally, a calibration experiment in a standing-wave field was performed to test the properties of the SCVH and verify the accuracy of the simulation. The results show that the sensitivity of the SCVH is increased by nearly 17.2 dB compared with the CFVH and by 7.6 dB compared with the LVH over 20-500 Hz.

  8. Performance evaluation for epileptic electroencephalogram (EEG) detection by using Neyman-Pearson criteria and a support vector machine

    Science.gov (United States)

    Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian

    2012-02-01

    The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience. A lot of effort has been devoted to developing automatic detection techniques which might help not only in accelerating this process but also in avoiding the disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied for detecting an epileptic EEG. Decision making is performed in two stages: feature extraction by computing the wavelet coefficients and the approximate entropy (ApEn) and detection by using Neyman-Pearson criteria and an SVM. Then the detection performance of the proposed method is evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. By comparison with Neyman-Pearson criteria, an SVM applied on these features achieved higher detection accuracies.
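
    For reference (a standard formulation, not taken from the record): the Neyman-Pearson detector compares the likelihood ratio of the extracted features against a threshold chosen to fix the false-alarm probability,

        \Lambda(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)} \gtrless \eta, \qquad \eta \ \text{such that}\ P\{\Lambda(x) > \eta \mid H_0\} = \alpha,

    which maximizes the detection probability at the prescribed false-alarm level \alpha; the SVM provides an alternative, learned decision boundary over the same wavelet/ApEn feature space.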

  9. Very low speed performance of active flux based sensorless control: interior permanent magnet synchronous motor vector control versus direct torque and flux control

    DEFF Research Database (Denmark)

    Paicu, M. C.; Boldea, I.; Andreescu, G. D.

    2009-01-01

    This study is focused on very low speed performance comparison between two sensorless control systems based on the novel ‘active flux' concept, that is, the current/voltage vector control versus direct torque and flux control (DTFC) for interior permanent magnet synchronous motor (IPMSM) drives...... with space vector modulation (SVM), without signal injection. The active flux, defined as the flux that multiplies iq current in the dq-model torque expression of all ac machines, is easily obtained from the stator-flux vector and has the rotor position orientation. Therefore notable simplification...

  10. Vector model for polarized second-harmonic generation microscopy under high numerical aperture

    International Nuclear Information System (INIS)

    Wang, Xiang-Hui; Chang, Sheng-Jiang; Lin, Lie; Wang, Lin-Rui; Huo, Bing-Zhong; Hao, Shu-Jian

    2010-01-01

    Based on the vector diffraction theory and the generalized Jones matrix formalism, a vector model for polarized second-harmonic generation (SHG) microscopy is developed, which includes the roles of the axial component P z , the weight factor and the cross-effect between the lateral components. The numerical results show that as the relative magnitude of P z increases, the polarization response of the second-harmonic signal will vary from linear polarization to elliptical polarization and the polarization orientation of the second-harmonic signal is different from that under the paraxial approximation. In addition, it is interesting that the polarization response of the detected second-harmonic signal can change with the value of the collimator lens NA. Therefore, it is more advantageous to adopt the vector model to investigate the property of polarized SHG microscopy for a variety of cases

  11. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  12. An episomal vector-based CRISPR/Cas9 system for highly efficient gene knockout in human pluripotent stem cells.

    Science.gov (United States)

    Xie, Yifang; Wang, Daqi; Lan, Feng; Wei, Gang; Ni, Ting; Chai, Renjie; Liu, Dong; Hu, Shijun; Li, Mingqing; Li, Dajin; Wang, Hongyan; Wang, Yongming

    2017-05-24

    Human pluripotent stem cells (hPSCs) represent a unique opportunity for understanding the molecular mechanisms underlying complex traits and diseases. CRISPR/Cas9 is a powerful tool to introduce genetic mutations into the hPSCs for loss-of-function studies. Here, we developed an episomal vector-based CRISPR/Cas9 system, which we called epiCRISPR, for highly efficient gene knockout in hPSCs. The epiCRISPR system enables generation of up to 100% Insertion/Deletion (indel) rates. In addition, the epiCRISPR system enables efficient double-gene knockout and genomic deletion. To minimize off-target cleavage, we combined the episomal vector technology with double-nicking strategy and recent developed high fidelity Cas9. Thus the epiCRISPR system offers a highly efficient platform for genetic analysis in hPSCs.

  13. High-efficiency and flexible generation of vector vortex optical fields by a reflective phase-only spatial light modulator.

    Science.gov (United States)

    Cai, Meng-Qiang; Wang, Zhou-Xiang; Liang, Juan; Wang, Yan-Kun; Gao, Xu-Zhen; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2017-08-01

    The scheme for generating vector optical fields should have not only high efficiency but also flexibility for satisfying the requirements of various applications. However, in general, high efficiency and flexibility are not compatible. Here we present and experimentally demonstrate a solution to directly, flexibly, and efficiently generate vector vortex optical fields (VVOFs) with a reflective phase-only liquid crystal spatial light modulator (LC-SLM) based on optical birefringence of liquid crystal molecules. To generate the VVOFs, this approach needs in principle only a half-wave plate, an LC-SLM, and a quarter-wave plate. This approach has some advantages, including a simple experimental setup, good flexibility, and high efficiency, making the approach very promising in some applications when higher power is needed.

  14. Strategies and Experiences Using High Performance Fortran

    National Research Council Canada - National Science Library

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of parallel machines, although its success has been debatable...

  15. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  16. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  17. Carbon nanomaterials for high-performance supercapacitors

    OpenAIRE

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes the recent progress in the development of high-performance supercapacitors bas...

  18. "Lollipop-shaped" high-sensitivity Microelectromechanical Systems vector hydrophone based on Parylene encapsulation

    Science.gov (United States)

    Liu, Yuan; Wang, Renxin; Zhang, Guojun; Du, Jin; Zhao, Long; Xue, Chenyang; Zhang, Wendong; Liu, Jun

    2015-07-01

    This paper presents methods of improving the sensitivity of a Microelectromechanical Systems (MEMS) vector hydrophone by increasing the sensing area of the cilium and by using a fully insulating Parylene membrane. First, a low-density sphere is integrated with the cilium to compose a "lollipop shape," which considerably increases the sensing area. A mathematical model of the sensitivity of the "lollipop-shaped" MEMS vector hydrophone is presented, and the influences of different structural parameters on the sensitivity are analyzed via simulation. Second, the MEMS vector hydrophone is encapsulated through the conformal deposition of an insulating Parylene membrane, which enables underwater acoustic monitoring without any additional sound-transparent encapsulation. Finally, the characterization results demonstrate that the sensitivity reaches up to -183 dB (at 500 Hz, 0 dB re 1 V/μPa), an increase of more than 10 dB compared with the previous cilium-shaped MEMS vector hydrophone. In addition, the frequency response shows a sensitivity increase of 6 dB per octave. The working frequency band is 20-500 Hz and the concave point depth of the figure-8 directivity pattern is beyond 30 dB, indicating that the hydrophone is promising for underwater acoustic applications.

  19. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh sizes, accurate calculation is possible by the use of coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and the coarse-mesh method, the total speedup factor is more than 1000 as compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers running UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code, including input data instructions and sample input data. (author)
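
    The 'boundary separated checkerboard sweep' itself is specific to MOSRA-Light, but the underlying idea, a checkerboard (red-black) ordering in which every update within one colour is independent and can therefore be processed as one long vector operation, can be sketched for a simple 5-point diffusion stencil (a generic NumPy illustration, not the MOSRA-Light algorithm):

      import numpy as np

      def checkerboard_sweep(phi, source, h, sweeps=50):
          # Red-black (checkerboard) Gauss-Seidel sweeps for a 5-point diffusion
          # stencil. Within each colour all updates are independent, so they can be
          # expressed as whole-array (vectorizable) operations; this is the property
          # that a checkerboard ordering exploits on vector computers.
          i, j = np.indices(phi.shape)
          for _ in range(sweeps):
              for colour in (0, 1):
                  mask = ((i + j) % 2 == colour)
                  mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
                  nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                        np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
                  phi[mask] = (h * h * source[mask] + nb[mask]) / 4.0
          return phi

      # Tiny example: 33x33 grid, unit source, zero boundary values.
      n = 33
      phi = checkerboard_sweep(np.zeros((n, n)), np.ones((n, n)), h=1.0 / (n - 1))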

  20. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh sizes, accurate calculation is possible by the use of coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor from vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of a PWR core calculation. Considering both the effect of vectorization and the coarse-mesh method, the total speedup factor is more than 1000 as compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers running UNIX or similar operating systems (e.g. free systems such as Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code, including input data instructions and sample input data. (author)

  1. High stability vector-based direct power control for DFIG-based wind turbine

    DEFF Research Database (Denmark)

    Zhu, Rongwu; Chen, Zhe; Wu, Xiaojie

    2015-01-01

    This paper proposes an improved vector-based direct power control (DPC) strategy for the doubly-fed induction generator (DFIG)-based wind energy conversion system. Based on the small signal model, the proposed DPC improves the stability of the DFIG, and avoids the DFIG operating in the marginal...

  2. Relative Performance of Indoor Vector Control Interventions in the Ifakara and the West African Experimental Huts.

    OpenAIRE

    Oumbouke, Welbeck A; Fongnikin, Augustin; Soukou, Koffi B; Moore, Sarah J; N'Guessan, Raphael

    2017-01-01

    Background West African and Ifakara experimental huts are used to evaluate indoor mosquito control interventions, including spatial repellents and insecticides. The two hut types differ in size and design, so a side-by-side comparison was performed to investigate the performance of indoor interventions in the two hut designs using standard entomological outcomes: relative indoor mosquito density (deterrence), exophily (induced exit), blood-feeding and mortality of mosquitoes. Methods Metoflut...

  3. Team Development for High Performance Management.

    Science.gov (United States)

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  4. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  5. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  6. Next generation of adeno-associated virus 2 vectors: Point mutations in tyrosines lead to high-efficiency transduction at lower doses

    Science.gov (United States)

    Zhong, Li; Li, Baozheng; Mah, Cathryn S.; Govindasamy, Lakshmanan; Agbandje-McKenna, Mavis; Cooper, Mario; Herzog, Roland W.; Zolotukhin, Irene; Warrington, Kenneth H.; Weigel-Van Aken, Kirsten A.; Hobbs, Jacqueline A.; Zolotukhin, Sergei; Muzyczka, Nicholas; Srivastava, Arun

    2008-01-01

    Recombinant adeno-associated virus 2 (AAV2) vectors are in use in several Phase I/II clinical trials, but relatively large vector doses are needed to achieve therapeutic benefits. Large vector doses also trigger an immune response as a significant fraction of the vectors fails to traffic efficiently to the nucleus and is targeted for degradation by the host cell proteasome machinery. We have reported that epidermal growth factor receptor protein tyrosine kinase (EGFR-PTK) signaling negatively affects transduction by AAV2 vectors by impairing nuclear transport of the vectors. We have also observed that EGFR-PTK can phosphorylate AAV2 capsids at tyrosine residues. Tyrosine-phosphorylated AAV2 vectors enter cells efficiently but fail to transduce effectively, in part because of ubiquitination of AAV capsids followed by proteasome-mediated degradation. We reasoned that mutations of the surface-exposed tyrosine residues might allow the vectors to evade phosphorylation and subsequent ubiquitination and, thus, prevent proteasome-mediated degradation. Here, we document that site-directed mutagenesis of surface-exposed tyrosine residues leads to production of vectors that transduce HeLa cells ≈10-fold more efficiently in vitro and murine hepatocytes nearly 30-fold more efficiently in vivo at a log lower vector dose. Therapeutic levels of human Factor IX (F.IX) are also produced at an ≈10-fold reduced vector dose. The increased transduction efficiency of tyrosine-mutant vectors is due to lack of capsid ubiquitination and improved intracellular trafficking to the nucleus. These studies have led to the development of AAV vectors that are capable of high-efficiency transduction at lower doses, which has important implications in their use in human gene therapy. PMID:18511559

  7. Unit cell determination of epitaxial thin films based on reciprocal space vectors by high-resolution X-ray diffractometry

    OpenAIRE

    Yang, Ping; Liu, Huajun; Chen, Zuhuang; Chen, Lang; Wang, John

    2013-01-01

    A new approach, based on reciprocal space vectors (RSVs), is developed to determine Bravais lattice types and accurate lattice parameters of epitaxial thin films by high-resolution X-ray diffractometry (HR-XRD). The lattice parameters of single crystal substrates are employed as references to correct the systematic experimental errors of RSVs of thin films. The general procedure is summarized, involving correction of RSVs, derivation of raw unit cell, subsequent conversion to the Niggli unit ...

  8. Rotations with Rodrigues' vector

    International Nuclear Information System (INIS)

    Pina, E

    2011-01-01

    The rotational dynamics was studied from the point of view of Rodrigues' vector. This vector is defined here by its connection with other forms of parametrization of the rotation matrix. The rotation matrix was expressed in terms of this vector. The angular velocity was computed using the components of Rodrigues' vector as coordinates. It appears to be a fundamental matrix that is used to express the components of the angular velocity, the rotation matrix and the angular momentum vector. The Hamiltonian formalism of rotational dynamics in terms of this vector uses the same matrix. The quantization of the rotational dynamics is performed with simple rules if one uses Rodrigues' vector and similar formal expressions for the quantum operators that mimic the Hamiltonian classical dynamics.
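    For reference, with the common Gibbs-Rodrigues parametrization b = tan(θ/2) n̂ (an assumption; the abstract does not reproduce the article's own conventions), the rotation matrix takes the compact form below, where [b]× denotes the skew-symmetric cross-product matrix built from b.

    ```latex
    % Rotation matrix from the Rodrigues (Gibbs) vector \mathbf{b} = \tan(\theta/2)\,\hat{\mathbf{n}}
    R(\mathbf{b}) \;=\; I \;+\; \frac{2}{1+\mathbf{b}\cdot\mathbf{b}}
        \left( [\mathbf{b}]_{\times} + [\mathbf{b}]_{\times}^{2} \right)
    ```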

  9. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  10. High entomological inoculation rate of malaria vectors in area of high coverage of interventions in southwest Ethiopia: Implication for residual malaria transmission

    Directory of Open Access Journals (Sweden)

    Misrak Abraham

    2017-05-01

    Finally, there was indoor residual malaria transmission in a village with high coverage of bed nets, where the principal malaria vector is susceptible to propoxur and bendiocarb, the insecticides currently in use for indoor residual spraying. The continuing indoor transmission of malaria in such a village implies the need for new tools to supplement the existing interventions and to reduce indoor malaria transmission.

  11. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    Science.gov (United States)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
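    The abstract does not reproduce the feature definitions. As a minimal sketch of the sum-and-difference-histogram idea (the displacement, grey-level count and the four Unser-style features below are illustrative assumptions, not necessarily the paper's choices), two 1-D histograms stand in for the full grey-level co-occurrence matrix, which is where the run-time and storage savings come from.

    ```python
    import numpy as np

    def sadh_features(img, dx=1, dy=0, levels=256):
        """Sum-and-Difference Histogram (SADH) texture features for one
        non-negative pixel displacement (dx, dy)."""
        h, w = img.shape
        a = img[0:h - dy, 0:w - dx].astype(np.int64)
        b = img[dy:h, dx:w].astype(np.int64)

        s = (a + b).ravel()                    # sums in [0, 2*(levels-1)]
        d = (a - b).ravel() + (levels - 1)     # differences shifted to >= 0

        ps = np.bincount(s, minlength=2 * levels - 1).astype(float)
        pd = np.bincount(d, minlength=2 * levels - 1).astype(float)
        ps /= ps.sum()
        pd /= pd.sum()

        i_s = np.arange(ps.size)
        i_d = np.arange(pd.size) - (levels - 1)

        return {
            "mean": 0.5 * np.sum(i_s * ps),
            "contrast": np.sum(i_d ** 2 * pd),
            "homogeneity": np.sum(pd / (1.0 + i_d ** 2)),
            "entropy": -(np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
                         + np.sum(pd[pd > 0] * np.log(pd[pd > 0]))),
        }

    # e.g. feats = sadh_features(grey_image, dx=1, dy=1)
    ```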

  12. Rapid transient production in plants by replicating and non-replicating vectors yields high quality functional anti-HIV antibody.

    Directory of Open Access Journals (Sweden)

    Frank Sainsbury

    2010-11-01

    Full Text Available The capacity of plants and plant cells to produce large amounts of recombinant protein has been well established. Due to advantages in terms of speed and yield, attention has recently turned towards the use of transient expression systems, including viral vectors, to produce proteins of pharmaceutical interest in plants. However, the effects of such high level expression from viral vectors and concomitant effects on host cells may affect the quality of the recombinant product. To assess the quality of antibodies transiently expressed to high levels in plants, we have expressed and characterised the human anti-HIV monoclonal antibody, 2G12, using both replicating and non-replicating systems based on deleted versions of Cowpea mosaic virus (CPMV) RNA-2. The highest yield (approximately 100 mg/kg wet weight leaf tissue) of affinity purified 2G12 was obtained when the non-replicating CPMV-HT system was used and the antibody was retained in the endoplasmic reticulum (ER). Glycan analysis by mass-spectrometry showed that the glycosylation pattern was determined exclusively by whether the antibody was retained in the ER and did not depend on whether a replicating or non-replicating system was used. Characterisation of the binding and neutralisation properties of all the purified 2G12 variants from plants showed that these were generally similar to those of the Chinese hamster ovary (CHO) cell-produced 2G12. Overall, the results demonstrate that replicating and non-replicating CPMV-based vectors are able to direct the production of a recombinant IgG similar in activity to the CHO-produced control. Thus, a complex recombinant protein was produced with no apparent effect on its biochemical properties using either high-level expression or viral replication. The speed with which a recombinant pharmaceutical with excellent biochemical characteristics can be produced transiently in plants makes CPMV-based expression vectors an attractive option for

  13. High-performance ceramics. Fabrication, structure, properties

    International Nuclear Information System (INIS)

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program 'Ceramic High-performance Materials' pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of a design appropriate to the material. The program 'Ceramic High-performance Materials' has resulted in contributions to the understanding of fundamental interrelationships in terms of materials science, which are summarized in the present volume - broken down into eight special aspects. (orig./RHM)

  14. High Burnup Fuel Performance and Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high-burnup, high-performance fuel with improved economics and safety. Because the fuel performance evaluation code INFRA is patented, and its superiority in predicting fuel performance was proven through the IAEA CRP FUMEX-II program, the INFRA code can be utilized commercially in industry. The INFRA code has been provided to and used by domestic universities and relevant institutes, and it has served as a reference code in industry for the development of an intrinsic fuel rod design code.

  15. Local Patch Vectors Encoded by Fisher Vectors for Image Classification

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2018-02-01

    Full Text Available The objective of this work is image classification, whose purpose is to group images into corresponding semantic categories. Four contributions are made as follows: (i) For computational simplicity and efficiency, we directly adopt raw image patch vectors as local descriptors encoded by the Fisher vector (FV) subsequently; (ii) For obtaining representative local features within the FV encoding framework, we compare and analyze three typical sampling strategies: random sampling, saliency-based sampling and dense sampling; (iii) In order to embed both global and local spatial information into local features, we construct an improved spatial geometry structure which shows good performance; (iv) For reducing the storage and CPU costs of high dimensional vectors, we adopt a new feature selection method based on supervised mutual information (MI), which chooses features by an importance sorting algorithm. We report experimental results on the dataset STL-10. It shows very promising performance with this simple and efficient framework compared to conventional methods.
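    As a sketch of the feature-selection step only (the FV encoding itself is omitted; scikit-learn's mutual_info_classif is used as a stand-in for whatever MI estimator the authors used, and X_fisher / labels are hypothetical names for an already-encoded design matrix and its class labels):

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def select_by_mutual_information(X, y, k):
        """Score each column of X by its mutual information with the labels y,
        sort by importance and keep the top k columns."""
        mi = mutual_info_classif(X, y, random_state=0)   # one score per feature
        keep = np.argsort(mi)[::-1][:k]                  # importance sorting
        return X[:, keep], keep

    # Hypothetical usage with an already-encoded Fisher-vector matrix:
    # X_small, kept_columns = select_by_mutual_information(X_fisher, labels, k=2048)
    ```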

  16. Principle and performance of the transverse oscillation vector velocity technique in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Pihl, Michael Johannes; Udesen, Jesper

    2010-01-01

    Medical ultrasound systems measure the blood velocity by tracking the motion of the blood cells along the ultrasound field. This is done by pulsing in the same direction a number of times and then finding, e.g., the shift in phase between consecutive pulses. Properly normalized this is directly proportional...... a double oscillating field. A special estimator is then used for finding both the axial and lateral velocity component, so that both magnitude and phase can be calculated. The method for generating double oscillating ultrasound fields and the special estimator are described and its performance revealed...
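    The transverse-oscillation estimator itself is more elaborate than the abstract can convey; the sketch below shows only the conventional axial phase-shift (lag-one autocorrelation) estimator referred to in the opening sentences, with the usual sign convention and a nominal speed of sound assumed.

    ```python
    import numpy as np

    def axial_velocity(iq, f0, fprf, c=1540.0):
        """Estimate the axial blood velocity from complex (IQ) samples taken
        at the same depth over several emissions.

        iq   : complex array, one sample per emission
        f0   : transmitted centre frequency [Hz]
        fprf : pulse repetition frequency [Hz]
        c    : assumed speed of sound [m/s]
        """
        r1 = np.sum(iq[1:] * np.conj(iq[:-1]))   # lag-one autocorrelation
        return c * fprf * np.angle(r1) / (4.0 * np.pi * f0)
    ```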

  17. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    International Nuclear Information System (INIS)

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  18. High-charge and multiple-star vortex coronagraphy from stacked vector vortex phase masks.

    Science.gov (United States)

    Aleksanyan, Artur; Brasselet, Etienne

    2018-02-01

    Optical vortex phase masks are now installed at many ground-based large telescopes for high-contrast astronomical imaging. To date, such instrumental advances have been restricted to the use of helical phase masks of the lowest even order, while future giant telescopes will require high-order masks. Here we propose a single-stage on-axis scheme to create high-order vortex coronagraphs based on second-order vortex phase masks. By extending our approach to an off-axis design, we also explore the implementation of multiple-star vortex coronagraphy. An experimental laboratory demonstration is reported and supported by numerical simulations. These results offer a practical roadmap to the development of future coronagraphic tools with enhanced performance.

  19. Performance of velocity vector estimation using an improved dynamic beamforming setup

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    2001-01-01

    control of the acoustic field, based on the Pulsed Plane Wave Decomposition (PPWD), is presented. The PPWD gives an unambiguous relation between a given acoustic field and the time functions needed on an array transducer for transmission. Applying this method for the receive beamformation results in a set...... and experimental data. The simulation setup is an attempt to approximate the situation present when performing a scanning of the carotid artery with a linear array. Measurement of the flow perpendicular to the emission direction is possible using the approach of transverse spatial modulation. This is most often...... the case in a scanning of the carotid artery, where the situation is handled by an angled Doppler setup in the present ultrasound scanners. The modulation period of 2 mm is controlled for a range of 20-40 mm which covers the typical range of the carotid artery. A 6 MHz array on a 128-channel system...

  20. Relative performance of indoor vector control interventions in the Ifakara and the West African experimental huts.

    Science.gov (United States)

    Oumbouke, Welbeck A; Fongnikin, Augustin; Soukou, Koffi B; Moore, Sarah J; N'Guessan, Raphael

    2017-09-19

    West African and Ifakara experimental huts are used to evaluate indoor mosquito control interventions, including spatial repellents and insecticides. The two hut types differ in size and design, so a side-by-side comparison was performed to investigate the performance of indoor interventions in the two hut designs using standard entomological outcomes: relative indoor mosquito density (deterrence), exophily (induced exit), blood-feeding and mortality of mosquitoes. Metofluthrin mosquito coils (0.00625% and 0.0097%) and Olyset® Net vs control nets (untreated, deliberately holed net) were evaluated against pyrethroid-resistant Culex quinquefasciatus in Benin. Four experimental huts were used: two West African hut designs and two Ifakara hut designs. Treatments were rotated among the huts every four nights until each treatment was tested in each hut 52 times. Volunteers rotated between huts nightly. The Ifakara huts caught a median of 37 Culex quinquefasciatus/night, while the West African huts captured a median of 8/night (rate ratio 3.37, 95% CI: 2.30-4.94, P < 0.0001). The Ifakara huts also showed > 4-fold higher mosquito exit relative to the West African huts (odds ratio 4.18, 95% CI: 3.18-5.51, P < 0.0001), regardless of treatment. While blood-feeding rates were significantly higher in the West African huts, mortality appeared significantly lower for all treatments. The Ifakara hut captured more Cx. quinquefasciatus that could more easily exit into windows and eave traps after failing to blood-feed, compared to the West African hut. The higher mortality rates recorded in the Ifakara huts could be attributable to the greater proportions of Culex mosquitoes exiting and probably dying from starvation, relative to the situation in the West African huts.

  1. Vector analysis

    CERN Document Server

    Newell, Homer E

    2006-01-01

    When employed with skill and understanding, vector analysis can be a practical and powerful tool. This text develops the algebra and calculus of vectors in a manner useful to physicists and engineers. Numerous exercises (with answers) not only provide practice in manipulation but also help establish students' physical and geometric intuition in regard to vectors and vector concepts.Part I, the basic portion of the text, consists of a thorough treatment of vector algebra and the vector calculus. Part II presents the illustrative matter, demonstrating applications to kinematics, mechanics, and e

  2. About vectors

    CERN Document Server

    Hoffmann, Banesh

    1975-01-01

    From his unusual beginning in "Defining a vector" to his final comments on "What then is a vector?" author Banesh Hoffmann has written a book that is provocative and unconventional. In his emphasis on the unresolved issue of defining a vector, Hoffmann mixes pure and applied mathematics without using calculus. The result is a treatment that can serve as a supplement and corrective to textbooks, as well as collateral reading in all courses that deal with vectors. Major topics include vectors and the parallelogram law; algebraic notation and basic ideas; vector algebra; scalars and scalar p

  3. High performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-08

    ) high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre or post column derivatisation in a possible automation of the analytical.

  4. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  5. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  6. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  7. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine. Hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  8. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    Energy Technology Data Exchange (ETDEWEB)

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  9. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  10. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields.

    Science.gov (United States)

    Henan Zhao; Bryant, Garnett W; Griffin, Wesley; Terrill, Judith E; Jian Chen

    2017-06-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display - commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by about four times compared to logarithmic mapping in discrimination tasks; (2) SplitVectors show no significant differences from the textual display approach, but reduce clutter in the scene; (3) SplitVectors and textual display are less sensitive to data scale than the linear and logarithmic approaches; (4) using the logarithmic approach can be problematic, as participants' confidence was as high as when reading directly from the textual display, but their accuracy was poor; and (5) stereoscopy improved performance, especially in the more challenging discrimination tasks.
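    The paper's glyph design is not described in the abstract; the sketch below only computes the scientific-notation split (mantissa and exponent of a vector's magnitude) that the encoding relies on, with the function name chosen here purely for illustration.

    ```python
    import math

    def split_vector_magnitude(v):
        """Split the magnitude of a vector into (mantissa, exponent) such that
        magnitude = mantissa * 10**exponent with 1 <= mantissa < 10."""
        mag = math.sqrt(sum(c * c for c in v))
        if mag == 0.0:
            return 0.0, 0
        exponent = math.floor(math.log10(mag))
        return mag / 10 ** exponent, exponent

    # e.g. split_vector_magnitude((3e4, 4e4, 0.0)) returns (5.0, 4)
    ```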

  11. Governance among Malaysian high performing companies

    Directory of Open Access Journals (Sweden)

    Asri Marsidi

    2016-07-01

    Full Text Available Well-performing companies have always been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as being able to influence corporate performance, owing to the likelihood of meeting the varied needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high-performing companies in Malaysia.

  12. High-performance OPCPA laser system

    International Nuclear Information System (INIS)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  13. High-performance OPCPA laser system

    Energy Technology Data Exchange (ETDEWEB)

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  14. Comparing Dutch and British high performing managers

    NARCIS (Netherlands)

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  15. High Performance Work Systems for Online Education

    Science.gov (United States)

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  16. Teacher Accountability at High Performing Charter Schools

    Science.gov (United States)

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  17. Advanced high performance solid wall blanket concepts

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  18. High-speed optical three-axis vector magnetometry based on nonlinear Hanle effect in rubidium vapor

    Science.gov (United States)

    Azizbekyan, Hrayr; Shmavonyan, Svetlana; Khanbekyan, Aleksandr; Movsisyan, Marina; Papoyan, Aram

    2017-07-01

    The magnetic-field-compensation optical vector magnetometer based on the nonlinear Hanle effect in alkali metal vapor, previously allowing two-axis measurement operation, has been further elaborated for three-axis performance, along with a significant reduction of the measurement time. The upgrade was achieved by implementing a two-beam resonant excitation configuration and a fast maximum-searching algorithm. Results of the proof-of-concept experiments, demonstrating 1 μT B-field resolution, are presented. The applied interest and capability of the proposed technique are analyzed.

  19. High Precision Measurement of the differential vector boson cross-sections with the ATLAS detector

    CERN Document Server

    Armbruster, Aaron James; The ATLAS collaboration

    2017-01-01

    Measurements of the Drell-Yan production of W and Z/gamma bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high precision measurements at a center-of-mass energy of 7 TeV. The measurements are performed for W+, W- and Z/gamma bosons, integrated and as a function of the boson or lepton rapidity and the Z/gamma* mass. Unprecedented precision is reached and strong constraints on parton distribution functions, in particular on the strange density, are found. Z cross sections are also measured at center-of-mass energies of 8 TeV and 13 TeV, and cross-section ratios to the top-quark pair production have been derived. This ratio measurement leads to a cancellation of systematic effects and allows for a high precision comparison to the theory predictions. The cross section of single W events has also been measured precisely at center-of-mass energies of 8 TeV and 13 TeV and the W charge asymmetry has been determ...

  20. Adding Cross-Platform Support to a High-Throughput Software Stack and Exploration of Vectorization Libraries

    CERN Document Server

    AUTHOR|(CDS)2258962

    This master's thesis was written at the LHCb experiment at CERN. It is part of the initiative for improving software in view of the upcoming upgrade in 2021, which will significantly increase the amount of acquired data. The thesis consists of two parts. The first part is about the exploration of different vectorization libraries and their usefulness for the LHCb collaboration. The second part is about adding cross-platform support to the LHCb software stack. Here, the LHCb stack is successfully ported to ARM (aarch64) and its performance is analyzed. At the end of the thesis, the port to PowerPC (ppc64le) awaits the performance analysis. The main goal of porting the stack is the cost-performance evaluation of the different platforms, to find the most cost-efficient hardware for the new server farm for the upgrade. For this, selected vectorization libraries are extended to support the PowerPC and ARM platforms. And though the same compiler is used, platform-specific changes to the compilation flags are required. In...

  1. Symbolic computer vector analysis

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    A MACSYMA program is described which performs symbolic vector algebra and vector calculus. The program can combine and simplify symbolic expressions including dot products and cross products, together with the gradient, divergence, curl, and Laplacian operators. The distribution of these operators over sums or products is under user control, as are various other expansions, including expansion into components in any specific orthogonal coordinate system. There is also a capability for deriving the scalar or vector potential of a vector field. Examples include derivation of the partial differential equations describing fluid flow and magnetohydrodynamics, for 12 different classic orthogonal curvilinear coordinate systems.
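    MACSYMA itself is rarely available today; as a present-day stand-in (an assumed equivalent, not the program described above), the same kinds of operations, such as gradient, divergence, curl and the Laplacian of symbolic fields, can be reproduced with SymPy's vector module.

    ```python
    from sympy import sin
    from sympy.vector import CoordSys3D, gradient, divergence, curl

    N = CoordSys3D('N')                                  # Cartesian x, y, z
    f = N.x**2 * N.y + sin(N.z)                          # a scalar field
    F = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k          # a vector field

    print(gradient(f))                # grad f
    print(divergence(F))              # div F
    print(curl(F))                    # curl F
    print(divergence(gradient(f)))    # Laplacian of f
    ```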

  2. Elementary vectors

    CERN Document Server

    Wolstenholme, E Œ

    1978-01-01

    Elementary Vectors, Third Edition serves as an introductory course in vector analysis and is intended to present the theoretical and application aspects of vectors. The book covers topics that rigorously explain and provide definitions, principles, equations, and methods in vector analysis. Applications of vector methods to simple kinematical and dynamical problems; central forces and orbits; and solutions to geometrical problems are discussed as well. This edition of the text also provides an appendix, intended for students, which the author hopes to bridge the gap between theory and appl

  3. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications, particularly, have attracted much attention with the rise of smartphones because the coupling of such devices and smartphones enables the continuous health-monitoring in patients' daily life. Especially, it is expected that the high performance biomedical electronics integrated with the human body can open new opportunities in the ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in the personalized health monitoring and/or human-machine interfaces.

  4. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, and competition for resources have been some of the reasons why the scientifi...

  5. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so common, performance issues and problems.The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  6. High-titer recombinant adeno-associated virus production utilizing a recombinant herpes simplex virus type I vector expressing AAV-2 Rep and Cap.

    Science.gov (United States)

    Conway, J E; Rhys, C M; Zolotukhin, I; Zolotukhin, S; Muzyczka, N; Hayward, G S; Byrne, B J

    1999-06-01

    Recombinant adeno-associated virus type 2 (rAAV) vectors have recently been used to achieve long-term, high level transduction in vivo. Further development of rAAV vectors for clinical use requires significant technological improvements in large-scale vector production. In order to facilitate the production of rAAV vectors, a recombinant herpes simplex virus type I vector (rHSV-1), which does not produce ICP27, has been engineered to express the AAV-2 rep and cap genes. The optimal dose of this vector, d27.1-rc, for AAV production has been determined and results in a yield of 380 expression units (EU) of AAV-GFP produced from 293 cells following transfection with AAV-GFP plasmid DNA. In addition, d27.1-rc was also efficient at producing rAAV from cell lines that have an integrated AAV-GFP provirus. Up to 480 EU/cell of AAV-GFP could be produced from the cell line GFP-92, a proviral, 293-derived cell line. Effective amplification of rAAV vectors introduced into 293 cells by infection was also demonstrated. Passage of rAAV with d27.1-rc results in up to 200-fold amplification of AAV-GFP with each passage after coinfection of the vectors. Efficient, large-scale production (>10⁹ cells) of AAV-GFP from a proviral cell line was also achieved and these stocks were free of replication-competent AAV. The described rHSV-1 vector provides a novel, simple and flexible way to introduce the AAV-2 rep and cap genes and helper virus functions required to produce high-titer rAAV preparations from any rAAV proviral construct. The efficiency and potential for scalable delivery of d27.1-rc to producer cell cultures should facilitate the production of sufficient quantities of rAAV vectors for clinical application.

  7. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O EcosystemParallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  8. Nanosatellite High-Precision Magnetic Missions Enabled by Advances in a Stand-Alone Scalar/Vector Absolute Magnetometer

    Science.gov (United States)

    Hulot, G.; Leger, J. M.; Vigneron, P.; Jager, T.; Bertrand, F.; Coisson, P.; Deram, P.; Boness, A.; Tomasini, L.; Faure, B.

    2017-12-01

    Satellites of the ESA Swarm mission currently in operation carry a new generation of Absolute Scalar Magnetometers (ASM), which nominally deliver 1 Hz scalar data for calibrating the relative fluxgate magnetometers that complete the magnetometry payload (together with star cameras, STR, for attitude restitution) and provide extremely accurate scalar measurements of the magnetic field for science investigations. These ASM instruments, however, can also operate in two additional modes: a high-frequency 250 Hz scalar mode and a 1 Hz absolute dual-purpose scalar/vector mode. The 250 Hz scalar mode has already allowed the detection of hitherto very poorly documented extremely-low-frequency whistler signals produced by lightning in the atmosphere, while the 1 Hz scalar/vector mode has provided data that, combined with attitude restitution from the STR, could be used to produce scientifically relevant core field and lithospheric field models. Both ASM modes have thus now been fully validated for science applications. An effort towards developing an improved and miniaturized version of this instrument is now well under way with CNES support, in the context of the preparation of a 12U nanosatellite mission (NanoMagSat) proposed to be launched to complement the Swarm satellite constellation. This advanced miniaturized ASM could potentially operate in an even more useful mode, simultaneously providing high-frequency (possibly beyond 500 Hz) absolute scalar data and self-calibrated 1 Hz vector data, thus providing scientifically valuable data for multiple science applications. In this presentation, we will illustrate the science such an instrument taken on board a nanosatellite could enable, and report on the current status of the NanoMagSat project that intends to take advantage of it.

  9. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    Science.gov (United States)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that the SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) with nanostructure were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nano-particles to diesel fuel increased the engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results showed that with an increase of the nano-particle concentration (from 40 ppm to 120 ppm) in the diesel fuel, CO2 emission increased. CO emission with the nano-particle diesel fuels was significantly lower compared to pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with fuels containing CNT nano-particles. The trend of NOx emission was the inverse of that of the UHC emission. With the addition of nano-particles to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nano-particles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
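    The paper's data and tuning ranges are not given in the abstract; the sketch below only mirrors the described structure (an RBF-kernel SVM regressor with a search over the penalty parameter C and the kernel width gamma) using scikit-learn, with randomly generated placeholder inputs standing in for the real engine measurements.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Placeholder inputs: [engine speed (rpm), nano-additive dose (ppm)];
    # the target would be, e.g., brake power, bsfc or one of the emissions.
    X = rng.uniform([1500.0, 0.0], [4000.0, 120.0], size=(60, 2))
    y = rng.normal(size=60)                      # placeholder target values

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(model,
                        {"svr__C": [1, 10, 100, 1000],       # penalty parameter
                         "svr__gamma": [0.01, 0.1, 1.0]},    # RBF kernel width
                        cv=3)
    grid.fit(X_tr, y_tr)
    print(grid.best_params_, grid.score(X_te, y_te))
    ```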

  10. Strategy Guideline: Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  11. Long-term bridge performance high priority bridge performance issues.

    Science.gov (United States)

    2014-10-01

    Bridge performance is a multifaceted issue involving performance of materials and protective systems, performance of individual components of the bridge, and performance of the structural system as a whole. The Long-Term Bridge Performance (LTBP)...

  12. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  13. An Introduction to High Performance Fortran

    Directory of Open Access Journals (Sweden)

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  14. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry-compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits, which include metal-oxide-semiconductor field-effect transistors, the first demonstration of flexible Fin-field-effect transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on the electrical, mechanical, and thermal properties of the fabricated devices.

  15. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
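    As a toy illustration of the grouping step only (the input format and function name are invented for this sketch), threads can be bucketed by their lists of calling-instruction addresses so that the odd ones out stand out.

    ```python
    from collections import defaultdict

    def group_threads_by_call_path(call_addresses):
        """Group threads that share the same tuple of calling-instruction
        addresses; threads whose call path differs from the majority are
        often the defective ones worth inspecting first.

        call_addresses : dict mapping thread id -> tuple of return addresses
        """
        groups = defaultdict(list)
        for tid, addresses in call_addresses.items():
            groups[tuple(addresses)].append(tid)
        for addresses, tids in sorted(groups.items(), key=lambda kv: -len(kv[1])):
            print(f"{len(tids)} thread(s) at {[hex(a) for a in addresses]}: {tids}")
        return groups

    # Hypothetical usage:
    # group_threads_by_call_path({0: (0x4004f0, 0x400612), 1: (0x4004f0, 0x400612),
    #                             2: (0x4004f0, 0x4007a8)})
    ```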

  16. Technology Leadership in Malaysia's High Performance School

    Science.gov (United States)

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to headmasters of high performance schools (HPS) as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  17. Toward High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  18. Towards high performance in industrial refrigeration systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  19. Validated high performance liquid chromatographic (HPLC) method ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-22

    Feb 22, 2010 ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... tional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κ B Ligand .... The effect of ... be suitable for preclinical pharmacokinetic studies. The.

  20. Validated High Performance Liquid Chromatography Method for ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, rapid and sensitive high performance liquid ... response, tailing factor and resolution of six replicate injections was < 3 %. ... Cefadroxil monohydrate, Human plasma, Pharmacokinetics Bioequivalence ... Drug-free plasma was obtained from the local .... Influence of probenicid on the renal.

  1. Integrated plasma control for high performance tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  2. Project materials [Commercial High Performance Buildings Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  3. High performance structural ceramics for nuclear industry

    International Nuclear Information System (INIS)

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing non-oxide ceramic based novel materials, processes and products for application in the Nuclear, Chemical, Automotive, Defense and Mining industries.

  4. A new high performance current transducer

    International Nuclear Information System (INIS)

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer is developed using a new technique based on the zero-flux detection principle. It is shown that the new current transducer offers high performance, its magnetic core need not be selected very stringently, and it is easy to manufacture.

  5. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  6. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes.
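
    The event-based batching described above can be illustrated with a short, hypothetical NumPy sketch (not the MORSE or CYBER-205 code): a whole batch of particle histories is advanced together through a one-dimensional slab, with survival biasing and Russian roulette applied as vector operations over the batch.

        import numpy as np

        rng = np.random.default_rng(0)

        def transport_batch(n_particles, sigma_t=1.0, slab_width=5.0, weight_cutoff=0.1):
            """Toy event-based transport: every live history in the batch advances at once."""
            x = np.zeros(n_particles)            # particle positions
            w = np.ones(n_particles)             # statistical weights
            alive = np.ones(n_particles, bool)   # histories still being tracked
            leaked = 0.0
            while alive.any():
                # sample the distance to the next collision for all live particles at once
                d = rng.exponential(1.0 / sigma_t, size=n_particles)
                x = np.where(alive, x + d, x)
                # tally and retire particles that leave the slab
                out = alive & (x >= slab_width)
                leaked += w[out].sum()
                alive &= ~out
                # survival biasing: reduce weight instead of terminating on absorption
                w = np.where(alive, w * 0.8, w)
                # Russian roulette on low-weight particles
                low = alive & (w < weight_cutoff)
                kill = low & (rng.random(n_particles) < 0.5)
                w = np.where(low & ~kill, w * 2.0, w)
                alive &= ~kill
            return leaked / n_particles

        print(transport_batch(10_000))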

  7. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  8. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface between the silicon and the CNF that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  9. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  10. Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted, and a series of evaluations for the next HANA claddings, including their in-pile and out-of-pile performance tests, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as a corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply HANA alloys, which were originally developed for the claddings, to the spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over 4 rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high temperature oxidation resistance compared to the foreign advanced claddings. We established the manufacturing conditions controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps.

  11. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  12. Study of Muon Pairs and Vector Mesons Produced in High Energy Pb-Pb Interactions

    CERN Multimedia

    Karavicheva, T; Atayan, M; Bordalo, P; Constans, N P; Gulkanyan, H; Kluberg, L

    2002-01-01

    NA50. The experiment studies dimuons produced in Pb-Pb and p-A collisions, at nucleon-nucleon c.m. energies of √s = 18 and 30 GeV respectively. The setup accepts dimuons in a kinematical range roughly defined as 0.1 ... 1 GeV/c, and stands maximal luminosity (5×10^7 Pb ions and 10^7 interactions per burst). The physics includes signals which probe the QGP (Quark-Gluon Plasma), namely the φ, J/ψ and ψ′ vector mesons and thermal dimuons, and reference signals, namely the (unseparated) ρ and ω mesons, and Drell-Yan dimuons. The experiment is a continuation, with improved means, of NA38, and expands its study of charmonium suppression and strangeness enhancement. The muons are measured in the former NA10 spectrometer, which is shielded from the hot target region by a beam stopper and absorber wall. The muons traverse 5 m of BeO and C. The impact parameter is determined by a Zero Degree Calorimeter (Ta with silica fibres). Energy dissipation ...

  13. High Performance Commercial Fenestration Framing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform less well as an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  14. Gateway-compatible vectors for high-throughput protein expression in pro- and eukaryotic cell-free systems.

    Science.gov (United States)

    Gagoski, Dejan; Mureev, Sergey; Giles, Nichole; Johnston, Wayne; Dahmer-Heath, Mareike; Škalamera, Dubravka; Gonda, Thomas J; Alexandrov, Kirill

    2015-02-10

    Although numerous techniques for protein expression and production are available, the pace of genome sequencing outstrips our ability to analyze the encoded proteins. To address this bottleneck, we have established a system for parallelized cloning, DNA production and cell-free expression of large numbers of proteins. This system is based on a suite of pCellFree Gateway destination vectors that utilize a Species Independent Translation Initiation Sequence (SITS) that mediates recombinant protein expression in any in vitro translation system. These vectors introduce C or N terminal EGFP and mCherry fluorescent and affinity tags, enabling direct analysis and purification of the expressed proteins. To maximize throughput and minimize the cost of protein production we combined Gateway cloning with Rolling Circle DNA Amplification. We demonstrate that as little as 0.1 ng of plasmid DNA is sufficient for template amplification and production of recombinant human protein in Leishmania tarentolae and Escherichia coli cell-free expression systems. Our experiments indicate that this approach can be applied to large gene libraries as it can be reliably performed in multi-well plates. The resulting protein expression pipeline provides a valuable new tool for applications of the post genomic era. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Fracture toughness of ultra high performance concrete by flexural performance

    Directory of Open Access Journals (Sweden)

    Manolova Emanuela

    2016-01-01

    Full Text Available This paper describes the fracture toughness of the innovative structural material Ultra High Performance Concrete (UHPC), evaluated by flexural performance. The material behaviour under static loading is determined using adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018). Fracture toughness is estimated by various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading using a servo-controlled testing system. This method is used to estimate the contribution of the embedded fiber reinforcement to the improvement of the fracture behaviour of UHPC, through changes in crack-resistance capacity, fracture toughness and energy absorption capacity by various mechanisms. The position of the first crack has been formulated based on the P-δ (load-deflection) response and the P-ε (load-longitudinal deformation in the tensile zone) response, which are used for the calculation of the two toughness indices I5 and I10. The combination of steel fibres with different dimensions leads to a composite having, at the same time, increased crack resistance, first crack formation, ductility and post-peak residual strength.
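
    As an illustration of how such area-based indices are typically computed, the sketch below assumes ASTM C 1018-style definitions in which I5 and I10 are the areas under the load-deflection curve up to 3 and 5.5 times the first-crack deflection, normalized by the area up to first crack; the curve data are synthetic placeholders, not results from this study.

        import numpy as np

        def trapezoid(y, x):
            # simple trapezoidal integration, kept explicit for clarity
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        def toughness_indices(deflection, load, delta_cr):
            """I5 and I10 as area ratios of the P-delta curve (assumed ASTM C 1018 form)."""
            def area_up_to(limit):
                mask = deflection <= limit
                return trapezoid(load[mask], deflection[mask])
            first_crack_area = area_up_to(delta_cr)
            i5 = area_up_to(3.0 * delta_cr) / first_crack_area
            i10 = area_up_to(5.5 * delta_cr) / first_crack_area
            return i5, i10

        # illustrative load-deflection curve (mm, kN); not measured UHPC data
        d = np.linspace(0.0, 2.0, 401)
        p = np.where(d < 0.2, 200.0 * d, 40.0 * np.exp(-0.5 * (d - 0.2)))
        print(toughness_indices(d, p, delta_cr=0.2))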

  16. Vector analysis

    CERN Document Server

    Brand, Louis

    2006-01-01

    The use of vectors not only simplifies treatments of differential geometry, mechanics, hydrodynamics, and electrodynamics, but also makes mathematical and physical concepts more tangible and easy to grasp. This text for undergraduates was designed as a short introductory course to give students the tools of vector algebra and calculus, as well as a brief glimpse into these subjects' manifold applications. The applications are developed to the extent that the uses of the potential function, both scalar and vector, are fully illustrated. Moreover, the basic postulates of vector analysis are brou

  17. Selection vector filter framework

    Science.gov (United States)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to other input samples. The proposed method can be designed to perform a variety of filtering operations including previously developed filtering techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of the weighted median filters and the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method holds the required properties such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, the simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
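
    The basic vector median operation that this framework generalizes can be sketched compactly; the example below picks, from a filtering window of multichannel samples, the sample minimizing the aggregate Euclidean distance to all others. The weighted and directional variants described above are omitted, and the pixel values are arbitrary.

        import numpy as np

        def vector_median(window):
            """Return the vector in `window` (n_samples x channels) with the
            smallest sum of Euclidean distances to the other vectors."""
            diffs = window[:, None, :] - window[None, :, :]
            distance_sums = np.linalg.norm(diffs, axis=2).sum(axis=1)
            return window[np.argmin(distance_sums)]

        # 3x3 neighbourhood of RGB pixels containing one impulsive outlier
        pixels = np.array([[120, 60, 40]] * 8 + [[255, 0, 255]], dtype=float)
        print(vector_median(pixels))   # selects the uncorrupted colour [120. 60. 40.]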

  18. Investigation of Pear Drying Performance by Different Methods and Regression of Convective Heat Transfer Coefficient with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Mehmet Das

    2018-01-01

    Full Text Available In this study, an air heated solar collector (AHSC) dryer was designed to determine the drying characteristics of the pear. Flat pear slices of 10 mm thickness were used in the experiments. The pears were dried both in the AHSC dryer and under the sun. Panel glass temperature, panel floor temperature, panel inlet temperature, panel outlet temperature, drying cabinet inlet temperature, drying cabinet outlet temperature, drying cabinet temperature, drying cabinet moisture, solar radiation, pear internal temperature, air velocity and mass loss of pear were measured at 30 min intervals. Experiments were carried out during June 2017 in Elazig, Turkey. The experiments started at 8:00 a.m. and continued till 18:00. The experiments were continued until the weight changes in the pear slices stopped. Wet basis moisture content (MCw), dry basis moisture content (MCd), adjustable moisture ratio (MR), drying rate (DR) and convective heat transfer coefficient (hc) were calculated with both the AHSC dryer and the open sun drying experiment data. It was found that the values of hc in both drying systems ranged between 12.4 and 20.8 W/m2 °C. Three different kernel models were used in the support vector machine (SVM) regression to construct the predictive model of the calculated hc values for both systems. The mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE) and root relative absolute error (RRAE) analyses were performed to indicate the predictive model's accuracy. As a result, the rate of drying of the pear was examined for both systems and it was observed that the pear had dried earlier in the AHSC drying system. A predictive model was obtained using the SVM regression for the calculated hc values for the pear in the AHSC drying system. The normalized polynomial kernel was determined as the best kernel model in SVM for estimating the hc values.
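
    A minimal scikit-learn sketch of this kind of kernel comparison for regressing hc is given below; the feature set, synthetic data and kernel parameters are placeholders and do not reproduce the study's measurements or its normalized polynomial kernel settings.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import mean_absolute_error, mean_squared_error

        rng = np.random.default_rng(1)
        # placeholder drying features: [air temperature, air velocity, moisture ratio]
        X = rng.uniform([30.0, 0.5, 0.1], [70.0, 2.0, 1.0], size=(100, 3))
        h_c = 12.4 + 4.0 * X[:, 1] + 2.0 * (1.0 - X[:, 2]) + rng.normal(0.0, 0.3, 100)

        for params in ({"kernel": "rbf"}, {"kernel": "poly", "degree": 2}, {"kernel": "linear"}):
            model = make_pipeline(StandardScaler(), SVR(C=10.0, **params))
            model.fit(X[:80], h_c[:80])
            pred = model.predict(X[80:])
            mae = mean_absolute_error(h_c[80:], pred)
            rmse = mean_squared_error(h_c[80:], pred) ** 0.5
            print(params, f"MAE={mae:.3f}", f"RMSE={rmse:.3f}")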

  19. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    DEFF Research Database (Denmark)

    2014-01-01

    The invention describes a new class of highly stable mixed conducting materials based on acceptor doped cerium oxide (CeO2-δ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co-dopant, such as Nb, W and Zn, so that cerium and the co-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high temperature membrane reactor applications. The invention also relates to the manufacturing of supported...

  20. Playa: High-Performance Programmable Linear Algebra

    Directory of Open Access Journals (Sweden)

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  1. Optimizing the design of very high power, high performance converters

    International Nuclear Information System (INIS)

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  2. Robust High Performance Aquaporin based Biomimetic Membranes

    DEFF Research Database (Denmark)

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect... on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90...

  3. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  4. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  5. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  6. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  7. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart-have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  8. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  9. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...

  10. Performance concerns for high duty fuel cycle

    International Nuclear Information System (INIS)

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low costs are the three major requirements of the NPP today. The translation of these three requirements into the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance, high duty/energy fuel designs are needed. The concerns for high duty design include, among other items, core design methods, advanced Safety Analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs have been resolved, such as cladding corrosion and incomplete RCCA insertion (IRI). As the core duty becomes more aggressive, other new issues need to be addressed, such as Axial Offset Anomaly. These new issues are being addressed by combining the new design with advanced methodologies to meet the demanding needs of NPPs. This paper discusses the ability and strategy to meet high duty core requirements, retain flexibility of operation and maintain an acceptable balance of all technical issues. (authors)

  11. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  12. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  13. Planning for high performance project teams

    International Nuclear Information System (INIS)

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  14. A Vector Printing Method for High-Speed Electrohydrodynamic (EHD) Jet Printing Based on Encoder Position Sensors

    Directory of Open Access Journals (Sweden)

    Thanh Huy Phung

    2018-02-01

    Full Text Available Electrohydrodynamic (EHD) jet printing has been widely used in the field of direct micro-nano patterning applications, due to its high resolution printing capability. So far, vector line printing using a single nozzle has been widely used for most EHD printing applications. However, the application has been limited to low-speed printing, to avoid non-uniform line width near the end points where line printing starts and ends. At the end points of vector line printing, the deposited drop amount is likely to be significantly larger than along the rest of the printed lines, due to unavoidable acceleration and deceleration. In this study, we proposed a method to solve these printing quality problems by producing droplets at an equal spacing, irrespective of the printing speed. For this purpose, an encoder processing unit (EPU) was developed, so that the jetting trigger could be generated according to a user-defined spacing by using the encoder position signals, which are used for the positioning control of the two linear stages.
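
    The position-based triggering reduces to a small piece of arithmetic. The hypothetical helper below (not taken from the paper) converts a user-defined drop spacing into a trigger interval in encoder counts, assuming an encoder of known linear resolution; because the trigger follows position rather than time, drops stay equally spaced during acceleration and deceleration.

        def trigger_interval_counts(drop_spacing_um, encoder_resolution_um_per_count=0.5):
            """Encoder counts between jetting triggers for a requested drop spacing."""
            counts = round(drop_spacing_um / encoder_resolution_um_per_count)
            if counts < 1:
                raise ValueError("requested spacing is below the encoder resolution")
            return counts

        # e.g. a 20 um drop pitch with a 0.5 um/count encoder -> fire every 40 counts
        print(trigger_interval_counts(20.0))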

  15. The vector and parallel processing of MORSE code on Monte Carlo Machine

    International Nuclear Information System (INIS)

    Hasegawa, Yukihiro; Higuchi, Kenji.

    1995-11-01

    The multi-group Monte Carlo code for particle transport, MORSE, is modified for high performance computing on the Monte Carlo machine Monte-4. The method and the results are described. Monte-4 was specially developed to realize high performance computing of Monte Carlo codes for particle transport, for which it has been difficult to obtain high performance through vector processing on conventional vector processors. Monte-4 has four vector processor units with special hardware called Monte Carlo pipelines. The vectorization and parallelization of the MORSE code and the performance evaluation on Monte-4 are described. (author)

  16. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  17. High performance separation of lanthanides and actinides

    International Nuclear Information System (INIS)

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid and high performance separations. It is evident from the Van Deemter curve of particle size versus resolution that packing materials with particle sizes less than 2 μm provide better resolution for high speed separations and for resolving complex mixtures compared to 5 μm based supports. In the recent past, chromatographic support materials based on monoliths have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. The monolith support provides significantly higher separation efficiency than particle-packed columns. A clear advantage of the monolith is that it can be operated at higher flow rates but with lower back pressure. A higher operating flow rate, enabled by the higher column permeability, drastically reduces analysis time while maintaining high separation efficiency. The fast separation methods developed in this way were applied to assay the lanthanides and actinides in the dissolver solutions of nuclear reactor fuels.
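
    For reference, the Van Deemter relation invoked above is conventionally written (standard chromatography form, not reproduced from this paper) as

        H = A + \frac{B}{u} + C\,u

    where H is the theoretical plate height, u the mobile-phase linear velocity, A the eddy-diffusion term (roughly proportional to the packing particle diameter d_p), B the longitudinal-diffusion term and C the mass-transfer resistance term (roughly proportional to d_p^2). Smaller particles therefore lower the attainable plate height, which is the basis of the sub-2 μm and monolith arguments made in the abstract.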

  18. Vector velocimeter

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to a compact, reliable and low-cost vector velocimeter for example for determining velocities of particles suspended in a gas or fluid flow, or for determining velocity, displacement, rotation, or vibration of a solid surface, the vector velocimeter comprising a laser...

  19. High Performance OLED Panel and Luminaire

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  20. Multi-objective based on parallel vector evaluated particle swarm optimization for optimal steady-state performance of power systems

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Lee, K Y

    2009-01-01

    In this paper the state-of-the-art extended particle swarm optimization (PSO) methods for solving multi-objective optimization problems are presented. Among these, we emphasize the co-evolution technique of the parallel vector evaluated PSO (VEPSO), analysed and applied to a multi-objective problem...
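
    The co-evolution idea of VEPSO can be sketched on a toy two-objective problem: each swarm minimizes one objective but borrows its social (global-best) attractor from the other swarm. Objectives, swarm sizes and coefficients below are illustrative and not those of the power-system study.

        import numpy as np

        rng = np.random.default_rng(2)
        objectives = (lambda x: np.sum(x ** 2, axis=1),           # objective 1
                      lambda x: np.sum((x - 2.0) ** 2, axis=1))   # objective 2

        n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
        pos = [rng.uniform(-5.0, 5.0, (n, dim)) for _ in range(2)]
        vel = [np.zeros((n, dim)) for _ in range(2)]
        pbest = [p.copy() for p in pos]
        pbest_val = [objectives[s](pos[s]) for s in range(2)]
        gbest = [pbest[s][np.argmin(pbest_val[s])] for s in range(2)]

        for _ in range(100):
            for s in range(2):
                other = 1 - s
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                # co-evolution: the social attractor comes from the other swarm
                vel[s] = (w * vel[s] + c1 * r1 * (pbest[s] - pos[s])
                          + c2 * r2 * (gbest[other] - pos[s]))
                pos[s] = pos[s] + vel[s]
                val = objectives[s](pos[s])
                improved = val < pbest_val[s]
                pbest[s][improved] = pos[s][improved]
                pbest_val[s][improved] = val[improved]
                gbest[s] = pbest[s][np.argmin(pbest_val[s])]

        print("best for objective 1:", gbest[0], "best for objective 2:", gbest[1])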

  1. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  2. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  3. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather and proposes energy saving strategies on the per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.

  4. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  5. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  6. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost. 1 fig

  7. Building Trust in High-Performing Teams

    Directory of Open Access Journals (Sweden)

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  8. Support vector regression model of wastewater bioreactor performance using microbial community diversity indices: effect of stress and bioaugmentation.

    Science.gov (United States)

    Seshan, Hari; Goyal, Manish K; Falk, Michael W; Wuertz, Stefan

    2014-04-15

    The relationship between microbial community structure and function has been examined in detail in natural and engineered environments, but little work has been done on using microbial community information to predict function. We processed microbial community and operational data from controlled experiments with bench-scale bioreactor systems to predict reactor process performance. Four membrane-operated sequencing batch reactors treating synthetic wastewater were operated in two experiments to test the effects of (i) the toxic compound 3-chloroaniline (3-CA) and (ii) bioaugmentation targeting 3-CA degradation, on the sludge microbial community in the reactors. In the first experiment, two reactors were treated with 3-CA and two reactors were operated as controls without 3-CA input. In the second experiment, all four reactors were additionally bioaugmented with a Pseudomonas putida strain carrying a plasmid with a portion of the pathway for 3-CA degradation. Molecular data were generated from terminal restriction fragment length polymorphism (T-RFLP) analysis targeting the 16S rRNA and amoA genes from the sludge community. The electropherograms resulting from these T-RFs were used to calculate diversity indices - community richness, dynamics and evenness - for the domain Bacteria as well as for ammonia-oxidizing bacteria in each reactor over time. These diversity indices were then used to train and test a support vector regression (SVR) model to predict reactor performance based on input microbial community indices and operational data. Considering the diversity indices over time and across replicate reactors as discrete values, it was found that, although bioaugmentation with a bacterial strain harboring a subset of genes involved in the degradation of 3-CA did not bring about 3-CA degradation, it significantly affected the community as measured through all three diversity indices in both the general bacterial community and the ammonia-oxidizer community (
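
    The community indices that feed such a regression can be derived from relative peak abundances; the sketch below uses richness, the Shannon index and Pielou evenness computed from T-RF peak areas as stand-ins, without reproducing the exact index definitions or data of the study.

        import numpy as np

        def diversity_indices(peak_areas):
            """Richness, Shannon index and Pielou evenness for one sample's T-RF peak areas."""
            p = np.asarray(peak_areas, dtype=float)
            p = p[p > 0]
            p = p / p.sum()
            richness = p.size
            shannon = -np.sum(p * np.log(p))
            evenness = shannon / np.log(richness) if richness > 1 else 0.0
            return richness, shannon, evenness

        # hypothetical peak areas for six terminal restriction fragments
        print(diversity_indices([40, 25, 10, 5, 0, 20]))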

  9. Learning Change from Synthetic Aperture Radar Images: Performance Evaluation of a Support Vector Machine to Detect Earthquake and Tsunami-Induced Changes

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2016-09-01

    Full Text Available This study evaluates the performance of a Support Vector Machine (SVM classifier to learn and detect changes in single- and multi-temporal X- and L-band Synthetic Aperture Radar (SAR images under varying conditions. The purpose is to provide guidance on how to train a powerful learning machine for change detection in SAR images and to contribute to a better understanding of potentials and limitations of supervised change detection approaches. This becomes particularly important on the background of a rapidly growing demand for SAR change detection to support rapid situation awareness in case of natural disasters. The application environment of this study thus focuses on detecting changes caused by the 2011 Tohoku earthquake and tsunami disaster, where single polarized TerraSAR-X and ALOS PALSAR intensity images are used as input. An unprecedented reference dataset of more than 18,000 buildings that have been visually inspected by local authorities for damages after the disaster forms a solid statistical population for the performance experiments. Several critical choices commonly made during the training stage of a learning machine are being assessed for their influence on the change detection performance, including sampling approach, location and number of training samples, classification scheme, change feature space and the acquisition dates of the satellite images. Furthermore, the proposed machine learning approach is compared with the widely used change image thresholding. The study concludes that a well-trained and tuned SVM can provide highly accurate change detections that outperform change image thresholding. While good performance is achieved in the binary change detection case, a distinction between multiple change classes in terms of damage grades leads to poor performance in the tested experimental setting. The major drawback of a machine learning approach is related to the high costs of training. The outcomes of this study, however
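    As a rough illustration of the comparison the abstract describes, the sketch below trains a scikit-learn SVM on synthetic per-building change features and compares it with a simple threshold on a log-ratio feature. The feature definitions, class balance, and threshold value are assumptions, not the study's settings.

```python
# Minimal sketch, not the study's pipeline: an SVM on synthetic change
# features derived from pre-/post-event SAR intensities, compared against a
# simple threshold on the log-ratio change image.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
changed = rng.random(n) < 0.3
# Hypothetical features: log-ratio of backscatter and a texture difference.
log_ratio = np.where(changed, rng.normal(1.2, 0.6, n), rng.normal(0.0, 0.5, n))
texture_diff = np.where(changed, rng.normal(0.8, 0.4, n), rng.normal(0.1, 0.3, n))
X = np.column_stack([log_ratio, texture_diff])
y = changed.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
svm.fit(X_tr, y_tr)
print("SVM accuracy:      ", accuracy_score(y_te, svm.predict(X_te)))
print("Threshold accuracy:", accuracy_score(y_te, (X_te[:, 0] > 0.6).astype(int)))
```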

  10. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, a high strength-to-weight ratio, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight in order to maintain the key advantage of high performance fibers, their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid made from PBO. The first approach is extruding around the PBO braid a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  11. Intel Xeon Phi coprocessor high performance programming

    CERN Document Server

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  12. Development of high-performance blended cements

    Science.gov (United States)

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early strength of blended cements, several chemicals were studied as activators for cement hydration. Sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements containing up to 80% fly ash performed better than Type I cement in strength development and durability. Maintaining a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between the added SO4^2- and the Ca^2+ dissolved from cement hydration products.

  13. CÓMPUTO DE ALTO DESEMPEÑO PARA OPERACIONES VECTORIALES EN BLAS-1 // INCREASED COMPUTATIONAL PERFORMANCE FOR VECTOR OPERATIONS ON BLAS-1

    OpenAIRE

    José Antonio Muñoz Gómez; Abimael Jiménez Pérez; Gustavo Rodríguez Gómez

    2014-01-01

    The functions library called Basic Linear Algebra Subprograms (BLAS-1) is considered the programming standard in scientific computing. In this work, we focus on the analysis of various code optimization techniques to increase the computational performance of BLAS-1. In particular, we address a combinatorial approach that explores possible coding methods using the unroll technique with different depths, and vector data programming with MMX and SSE for Intel processors. Using the main func...
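    The record concerns C-level loop unrolling and MMX/SSE vectorization; as a loose analogy only, the following Python sketch times a naive element-by-element loop against a vectorized NumPy call for the BLAS-1 operation axpy (y <- a*x + y), to show why delegating vector operations to optimized kernels pays off.

```python
# Loose analogy, not the paper's C/SSE code: compare an interpreted loop with
# a vectorized kernel for the BLAS-1 axpy operation.
import time
import numpy as np

n = 1_000_000
a = 2.5
x = np.random.rand(n)
y = np.random.rand(n)

def axpy_loop(a, x, y):
    out = y.copy()
    for i in range(len(x)):          # one element at a time, interpreted
        out[i] += a * x[i]
    return out

t0 = time.perf_counter()
axpy_loop(a, x, y)
t1 = time.perf_counter()

t2 = time.perf_counter()
z = y + a * x                        # whole loop runs in optimized compiled code
t3 = time.perf_counter()

print(f"naive Python loop: {t1 - t0:.3f} s")
print(f"vectorized NumPy : {t3 - t2:.5f} s")
```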

  14. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  15. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  16. An integrated high performance fastbus slave interface

    International Nuclear Information System (INIS)

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  17. Decomposition of group-velocity-locked-vector-dissipative solitons and formation of the high-order soliton structure by the product of their recombination.

    Science.gov (United States)

    Wang, Xuan; Li, Lei; Geng, Ying; Wang, Hanxiao; Su, Lei; Zhao, Luming

    2018-02-01

    By using a polarization manipulation and projection system, we numerically decomposed the group-velocity-locked-vector-dissipative solitons (GVLVDSs) from a normal dispersion fiber laser and studied the combination of the projections of the phase-modulated components of the GVLVDS through a polarization beam splitter. Pulses with a structure similar to a high-order vector soliton could be obtained, which could be considered as a pseudo-high-order GVLVDS. It is found that, although GVLVDSs are intrinsically different from group-velocity-locked-vector solitons generated in fiber lasers operated in the anomalous dispersion regime, similar characteristics for the generation of pseudo-high-order GVLVDS are obtained. However, pulse chirp plays a significant role on the generation of pseudo-high-order GVLVDS.

  18. PCR reveals significantly higher rates of Trypanosoma cruzi infection than microscopy in the Chagas vector, Triatoma infestans: High rates found in Chuquisaca, Bolivia

    Directory of Open Access Journals (Sweden)

    Lucero David E

    2007-06-01

    Full Text Available Abstract Background The Andean valleys of Bolivia are the only reported location of sylvatic Triatoma infestans, the main vector of Chagas disease in this country, and the high human prevalence of Trypanosoma cruzi infection in this region is hypothesized to result from the ability of vectors to persist in domestic, peri-domestic, and sylvatic environments. Determination of the rate of Trypanosoma infection in its triatomine vectors is an important element in programs directed at reducing human infections. Traditionally, T. cruzi has been detected in insect vectors by direct microscopic examination of extruded feces, or dissection and analysis of the entire bug. Although this technique has proven to be useful, several drawbacks related to its sensitivity especially in the case of small instars and applicability to large numbers of insects and dead specimens have motivated researchers to look for a molecular assay based on the polymerase chain reaction (PCR as an alternative for parasitic detection of T. cruzi infection in vectors. In the work presented here, we have compared a PCR assay and direct microscopic observation for diagnosis of T. cruzi infection in T. infestans collected in the field from five localities and four habitats in Chuquisaca, Bolivia. The efficacy of the methods was compared across nymphal stages, localities and habitats. Methods We examined 152 nymph and adult T. infestans collected from rural areas in the department of Chuquisaca, Bolivia. For microscopic observation, a few drops of rectal content obtained by abdominal extrusion were diluted with saline solution and compressed between a slide and a cover slip. The presence of motile parasites in 50 microscopic fields was registered using 400× magnification. For the molecular analysis, dissection of the posterior part of the abdomen of each insect followed by DNA extraction and PCR amplification was performed using the TCZ1 (5' – CGA GCT CTT GCC CAC ACG GGT GCT – 3

  19. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing computational requirements in JAERI, introduction of a high-speed computer with a vector processing facility (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for the pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) The present feature of the computational load in JAERI is analyzed by compiling computer utilization statistics. 2) Vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior when run on a scalar machine. 3) Vector processing efficiency is measured for five other nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) The effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics of JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  20. High performance visual display for HENP detectors

    CERN Document Server

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactiv...

  1. High-Performance Vertical Organic Electrochemical Transistors.

    Science.gov (United States)

    Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G

    2018-02-01

    Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting the transistor performance and resulting in a low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m^-1. The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as in implantable devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. High Performance Data Distribution for Scientific Community

    Science.gov (United States)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA find solutions to distribute data from their missions to the scientific community, and their long term archives. This is a complex problem, as it includes a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user to obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP,HTTPS, FTP, GridFTP among others) to obtain the maximum bandwidth, reducing the workload in data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. HIDDRA architecture can be arranged into a data distribution network deployed on several sites that can cooperate to provide former features. HIDDRA has been addressed by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain) that shows a high scalability and performance, opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009

  3. High-performance laboratories and cleanrooms; TOPICAL

    International Nuclear Information System (INIS)

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations - primarily safety driven - that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities

  4. Cloning vector

    Science.gov (United States)

    Guilfoyle, Richard A.; Smith, Lloyd M.

    1994-01-01

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site.

  5. Cloning vector

    Science.gov (United States)

    Guilfoyle, R.A.; Smith, L.M.

    1994-12-27

    A vector comprising a filamentous phage sequence containing a first copy of filamentous phage gene X and other sequences necessary for the phage to propagate is disclosed. The vector also contains a second copy of filamentous phage gene X downstream from a promoter capable of promoting transcription in a bacterial host. In a preferred form of the present invention, the filamentous phage is M13 and the vector additionally includes a restriction endonuclease site located in such a manner as to substantially inactivate the second gene X when a DNA sequence is inserted into the restriction site. 2 figures.

  6. Gardening as vector of a humanization of high-rise building

    Science.gov (United States)

    Lekareva, Nina; Zaslavskaya, Anna

    2018-03-01

    The article is devoted to the integration of vertical gardening into the structure of high-rise buildings under constrained town-planning conditions. On the basis of an analysis of existing experience in the design and construction of "biopositive" high-rise buildings, the ecological, town-planning, social and structural advantages of organizing roof gardens and vertical gardens are considered [1]. The principle of humanization through the greening of high-rise buildings - taking into account the requirements of ecology and building energy efficiency, and improving construction quality while minimizing cost and maximizing comfort - is put forward as the main mechanism for increasing the investment appeal of high-rise construction. The National Standards of Green Construction, designed to adapt international requirements for energy-efficient, eco-friendly and comfortable buildings or complexes to local conditions, are also considered [2,3].

  7. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    2001-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions - the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements at the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neo-classical theory, is discussed. (author)

  8. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  9. Transport in JET high performance plasmas

    International Nuclear Information System (INIS)

    1999-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well-known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions - the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements at the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neo-classical theory, is discussed. (author)

  10. High-performance vertical organic transistors.

    Science.gov (United States)

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood and VOTFTs often require complex patterning techniques using self-assembly processes which impedes a future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographical patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Performance of the CMS High Level Trigger

    CERN Document Server

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  12. Development of a High Performance Spacer Grid

    Energy Technology Data Exchange (ETDEWEB)

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates 16 are chosen from the patent. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed and the analysis results are compared with the test results to enhance the reliability of the models.

  13. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
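    A minimal sketch of the general approach described above (stochastic estimation of the diagonal of an inverse via linear solves rather than matrix factorizations) is given below. The probe count, test matrix, and use of plain conjugate gradients are assumptions for illustration, not the authors' mixed-precision iterative-refinement implementation.

```python
# Illustrative sketch: estimate diag(A^{-1}) by solving A x = v for random
# Rademacher probe vectors v and averaging v * x, avoiding any factorization.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 500
# Symmetric positive definite test matrix standing in for a covariance matrix.
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

def estimate_inverse_diagonal(A, num_probes=200):
    num = np.zeros(A.shape[0])
    den = np.zeros(A.shape[0])
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=A.shape[0])   # Rademacher probe
        x, _ = cg(A, v)                                 # iterative solve
        num += v * x
        den += v * v
    return num / den

est = estimate_inverse_diagonal(A)
exact = np.diag(np.linalg.inv(A))                       # reference, for checking only
print("max relative error:", float(np.max(np.abs(est - exact) / exact)))
```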

  14. Energy Efficient Graphene Based High Performance Capacitors.

    Science.gov (United States)

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed on the investigation of diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary of previous research activities regarding GRP-based capacitors is also provided. It was revealed that many secondary materials such as polymers and metal oxides have been introduced to improve the performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. SISYPHUS: A high performance seismic inversion factory

    Science.gov (United States)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  16. Ultra high performance concrete dematerialization study

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-products such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart(TM) Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving the cost, performance and constructability. This paper describes the use of Ductal(R) Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examined the relationship between the selection of UHPC and the overall environmental performance, cost, constructability, maintenance and operational efficiency as it relates to the EcoSmart Program. The advantages and challenges of using UHPC were outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to the reduced material requirements, reduced transport costs and increased SCM content. refs., tabs., figs.

  17. JT-60U high performance regimes

    International Nuclear Information System (INIS)

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented with an emphasis upon the results from the use of a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi steady states has been significantly improved in reversed shear and high-βp regimes. The reversed shear regime elevated an equivalent Q_DT^eq transiently up to 1.25 (n_D(0)·τ_E·T_i(0) = 8.6×10^20 m^-3·s·keV) in a reactor-relevant thermonuclear dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) with a fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high-βp regime with a high triangularity, achieving a long sustainment of plasma conditions equivalent to Q_DT^eq ≈ 0.16 (n_D(0)·τ_E·T_i(0) ≈ 1.4×10^20 m^-3·s·keV) for ≈4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show significant reduction of thermal and particle diffusivities around the ITB, resulting in a strong E_r shear in the ITB region. The W-shaped divertor is effective for He ash exhaust, demonstrating steady exhaust capability of τ_He*/τ_E ≈ 3-10 in support of ITER. Suppression of neutral back flow and chemical sputtering effect have been observed while MARFE onset density is rather decreased. Negative-ion based neutral beam injection (N-NBI) experiments have created a clear H-mode transition. Enhanced ionization cross-section due to multi-step ionization processes was confirmed as theoretically predicted. A current density profile driven by N-NBI is measured in good agreement with theoretical prediction. N-NBI induced TAE modes characterized as persistent and bursting oscillations have been observed from a low hot-particle beta of β_h ≳ 0.1-0.2% without a significant loss of fast ions. (author)

  18. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
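    The following sketch is not the discretization proposed above (no energy-stable Taylor-series stepping and no Isogeometric Analysis); it is only a bare-bones explicit finite-difference solver for the 1D Allen-Cahn equation, included to illustrate the kind of interface evolution that phase-field models describe.

```python
# Plain explicit finite differences for u_t = eps^2 u_xx - (u^3 - u) with
# periodic boundaries; purely illustrative, not the paper's scheme.
import numpy as np

nx = 200
eps = 0.05
dx = 1.0 / nx
dt = 0.2 * dx**2 / eps**2                   # well below the explicit stability limit
x = np.linspace(0.0, 1.0, nx, endpoint=False)
# Two diffuse interfaces: u ~ +1 for 0.25 < x < 0.75 and ~ -1 outside.
u = np.tanh((0.25 - np.abs(x - 0.5)) / (np.sqrt(2.0) * eps))

for _ in range(5000):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    u = u + dt * (eps**2 * lap - (u**3 - u))                   # explicit Euler step

print("u range after evolution:", round(float(u.min()), 3), round(float(u.max()), 3))
```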

  19. Identifying individuals at high risk of psychosis: predictive utility of Support Vector Machine using structural and functional MRI data

    Directory of Open Access Journals (Sweden)

    Isabel eValli

    2016-04-01

    Full Text Available The identification of individuals at high risk of developing psychosis is entirely based on clinical assessment, associated with limited predictive potential. There is therefore increasing interest in the development of biological markers that could be used in clinical practice for this purpose. We studied 25 individuals with an At Risk Mental State for psychosis and 25 healthy controls using structural MRI, and functional MRI in conjunction with a verbal memory task. Data were analysed using a standard univariate analysis, and with Support Vector Machine (SVM, a multivariate pattern recognition technique that enables statistical inferences to be made at the level of the individual, yielding results with high translational potential. The application of SVM to structural MRI data permitted the identification of individuals at high risk of psychosis with a sensitivity of 68% and a specificity of 76%, resulting in an accuracy of 72% (p<0.001. Univariate volumetric between-group differences did not reach statistical significance. In contrast, the univariate fMRI analysis identified between-group differences (p<0.05 corrected while the application of SVM to the same data did not. Since SVM is well suited at identifying the pattern of abnormality that distinguishes two groups, whereas univariate methods are more likely to identify regions that individually are most different between two groups, our results suggest the presence of focal functional abnormalities in the context of a diffuse pattern of structural abnormalities in individuals at high clinical risk of psychosis.
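    As a schematic of the multivariate pattern-recognition workflow described above, the sketch below runs an SVM with leave-one-out cross-validation on two synthetic groups of 25 "subjects" and reports sensitivity, specificity and accuracy; the random features merely stand in for the MRI-derived measures, and the kernel and group shift are assumptions.

```python
# Schematic only: linear SVM with leave-one-out cross-validation on two
# groups of 25, mirroring the study design but using synthetic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(42)
n_per_group, n_features = 25, 50
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_group, n_features)),   # controls
    rng.normal(0.4, 1.0, (n_per_group, n_features)),   # at-risk group (mean shift)
])
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]  # 1 = at risk

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:   ", (tp + tn) / len(y))
```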

  20. High performance computing, supercomputing, náročné počítání

    Czech Academy of Sciences Publication Activity Database

    Okrouhlík, Miloslav

    2003-01-01

    Vol. 10, No. 5 (2003), p. 429-438 ISSN 1210-2717 R&D Projects: GA ČR GA101/02/0072 Institutional research plan: CEZ:AV0Z2076919 Keywords: high performance computing * vector and parallel computers * programming tools for parallelization Subject RIV: BI - Acoustics

  1. High performance visual display for HENP detectors

    International Nuclear Information System (INIS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations

  2. Development of high performance ODS alloys

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Lin [Texas A & M Univ., College Station, TX (United States); Gao, Fei [Univ. of Michigan, Ann Arbor, MI (United States); Garner, Frank [Texas A & M Univ., College Station, TX (United States)

    2018-01-29

    This project aims to capitalize on insights developed from recent high-dose self-ion irradiation experiments in order to develop and test the next generation of optimized ODS alloys needed to meet the nuclear community's need for high strength, radiation-tolerant cladding and core components, especially with enhanced resistance to void swelling. Two of these insights are that ferrite grains swell earlier than tempered martensite grains, and oxide dispersions currently produced only in ferrite grains require a high level of uniformity and stability to be successful. An additional insight is that ODS particle stability is dependent on as-yet unidentified compositional combinations of dispersoid and alloy matrix, such that dispersoids are stable in MA957 to doses greater than 200 dpa but dissolve in MA956 at doses less than 200 dpa. These findings focus attention on candidate next-generation alloys which address these concerns. Collaboration with two Japanese groups provides this project with two sets of first-round candidate alloys that have already undergone extensive development and testing for unirradiated properties, but have not yet been evaluated for their irradiation performance. The first set of candidate alloys are dual phase (ferrite + martensite) ODS alloys with oxide particles uniformly distributed in both ferrite and martensite phases. The second set of candidate alloys are ODS alloys containing non-standard dispersoid compositions with controllable oxide particle sizes, phases and interfaces.

  3. Low-Cost High-Performance MRI

    Science.gov (United States)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm^3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that such practical ultra-low magnetic field implementations of MRI set new standards for affordable (<$50,000) and robust portable devices.

  4. VEST: Abstract vector calculus simplification in Mathematica

    Science.gov (United States)

    Squire, J.; Burby, J.; Qin, H.

    2014-01-01

    We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper Burby et al. (2013) [12], we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.
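    VEST itself is symbolic Mathematica code; the NumPy sketch below only checks numerically the Levi-Civita contraction identity that underlies such derivations, and then spot-checks the familiar BAC-CAB rule that identities of this kind generate. It is an illustration of the underlying algebra, not a re-implementation of the package.

```python
# Numerical check of eps_ijk eps_ilm = d_jl d_km - d_jm d_kl (sum over i),
# the kind of Levi-Civita property VEST exploits symbolically.
import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    # Sign of the permutation from its number of inversions.
    inversions = sum(1 for a in range(3) for b in range(a + 1, 3) if p[a] > p[b])
    eps[p] = -1.0 if inversions % 2 else 1.0

delta = np.eye(3)
lhs = np.einsum("ijk,ilm->jklm", eps, eps)
rhs = np.einsum("jl,km->jklm", delta, delta) - np.einsum("jm,kl->jklm", delta, delta)
print("contraction identity holds:", np.allclose(lhs, rhs))

# Spot-check of a vector identity this contraction generates:
# a x (b x c) = b (a.c) - c (a.b)
a, b, c = np.random.rand(3), np.random.rand(3), np.random.rand(3)
print(np.allclose(np.cross(a, np.cross(b, c)), b * (a @ c) - c * (a @ b)))
```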

  5. Creating Realistic 3D Graphics with Excel at High School--Vector Algebra in Practice

    Science.gov (United States)

    Benacka, Jan

    2015-01-01

    The article presents the results of an experiment in which Excel applications that depict rotatable and sizable orthographic projection of simple 3D figures with face overlapping were developed with thirty gymnasium (high school) students of age 17-19 as an introduction to 3D computer graphics. A questionnaire survey was conducted to find out…
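    The applications described above are Excel workbooks; purely as an illustration of the same vector algebra, and assuming nothing about the authors' actual spreadsheet formulas, here is a NumPy sketch that rotates a cube's vertices and projects them orthographically onto the viewing plane.

```python
# Rotatable orthographic projection of a unit cube: rotate about z (azimuth)
# and x (elevation), then drop the depth coordinate.
import numpy as np

def rotation(az, el):
    """Rotation about z by azimuth az, then about x by elevation el (radians)."""
    rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0,         0.0,        1.0]])
    rx = np.array([[1.0, 0.0,         0.0],
                   [0.0, np.cos(el), -np.sin(el)],
                   [0.0, np.sin(el),  np.cos(el)]])
    return rx @ rz

# Unit cube vertices as row vectors.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
rotated = verts @ rotation(np.radians(30), np.radians(20)).T
projection_2d = rotated[:, :2]          # orthographic projection: discard depth
print(np.round(projection_2d, 3))
```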

  6. Vector magnetic field inversions of high cadence SOLIS-VSM data

    NARCIS (Netherlands)

    Fischer, C.E.; Keller, C.U.; Snik, F.

    2007-01-01

    We have processed full Stokes observations from the SOLIS VSM in the photospheric lines Fe I 630.15 nm and 630.25 nm. The data sets have high spectral and temporal resolution, moderate spatial resolution, and large polarimetric sensitivity and accuracy. We used the LILIA, an LTE code written by

  7. Support vector machines applications

    CERN Document Server

    Guo, Guodong

    2014-01-01

    Support vector machines (SVM) have both a solid mathematical background and good performance in practical applications. This book focuses on the recent advances and applications of the SVM in different areas, such as image processing, medical practice, computer vision, pattern recognition, machine learning, applied statistics, business intelligence, and artificial intelligence. The aim of this book is to create a comprehensive source on support vector machine applications, especially some recent advances.

  8. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. High performance thermal stress analysis on the earth simulator

    International Nuclear Information System (INIS)

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. A processor node of the Earth Simulator is an 8-way vector processor, and nodes communicate using the message passing interface. Thus, there are two ways to parallelize the finite element method on the Earth Simulator. The first is to assign one processor to one sub-domain; the second is to assign one node (= 8 processors) to one sub-domain, using shared-memory parallelization within the node. Considering that the preconditioned conjugate gradient (PCG) method, one of the suitable linear equation solvers for large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we decided to employ PCG with a hybrid parallelization based on both shared- and distributed-memory parallelization. It is generally hard to obtain good parallel or vector performance, since the finite element method is based on unstructured grids. In such a situation, reordering is indispensable for improving computational performance [2]. In this study, we used three reordering methods, i.e. Reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) preconditioned PCG, but causes load imbalance. On the other hand, CM provides good load balance, but worsens the convergence of ILU PCG if the vector length is too long. Therefore, we used a combined RCM and CM method. DJDS is a method of storing sparse matrices such that longer vector lengths can be obtained. For efficient inter-node parallelization, partitioning methods such as recursive coordinate bisection (RCB) or MeTIS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
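    As a small stand-alone illustration of the reordering step discussed above (not the JAERI code, and covering RCM only), the sketch below applies SciPy's reverse Cuthill-McKee ordering to a deliberately scrambled 2D grid Laplacian and reports the matrix bandwidth before and after, the quantity that drives both ILU-PCG convergence and memory locality.

```python
# Reverse Cuthill-McKee reordering of a sparse symmetric matrix with SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

# 5-point Laplacian on a 20 x 20 grid, standing in for an FEM matrix, with its
# rows/columns scrambled to mimic an unfavourable node numbering.
nx = 20
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx))
A = sp.kronsum(lap1d, lap1d, format="csr")
rng = np.random.default_rng(0)
p0 = rng.permutation(A.shape[0])
A_bad = A[p0, :][:, p0]

def bandwidth(m):
    coo = m.tocoo()
    return int(np.max(np.abs(coo.row - coo.col)))

perm = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
A_rcm = A_bad[perm, :][:, perm]
print("bandwidth scrambled:", bandwidth(A_bad))   # roughly the matrix dimension
print("bandwidth after RCM:", bandwidth(A_rcm))   # close to the grid width
```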

  11. Thermal interface pastes nanostructured for high performance

    Science.gov (United States)

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G" and high tan δ. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  12. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

    Full Text Available In the testing of drugs before sale, their marketing and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing the mobile phase polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substances being tested, is a great advantage in the separation process in comparison to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation column is connected to specific and sensitive detector systems - spectrofluorimetric, diode-array and electrochemical detectors, as well as hyphenated systems such as HPLC-MS and HPLC-NMR - and these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of therapy of a disease.1 Fig. 1 shows a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but most common uses of high performance liquid chromatography. Blood, plasma or

  13. Integrating advanced facades into high performance buildings

    International Nuclear Information System (INIS)

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  14. The need for high performance breeder reactors

    International Nuclear Information System (INIS)

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can be easily demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that nuclear power might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to keep pace with demand; the bottleneck will essentially be the rate at which the ore can be discovered and extracted, not the existing quantities or their grade. Figures as high as 150,000 t/annum and more would quickly be reached, and it is already necessary to ask whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is the reason why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor (GBR) can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, starting from its predecessors in the gas-cooled reactor line, particularly the AGR. It shows how the GBR fuel profits greatly from the LMFBR fuel irradiation experience. It compares GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors respectively. The conclusion, based on a cost-benefit study, is that the GBR must be quickly developed in order

  15. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. Nanocomposite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  16. How to create high-performing teams.

    Science.gov (United States)

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be. Thieme Medical Publishers.

  17. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. Nanocomposite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  18. High performance nano-composite technology development

    International Nuclear Information System (INIS)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new material development is toward not only high performance but also environmental friendliness. Nanocomposite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nanocomposites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nanocomposite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  19. Heterologous prime-boost immunization of Newcastle disease virus vectored vaccines protected broiler chickens against highly pathogenic avian influenza and Newcastle disease viruses.

    Science.gov (United States)

    Kim, Shin-Hee; Samal, Siba K

    2017-07-24

    Avian influenza virus (AIV) is an important pathogen for both human and animal health. There is a great need to develop a safe and effective vaccine for AI infections in the field. Live-attenuated Newcastle disease virus (NDV) vectored AI vaccines have been shown to be effective, but preexisting antibodies to the vaccine vector can affect the protective efficacy of the vaccine in the field. To improve the efficacy of the AI vaccine, we generated a novel vectored vaccine by using a chimeric NDV vector that is serologically distant from NDV. In this study, the protective efficacy of our vaccines was evaluated by using the H5N1 highly pathogenic avian influenza virus (HPAIV) strain A/Vietnam/1203/2004, a prototype strain for vaccine development. The vaccine viruses were three chimeric NDVs expressing the hemagglutinin (HA) protein in combination with the neuraminidase (NA) protein, matrix 1 protein, or nonstructural 1 protein. Comparison of their protective efficacy after single versus prime-boost immunization indicated that prime immunization of 1-day-old SPF chicks with our vaccine viruses, followed by boosting with the conventional NDV vector strain LaSota expressing the HA protein, provided complete protection of chickens against mortality, clinical signs and virus shedding. Further verification of our heterologous prime-boost immunization using commercial broiler chickens suggested that sequential immunization of chickens with the chimeric NDV vector expressing the HA and NA proteins, followed by a boost with the NDV vector expressing the HA protein, can be a promising strategy for field vaccination against HPAIVs and against highly virulent NDVs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. The identification of high potential archers based on relative psychological coping skills variables: A Support Vector Machine approach

    Science.gov (United States)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, A. P. P. Abdul; Razali Abdullah, Mohamad; Aizzat Zakaria, Muhammad; Muaz Alim, Muhammad; Arif Mat Jizat, Jessnor; Fauzi Ibrahim, Mohamad

    2018-03-01

    Support Vector Machine (SVM) has been revealed to be a powerful learning algorithm for classification and prediction. However, the use of SVM for prediction and classification in sport is at its inception. The present study classified and predicted high and low potential archers from a collection of psychological coping skills variables trained on different SVMs. Fifty youth archers with a mean age and standard deviation of (17.0 ± .056), gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models, i.e. linear and fine radial basis function (RBF) kernel functions, were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA), respectively. It was demonstrated that the linear SVM exhibited good accuracy and precision throughout the exercise, with an accuracy of 92% and a considerably lower error rate for the prediction of the HPPA and the LPPA as compared to the fine RBF SVM. The findings of this investigation can be valuable to coaches and sports managers in recognising high-potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
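
    A minimal sketch of the pipeline described above, using synthetic data rather than the archery measurements: k-means produces the two groups and linear and RBF support vector machines are then compared on that grouping (scikit-learn names; the feature count and kernel width are assumptions).

```python
# Sketch of the pipeline described above (synthetic data, not the archery study):
# k-means splits athletes into two groups on coping-skill scores, then linear and
# RBF support vector machines are compared as classifiers of that grouping.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))          # 50 archers, 6 hypothetical coping-skill scores

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # HPPA vs LPPA

for name, model in [("linear", SVC(kernel="linear")),
                    ("fine RBF", SVC(kernel="rbf", gamma=10.0))]:
    acc = cross_val_score(model, X, labels, cv=5).mean()
    print(f"{name:8s} SVM  mean CV accuracy = {acc:.2f}")
```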

  1. Optimizing High Performance Self Compacting Concrete

    Directory of Open Access Journals (Sweden)

    Raymond A Yonathan

    2017-01-01

    Full Text Available The objectives of this paper are to study the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the proportion of each factor in making high performance SCC. The Taguchi method is proposed in this paper as the best solution to reduce the number of specimen variations, which would otherwise exceed 80. The Taguchi data analysis method is applied to provide the composition, the optimization, and the effect of the contributing materials for nine specimen variations. The concrete's workability was analyzed using the slump flow test, V-funnel test, and L-box test. Compressive and porosity tests were performed for the hardened state. Cylindrical specimens with dimensions of 100×200 mm were cast for the compressive test at ages of 3, 7, 14, 21, and 28 days. The porosity test was conducted at 28 days. It is revealed that silica fume contributes greatly to slump flow and porosity. Coarse aggregate is the greatest contributing factor to the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.
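
    For readers unfamiliar with the Taguchi step, the sketch below shows one common form of the analysis on an L9-style layout: a larger-is-better signal-to-noise ratio per trial, averaged per factor level. All numbers and the factor column are invented and are not the paper's data.

```python
# Hedged sketch of the Taguchi analysis step (numbers are invented, not the paper's):
# compute a larger-is-better signal-to-noise ratio for each trial of an L9 array,
# then average S/N per factor level to rank each factor's contribution.
import numpy as np

# 9 trials x 3 repeated compressive-strength results (MPa), hypothetical
y = np.array([[62, 60, 63], [70, 68, 71], [66, 65, 67],
              [72, 73, 71], [64, 66, 65], [69, 70, 68],
              [75, 74, 76], [61, 63, 62], [67, 66, 68]], dtype=float)

sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))    # larger-is-better S/N per trial

# One column of the L9 orthogonal array (e.g. silica fume level 0/1/2), assumed layout
silica_level = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2])
for lvl in range(3):
    print(f"silica level {lvl}: mean S/N = {sn[silica_level == lvl].mean():.2f} dB")
```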

  2. Scattering of massless vector, tensor, and other particles in string theory at high energy

    International Nuclear Information System (INIS)

    Antonov, E.N.

    1990-01-01

    The 2 → 2 and 2 → 3 processes are studied in the multi-Regge kinematics for gluons and gravitons, the first excited states of the open and closed strings. The factorization of the corresponding amplitudes is demonstrated. Explicit relations generalizing the Low-Gribov expressions are obtained in the kinematics where one of the external particles is produced with small transverse momentum. The expressions in the limit α' → 0 coincide with the results of Yang-Mills theory and gravitation at high energies

  3. High Performance Circularly Polarized Microstrip Antenna

    Science.gov (United States)

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  5. High Power Flex-Propellant Arcjet Performance

    Science.gov (United States)

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  6. Silicon Photomultiplier Performance in High Electric Field

    Science.gov (United States)

    Montoya, J.; Morad, J.

    2016-12-01

    Roughly 27% of the universe is thought to be composed of dark matter. The Large Underground Xenon (LUX) experiment relies on the emission of light from xenon atoms after a collision with a dark matter particle. After a particle interaction in the detector, two things can happen: the xenon will emit light and charge. The charge (electrons) in the liquid xenon needs to be pulled into the gas section so that it can interact with the gas and emit light. This allows LUX to convert a single electron into many photons. This is done by applying a high voltage across the liquid and gas regions, effectively ripping electrons out of the liquid xenon and into the gas. The current device used to detect photons is the photomultiplier tube (PMT). These devices are large and costly. In recent years, a new technology that is capable of detecting single photons has emerged, the silicon photomultiplier (SiPM). These devices are cheaper and smaller than PMTs. Their performance in high electric fields, such as those found in LUX, is unknown. It is possible that a large electric field could introduce noise on the SiPM signal, drowning the single-photon detection capability. My hypothesis is that SiPMs will not observe a significant increase in noise at an electric field of roughly 10 kV/cm (an electric field within the range used in detectors like LUX). I plan to test this hypothesis by first rotating the SiPMs with no applied electric field between two metal plates roughly 2 cm apart, providing a control data set, and then, using the same angles, testing the dark counts with the constant electric field applied. Possibly the most important aspect of LUX is the photon detector, because it is what detects the signals. Dark matter is detected in the experiment by looking at the ratio of photons to electrons emitted for a given interaction in the detector. Interactions with a low electron-to-photon ratio are more likely to be dark matter events than those with a high electron-to-photon ratio. The ability to

  7. The Role of Performance Management in the High Performance Organisation

    NARCIS (Netherlands)

    de Waal, André A.; van der Heijden, Beatrice I.J.M.

    2014-01-01

    The allegiance of partnering organisations and their employees to an Extended Enterprise performance is its proverbial sword of Damocles. Literature on Extended Enterprises focuses on collaboration, inter-organizational integration and learning to avoid diminishing or missing allegiance becoming an

  8. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  9. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    Science.gov (United States)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
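
    The distortion-minimization idea can be caricatured as follows. This toy is not HUGO (which derives costs from high-dimensional steganalysis features and embeds with near-optimal coding); it only shows the principle of concentrating ±1 changes where a simple, assumed cost model says they are cheapest.

```python
# Toy sketch of the idea (not HUGO itself): assign each pixel an embedding cost that
# is low in textured regions, then hide message bits by +/-1 changes at the cheapest
# pixels, LSB-matching style.
import numpy as np

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
message = rng.integers(0, 2, size=200)

# Crude cost model (an assumption): smooth areas (small local gradient) are expensive.
grad = np.abs(np.diff(cover, axis=0, prepend=cover[:1])) + \
       np.abs(np.diff(cover, axis=1, prepend=cover[:, :1]))
cost = 1.0 / (1.0 + grad.astype(float))

order = np.argsort(cost, axis=None)          # cheapest pixels first
stego = cover.copy().ravel()
for bit, idx in zip(message, order):
    if stego[idx] % 2 != bit:                # +/-1 change to match the LSB to the bit
        step = 1 if stego[idx] == 0 else (-1 if stego[idx] == 255 else rng.choice((-1, 1)))
        stego[idx] += step
stego = stego.reshape(cover.shape)
```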

  10. BEGA Starter/Alternator—Vector Control Implementation and Performance for Wide Speed Range at Unity Power Factor Operation

    DEFF Research Database (Denmark)

    Boldea, Ion; Coroban-Schramel, Vasile; Andreescu, Gheorghe-Daniel

    2010-01-01

    The Biaxial Excitation Generator for Automobiles (BEGA) is proposed as a solution for integrated starter/alternator systems used in hybrid electric vehicles. This paper demonstrates through experiments and simulations that BEGA has a very large constant power speed range. A vector control structure is proposed for BEGA operation during motoring and generating, at unity power factor with zero d-axis current (id) and zero q-axis flux (Ψq) control. In such conditions, BEGA behaves like a separately excited dc brush (commutator) machine, in the sense that no stator inductance voltage drop occurs...
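
    For context, the zero-id strategy mentioned above sits on top of the standard rotating-frame (Park) transform. The sketch below is a generic illustration of that transform with a zero d-axis current reference, not the BEGA control law, and all current values are arbitrary.

```python
# Not the BEGA controller itself -- just one common (amplitude-invariant) form of the
# Park abc -> dq transform that such a vector-control scheme builds on, with the
# d-axis current reference held at zero.
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transform at electrical rotor angle theta (rad)."""
    k = 2.0 / 3.0
    d = k * (ia * np.cos(theta) + ib * np.cos(theta - 2*np.pi/3) + ic * np.cos(theta + 2*np.pi/3))
    q = -k * (ia * np.sin(theta) + ib * np.sin(theta - 2*np.pi/3) + ic * np.sin(theta + 2*np.pi/3))
    return d, q

id_ref = 0.0        # zero d-axis current, as in the scheme described above
iq_ref = 12.0       # torque-producing current command (A), hypothetical

# Measured phase currents would be fed back and regulated toward (id_ref, iq_ref):
id_meas, iq_meas = abc_to_dq(ia=10.0, ib=-4.0, ic=-6.0, theta=0.8)
err_d, err_q = id_ref - id_meas, iq_ref - iq_meas   # inputs to the PI current regulators
```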

  11. Applying Multi-Class Support Vector Machines for performance assessment of shipping operations: The case of tanker vessels

    DEFF Research Database (Denmark)

    Pagoropoulos, Aris; Møller, Anders H.; McAloone, Tim C.

    2017-01-01

    Identifying the potential of behavioural savings can be challenging, due to the inherent difficulty in analysing the data and operationalizing energy efficiency within the dynamic operating environment of the vessels. This article proposes a supervised learning model for identifying the presence of energy... of feature selection algorithms. Afterwards, a model based on Multi-Class Support Vector Machines (SVM) was constructed and the efficacy of the approach is shown through the application of a test set. The results demonstrate the importance and benefits of machine learning algorithms in driving energy...

  12. Victims and vectors: highly pathogenic avian influenza H5N1 and the ecology of wild birds

    Science.gov (United States)

    Takekawa, John Y.; Prosser, Diann J.; Newman, Scott H.; Muzaffar, Sabir Bin; Hill, Nichola J.; Yan, Baoping; Xiao, Xiangming; Lei, Fumin; Li, Tianxian; Schwarzbach, Steven E.; Howell, Judd A.

    2010-01-01

    The emergence of highly pathogenic avian influenza (HPAI) viruses has raised concerns about the role of wild birds in the spread and persistence of the disease. In 2005, an outbreak of the highly pathogenic subtype H5N1 killed more than 6,000 wild waterbirds at Qinghai Lake, China. Outbreaks have continued to periodically occur in wild birds at Qinghai Lake and elsewhere in Central China and Mongolia. This region has few poultry but is a major migration and breeding area for waterbirds in the Central Asian Flyway, although relatively little is known about migratory movements of different species and connectivity of their wetland habitats. The scientific debate has focused on the role of waterbirds in the epidemiology, maintenance and spread of HPAI H5N1: to what extent are they victims affected by the disease, or vectors that have a role in disease transmission? In this review, we summarise the current knowledge of wild bird involvement in the ecology of HPAI H5N1. Specifically, we present details on: (1) origin of HPAI H5N1; (2) waterbirds as LPAI reservoirs and evolution into HPAI; (3) the role of waterbirds in virus spread and persistence; (4) key biogeographic regions of outbreak; and (5) applying an ecological research perspective to studying AIVs in wild waterbirds and their ecosystems.

  13. Evaluating performance of high efficiency mist eliminators

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off-gas streams with high humidity and liquid aerosols. Droplet-laden air streams can be produced by tank mixing or sparging and by processes such as reforming or evaporative volume reduction. Unfortunately, these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units, and the data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under the use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and supermicron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing of HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. The filter challenge was discontinued at three intermediate dPs to allow determining filter efficiency, first using dioctyl phthalate and then with dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid

  14. Static thrust-vectoring performance of nonaxisymmetric convergent-divergent nozzles with post-exit yaw vanes. M.S. Thesis - George Washington Univ., Aug. 1988

    Science.gov (United States)

    Foley, Robert J.; Pendergraft, Odis C., Jr.

    1991-01-01

    A static (wind-off) test was conducted in the Static Test Facility of the 16-ft transonic tunnel to determine the performance and turning effectiveness of post-exit yaw vanes installed on two-dimensional convergent-divergent nozzles. One nozzle design that was previously tested was used as a baseline, simulating dry-power and afterburning-power nozzles at both 0 and 20 degree pitch vectoring conditions. Vanes were installed on these four nozzle configurations to study the effects of vane deflection angle, longitudinal and lateral location, size, and camber. All vanes were hinged at the nozzle sidewall exit, and in addition, some were also hinged at the vane quarter chord (double-hinged). The vane concepts tested generally produced yaw thrust vectoring angles much smaller than the geometric vane angles, with resultant thrust losses of up to 8 percent. When the nozzles were pitch vectored, yawing effectiveness decreased as the vanes were moved downstream. Thrust penalties and yawing effectiveness both decreased rapidly as the vanes were moved outboard (laterally). Increases in vane length and height increased both yawing effectiveness and thrust ratio losses, while the use of vane camber and double-hinged vanes increased resultant yaw angles by 50 to 100 percent.

  15. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    Full Text Available High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
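
    The owner-computes mapping implied by HPF-style BLOCK and CYCLIC distributions is easy to write down explicitly; the sketch below (not taken from the paper) shows the 1-D index arithmetic that the linear-algebra framework generalizes to affine loop bounds and communication sets.

```python
# Illustrative only: the ownership mapping that HPF-style BLOCK and CYCLIC
# distributions imply, written out for a 1-D array of size n over p processors.
def block_owner(i, n, p):
    """Processor owning global index i under a BLOCK distribution."""
    b = -(-n // p)          # ceil(n / p), the block size
    return i // b

def block_local_index(i, n, p):
    """Local index of global element i within its owner's block."""
    b = -(-n // p)
    return i % b

def cyclic_owner(i, p):
    """Processor owning global index i under a CYCLIC(1) distribution."""
    return i % p

# Example: 10 elements on 3 processors, BLOCK -> blocks of 4, 4, 2.
print([block_owner(i, 10, 3) for i in range(10)])   # [0,0,0,0,1,1,1,1,2,2]
print([cyclic_owner(i, 3) for i in range(10)])      # [0,1,2,0,1,2,0,1,2,0]
```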

  16. An integrated high performance Fastbus slave interface

    International Nuclear Information System (INIS)

    Christiansen, J.; Ljuslin, C.

    1993-01-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip

  17. High Performance Graphene Oxide Based Rubber Composites

    Science.gov (United States)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  18. Initial rheological description of high performance concretes

    Directory of Open Access Journals (Sweden)

    Alessandra Lorenzetti de Castro

    2006-12-01

    Full Text Available Concrete is defined as a composite material and, in rheological terms, it can be understood as a concentrated suspension of solid particles (aggregates in a viscous liquid (cement paste. On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of the concrete is close to that of a Bingham fluid and two rheological parameters regarding its description are needed: yield stress and plastic viscosity. The aim of this paper is to present the initial rheological description of high performance concretes using the modified slump test. According to the results, an increase of yield stress was observed over time, while a slight variation in plastic viscosity was noticed. The incorporation of silica fume showed changes in the rheological properties of fresh concrete. The behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer meant that there was a large reduction in the mixture's yield stress, while plastic viscosity remained practically constant.
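
    For reference, the Bingham model assumed above relates shear stress to shear rate through exactly the two parameters measured here, the yield stress τ₀ and the plastic viscosity μ:

```latex
% Bingham constitutive model: yield stress tau_0, plastic viscosity mu
\[
  \tau = \tau_0 + \mu\,\dot{\gamma} \quad (\tau > \tau_0), \qquad
  \dot{\gamma} = 0 \quad \text{for } \tau \le \tau_0 .
\]
```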

  19. High thermoelectric performance of graphite nanofibers.

    Science.gov (United States)

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2018-02-22

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high thermoelectric performance. This study unveils that the platelet form of GNFs in which graphite layers are perpendicular to the fiber axis can exhibit outstanding thermoelectric properties with a figure of merit ZT reaching 3.55 in a 0.5 nm diameter fiber and 1.1 in a 1.1 nm diameter one. Interestingly, by introducing 14C isotope doping, ZT can even be enhanced up to more than 5, and more than 8 if we include the effect of finite phonon mean free path, which demonstrates the amazing thermoelectric potential of GNFs.
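
    The figure of merit ZT quoted above is the usual dimensionless combination of the Seebeck coefficient S, the electrical conductance σ, the absolute temperature T, and the electronic and phonon contributions to the thermal conductance:

```latex
% Conventional definition of the thermoelectric figure of merit
\[
  ZT = \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{\mathrm{ph}}}
\]
```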

  20. Durability of high performance concrete in seawater

    International Nuclear Information System (INIS)

    Amjad Hussain Memon; Salihuddin Radin Sumadi; Rabitah Handan

    2000-01-01

    This paper reports on the effects of blended cements on the durability of high performance concrete (HPC) in seawater. In this research the effect of seawater was investigated. The specimens were initially subjected to water curing for seven days inside the laboratory at room temperature, followed by seawater curing exposed to the tidal zone until testing. In this study three levels of cement replacement (0%, 30% and 70%) were used. The combined use of chemical and mineral admixtures has resulted in a new generation of concrete called HPC. HPC has been identified as one of the most important advanced materials necessary in the effort to build a nation's infrastructure. HPC opens new opportunities in the utilization of industrial by-products (mineral admixtures) in the construction industry. As a matter of fact, permeability is considered one of the fundamental properties governing the durability of concrete in the marine environment. Results of this investigation indicated that the oxygen permeability values for the blended cement concretes at the age of one year are reduced by a factor of about 2 as compared to the OPC control mix concrete. Therefore, both blended cement concretes are expected to withstand seawater exposure in the tidal zone without serious deterioration. (Author)

  1. Alternative High-Performance Ceramic Waste Forms

    Energy Technology Data Exchange (ETDEWEB)

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL based on the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D). For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersion analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selective area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  2. Vector geometry

    CERN Document Server

    Robinson, Gilbert de B

    2011-01-01

    This brief undergraduate-level text by a prominent Cambridge-educated mathematician explores the relationship between algebra and geometry. An elementary course in plane geometry is the sole requirement for Gilbert de B. Robinson's text, which is the result of several years of teaching and learning the most effective methods from discussions with students. Topics include lines and planes, determinants and linear equations, matrices, groups and linear transformations, and vectors and vector spaces. Additional subjects range from conics and quadrics to homogeneous coordinates and projective geom

  3. Germline Cas9 expression yields highly efficient genome engineering in a major worldwide disease vector, Aedes aegypti.

    Science.gov (United States)

    Li, Ming; Bui, Michelle; Yang, Ting; Bowman, Christian S; White, Bradley J; Akbari, Omar S

    2017-12-05

    The development of CRISPR/Cas9 technologies has dramatically increased the accessibility and efficiency of genome editing in many organisms. In general, in vivo germline expression of Cas9 results in substantially higher activity than embryonic injection. However, no transgenic lines expressing Cas9 have been developed for the major mosquito disease vector Aedes aegypti. Here, we describe the generation of multiple stable, transgenic Ae. aegypti strains expressing Cas9 in the germline, resulting in dramatic improvements in both the consistency and efficiency of genome modifications using CRISPR. Using these strains, we disrupted numerous genes important for normal morphological development, and even generated triple mutants from a single injection. We have also managed to increase the rates of homology-directed repair by more than an order of magnitude. Given the exceptional mutagenic efficiency and specificity of the Cas9 strains we engineered, they can be used for high-throughput reverse genetic screens to help functionally annotate the Ae. aegypti genome. Additionally, these strains represent a step toward the development of novel population control technologies targeting Ae. aegypti that rely on Cas9-based gene drives. Copyright © 2017 the Author(s). Published by PNAS.

  4. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MV) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction for a region in HEVC that is interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one PU in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one prediction unit (PU) mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a small loss in rate-distortion performance, compared to existing transcoding algorithms and normal HEVC coding.
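
    A hedged sketch of the MV-fusion step: the abstract states only that the merged PU's motion vector is interpolated from the H.264/AVC macroblock MVs according to areas and centre distances, so the specific weighting below (area divided by distance) is an assumption used purely for illustration.

```python
# Hedged sketch only -- not the paper's exact weighting. One plausible reading:
# the merged PU's motion vector is an average of the H.264/AVC macroblock MVs,
# weighted by each macroblock's area and penalized by its centre's distance to
# the PU centre.
import numpy as np

def fuse_motion_vectors(mvs, areas, centers, pu_center, eps=1.0):
    """mvs: (k,2) MVs; areas: (k,) overlap areas; centers: (k,2) macroblock centres."""
    mvs, areas, centers = map(np.asarray, (mvs, areas, centers))
    dist = np.linalg.norm(centers - np.asarray(pu_center), axis=1)
    w = areas / (dist + eps)              # closer, larger blocks count more (assumed)
    return (w[:, None] * mvs).sum(axis=0) / w.sum()

# Four 16x16 macroblocks merged into one 32x32 PU (toy numbers):
mv = fuse_motion_vectors(
    mvs=[(3, 1), (4, 1), (3, 2), (5, 0)],
    areas=[256, 256, 256, 256],
    centers=[(8, 8), (24, 8), (8, 24), (24, 24)],
    pu_center=(16, 16))
print(mv)
```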

  5. A support vector machine designed to identify breasts at high risk using multi-probe generated REIS signals: a preliminary assessment

    Science.gov (United States)

    Gur, David; Zheng, Bin; Lederman, Dror; Dhurjaty, Sreeram; Sumkin, Jules; Zuley, Margarita

    2010-02-01

    A new resonance-frequency based electronic impedance spectroscopy (REIS) system with multi-probes, including one central probe and six external probes that are designed to contact the breast skin in a circular form with a radius of 60 millimeters to the central ("nipple") probe, has been assembled and installed in our breast imaging facility. We are conducting a prospective clinical study to test the performance of this REIS system in identifying younger women (detection of a highly suspicious breast lesion and 50 were determined negative during mammography screening. REIS output signal sweeps that we used to compute an initial feature included both amplitude and phase information representing differences between corresponding (matched) EIS signal values acquired from the left and right breasts. A genetic algorithm was applied to reduce the feature set and optimize a support vector machine (SVM) to classify the REIS examinations into "biopsy recommended" and "non-biopsy" recommended groups. Using the leave-one-case-out testing method, the classification performance as measured by the area under the receiver operating characteristic (ROC) curve was 0.816 +/- 0.042. This pilot analysis suggests that the new multi-probe-based REIS system could potentially be used as a risk stratification tool to identify pre-screened young women who are at higher risk of having or developing breast cancer.

  6. Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer

    Science.gov (United States)

    2016-12-01

    Approved for public release; distribution is unlimited. This report (ARL-TR-7894, US Army Research Laboratory, December 2016), titled Vectors and Rotations in 3-Dimensions: Vector Algebra for the C++ Programmer, by Richard Saucier, describes 2 C++ classes: a Vector class for performing vector algebra in 3-dimensional...

  7. Intelligent Facades for High Performance Green Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building-integrated combined-heat-and-power concentrating photovoltaic system with high temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved health of occupants and to mitigate solar heat gain while maximally capturing and transferring onsite solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  8. High-performance commercial building systems

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  9. Fault diagnosis and performance evaluation for high current LIA based on radial basis function neural network

    International Nuclear Information System (INIS)

    Yang Xinglin; Wang Huacen; Chen Nan; Dai Wenhua; Li Jin

    2006-01-01

    The high current linear induction accelerator (LIA) is a complicated experimental physics device, and it is difficult to evaluate and predict its performance. This paper presents a method that combines the wavelet packet transform and a radial basis function (RBF) neural network to build fault diagnosis and performance evaluation, in order to improve the reliability of high current LIAs. The signal characteristic vectors, which are extracted from the energy parameters of the wavelet packet transform, represent the temporal and steady-state features of the pulsed power signal well and reduce the data dimensionality effectively. The fault diagnosis system for the accelerating cells and the trend classification system for the beam current, both based on RBF networks, can perform fault diagnosis and evaluation and provide predictive information for the precise maintenance of high current LIAs. (authors)
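
    A compact sketch of the feature/classifier pipeline described above, with a synthetic signal and invented labels standing in for the accelerator waveforms: wavelet-packet band energies (via PyWavelets) form the characteristic vector, and a small Gaussian RBF network with least-squares output weights does the mapping. The centre selection and kernel width below are naive placeholders, not the authors' choices.

```python
# Sketch of the feature pipeline described above (synthetic pulses, invented labels):
# wavelet-packet energies form the feature vector, and a small Gaussian RBF network
# maps features to a diagnosis score. Not the authors' code.
import numpy as np
import pywt

def wp_energy_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2) for node in wp.get_level(level, "natural")])
    return energies / energies.sum()          # normalized band energies

def rbf_design_matrix(X, centers, sigma=0.5):
    return np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(0)
pulses = rng.normal(size=(40, 256))                 # 40 hypothetical pulsed-power records
X = np.array([wp_energy_features(p) for p in pulses])
y = rng.integers(0, 2, size=40).astype(float)       # invented normal/fault labels

centers = X[:8]                                     # naive choice of RBF centres
G = rbf_design_matrix(X, centers)
w, *_ = np.linalg.lstsq(G, y, rcond=None)           # output weights by least squares

scores = rbf_design_matrix(X[:5], centers) @ w      # predicted diagnosis scores
print(scores)
```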

  10. VECTOR INTEGRATION

    NARCIS (Netherlands)

    Thomas, E. G. F.

    2012-01-01

    This paper deals with the theory of integration of scalar functions with respect to a measure with values in a, not necessarily locally convex, topological vector space. It focuses on the extension of such integrals from bounded measurable functions to the class of integrable functions, proving

  11. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

    International Nuclear Information System (INIS)

    Majumdar, A.; Makowitz, H.

    1987-10-01

    With the development of modern vector/parallel supercomputers and their lower performance clones, it has become possible to increase computational performance by several orders of magnitude compared with the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms with these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation that is optimized for vector/parallel architectures, and not for the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of the iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit methods (i.e., marching) where stability will permit. We call this approach the "EPIC" methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
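
    As a minimal example of the kind of recursion-free, whole-array update such an approach favours, the sketch below advances the 1-D heat conduction equation with an explicit FTCS step; the grid size, diffusivity, and initial condition are arbitrary placeholders.

```python
# Minimal vectorized sketch of the test problem mentioned above: explicit (FTCS)
# time stepping of the 1-D heat equation u_t = alpha * u_xx, written without recursion
# over grid points so a vector unit (or numpy) can process whole arrays at once.
import numpy as np

nx, alpha, dx = 201, 1.0e-4, 1.0 / 200
dt = 0.4 * dx**2 / alpha          # respects the explicit stability limit dt <= dx^2 / (2*alpha)

u = np.zeros(nx)
u[nx // 2] = 1.0                  # hypothetical initial hot spot

for _ in range(500):
    # whole-array (vector) update of the interior points; boundaries held at zero
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
```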

  12. Improving the high performance concrete (HPC) behaviour in high temperatures

    Directory of Open Access Journals (Sweden)

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long attracted the interest of the scientific and technical community, due to the clear advantages it offers in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when the concrete is subjected to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, allowing the outward migration of water vapor and reducing the internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

    High-strength concrete (HAR) is a material of great interest to the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Because of these characteristics, HAR, in its various forms, is in some applications gradually replacing normal-strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and the low permeability

  13. A new methodology for studying dynamics of aerosol particles in sneeze and cough using a digital high-vision, high-speed video system and vector analyses.

    Directory of Open Access Journals (Sweden)

    Hidekazu Nishimura

    Full Text Available Microbial pathogens of respiratory infectious diseases are often transmitted through particles expelled in sneezes and coughs; therefore, understanding particle movement is important for infection control. As a research model, images of sneezes induced by nasal cavity stimulation in healthy adult volunteers were taken with a digital high-vision, high-speed video system equipped with a computer system. The obtained images were enhanced electronically, converted to digital images every 1/300 s, and subjected to vector analysis of the bioparticles contained in the whole sneeze cloud using automatic image processing software. The initial velocity of the particles or their clusters in the sneeze was greater than 6 m/s, but it decreased as the particles moved forward; the momentum of the particles appeared to be lost by 0.15-0.20 s, after which they began a diffusive movement. An approximate equation expressing velocity as a function of elapsed time was obtained from the vector analysis to represent the dynamics of the front-line particles. The same methodology was also applied to a cough: microclouds contained in smoke exhaled in a voluntary cough by a volunteer after one puff of a cigarette were traced as visible aerodynamic surrogates for the invisible bioparticles of a cough. The smoke cough microclouds had an initial velocity greater than 5 m/s. The fastest microclouds were located at the forefront of the cloud mass moving forward; however, their velocity clearly decreased after 0.05 s and they began to diffuse in the environmental airflow. The maximum direct reach of the particles and microclouds driven by sneezing and coughing, unaffected by environmental airflows, was estimated by calculation using the obtained equations to be about 84 cm and 30 cm from the mouth, respectively, both reached in about 0.2 s, showing that data on the dynamics of sneezes and coughs can be obtained by such calculations.
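
    The paper's fitted velocity equation is not given in this record; purely as a hedged illustration of the kind of calculation described, the sketch below fits an assumed exponential decay v(t) = v0*exp(-t/tau) to tracked front-line velocities and integrates it to estimate the maximum direct reach. Both the functional form and the sample numbers are assumptions, not values from the study.

```python
# Fit an assumed exponential velocity decay and integrate it to estimate reach.
import numpy as np
from scipy.optimize import curve_fit

def v_model(t, v0, tau):
    return v0 * np.exp(-t / tau)

# Hypothetical tracked data: time (s) vs. front-line particle speed (m/s).
t_obs = np.array([0.00, 0.03, 0.06, 0.09, 0.12, 0.15, 0.18])
v_obs = np.array([6.2, 4.1, 2.8, 1.9, 1.3, 0.9, 0.6])

(v0, tau), _ = curve_fit(v_model, t_obs, v_obs, p0=(6.0, 0.05))

# Reach up to time T is the integral of v(t): v0*tau*(1 - exp(-T/tau)).
T = 0.2
reach_m = v0 * tau * (1.0 - np.exp(-T / tau))
print(f"v0={v0:.2f} m/s, tau={tau:.3f} s, reach by {T} s ~ {reach_m*100:.0f} cm")
```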

  14. Spectrally high performing quantum cascade lasers

    Science.gov (United States)

    Toor, Fatima

    Quantum cascade (QC) lasers are versatile semiconductor light sources that can be engineered to emit light of almost any wavelength in the mid- to far-infrared (IR) and terahertz region, from 3 to 300 μm [1-5]. Furthermore, QC laser technology in the mid-IR range has great potential for applications in environmental, medical and industrial trace gas sensing [6-10], since several chemical vapors have strong rovibrational frequencies in this range and are uniquely identifiable by their absorption spectra through optical probing of absorption and transmission. Therefore, having a wide range of mid-IR wavelengths in a single QC laser source would greatly increase the specificity of QC laser-based spectroscopic systems, and also make them more compact and field deployable. This thesis presents work on several different approaches to multi-wavelength QC laser sources that take advantage of band-structure engineering and the unipolar nature of QC lasers. Also, since lasers with narrow linewidth are needed for chemical sensing, work is presented on a single-mode distributed feedback (DFB) QC laser. First, a compact four-wavelength QC laser source is presented, based on a 2-by-2 module design with two waveguides, each containing QC laser stacks for two different emission wavelengths: one with 7.0 μm/11.2 μm, and the other with 8.7 μm/12.0 μm. This is the first design of a four-wavelength QC laser source with widely different emission wavelengths that uses minimal optics and electronics. Second, since there are still several unknown factors that affect QC laser performance, results are presented from a first-ever study conducted to determine the effects of waveguide side-wall roughness on QC laser performance, using the two-wavelength waveguides. The results are consistent with Rayleigh scattering in the waveguides, with roughness affecting shorter wavelengths more than longer wavelengths. Third, a versatile time-multiplexed multi-wavelength QC laser system that

  15. Nova performance at ultra high fluence levels

    International Nuclear Information System (INIS)

    Hunt, J.T.

    1986-01-01

    Nova is a ten beam high power Nd:glass laser used for inertial confinement fusion research. It was operated in the high-power, high-energy regime following the completion of construction in December 1984. During this period several interesting nonlinear optical phenomena were observed. These phenomena are discussed in the text. 11 refs., 5 figs

  16. Sensitivity of Support Vector Machine Predictions of Passive Microwave Brightness Temperature Over Snow-covered Terrain in High Mountain Asia

    Science.gov (United States)

    Ahmad, J. A.; Forman, B. A.

    2017-12-01

    High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to downstream run-off. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information System (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVMs), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature, such as snow temperature, air temperature, and vegetation temperature, have positive NSCs, whereas input states that increase volume scattering, such as snow water equivalent or snow density, yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent high-frequency variability. Adherence of this machine learning algorithm to the first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
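
    The study's LIS/SVM code is not shown in this record; the following is a minimal sketch, under stated assumptions, of how normalized sensitivity coefficients could be computed for a trained kernel regression by central finite differences, NSC_j ~ (dTb/dx_j)*(mean x_j / mean Tb). It uses scikit-learn's SVR as a generic stand-in; the variable names and the NSC definition are assumptions rather than the authors' formulation.

```python
# Normalized sensitivity coefficients for a trained SVR via central differences.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def normalized_sensitivity(model, X, eps=1e-2):
    """NSC_j: mean over samples of dTb/dx_j, scaled by mean(x_j)/mean(Tb)."""
    base = model.predict(X)
    nsc = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        h = eps * (X[:, j].std() + 1e-12)
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += h
        Xm[:, j] -= h
        grad = (model.predict(Xp) - model.predict(Xm)) / (2.0 * h)
        nsc[j] = np.mean(grad) * X[:, j].mean() / (base.mean() + 1e-12)
    return nsc

# Usage (hypothetical): X holds LIS-derived states (snow temperature, air temperature,
# SWE, ...) and y the AMSR-E brightness temperatures for one channel.
# svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5)).fit(X, y)
# print(normalized_sensitivity(svm, X))
```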

  17. Durability and Performance of High Performance Infiltration Cathodes

    DEFF Research Database (Denmark)

    Samson, Alfred Junio; Søgaard, Martin; Hjalmarsson, Per

    2013-01-01

    The performance and durability of solid oxide fuel cell (SOFC) cathodes consisting of a porous Ce0.9Gd0.1O1.95 (CGO) infiltrated with nitrates corresponding to the nominal compositions La0.6Sr0.4Co1.05O3-δ (LSC), LaCoO3-δ (LC), and Co3O4 are discussed. At 600°C, the polarization resistance, Rp......, varied as: LSC (0.062Ωcm2)cathode was found to depend on the infiltrate firing temperature and is suggested to originate...... of the infiltrate but also from a better surface exchange property. A 450h test of an LSC-infiltrated CGO cathode showed an Rp with final degradation rate of only 11mΩcm2kh-1. An SOFC with an LSC-infiltrated CGO cathode tested for 1,500h at 700°C and 0.5Acm-2 (60% fuel, 20% air utilization) revealed no measurable...

  18. From adaptive to high-performance structures

    NARCIS (Netherlands)

    Teuffel, P.

    2011-01-01

    Multiple design aspects influence the building performance such as architectural criteria, various environmental impacts and user behaviour. Specific examples are sun, wind, temperatures, function, occupancy, socio-cultural aspects and other contextual aspects and needs. Even though these aspects

  19. Dynamic changes in the characteristics of cationic lipidic vectors after exposure to mouse serum: implications for intravenous lipofection.

    Science.gov (United States)

    Li, S; Tseng, W C; Stolz, D B; Wu, S P; Watkins, S C; Huang, L

    1999-04-01

    Intravenous gene delivery via cationic lipidic vectors gives systemic gene expression particularly in the lung. In order to understand the mechanism of intravenous lipofection, a systematic study was performed to investigate the interactions of lipidic vectors with mouse serum emphasizing how serum affects the biophysical and biological properties of vectors of different lipid compositions. Results from this study showed that lipidic vectors underwent dynamic changes in their characteristics after exposure to serum. Addition of lipidic vectors into serum resulted in an immediate aggregation of vectors. Prolonged incubation of lipidic vectors with serum led to vector disintegration as shown in turbidity study, sucrose-gradient centrifugation analysis and fluorescence resonance energy transfer (FRET) study. Vector disintegration was associated with DNA release and degradation as shown in EtBr intercalation assay and DNA digestion study. Serum-induced disintegration of vectors is a general phenomenon for all cationic lipidic vectors tested in this study. Yet, vectors of different lipid compositions vary greatly in the rate of disintegration. There is an inverse correlation between the disintegration rate of lipidic vectors and their in vivo transfection efficiency. Vectors with a rapid rate of disintegration such as those containing dioleoyl-phosphatidylethanolamine (DOPE) poorly stayed in the lung and were barely active in transfecting cells. In contrast, cholesterol-containing vectors that had a rapid aggregation and a slow disintegration were highly efficient in transfecting cells in vivo. The results of this study explain why cationic lipidic vectors of different lipid compositions have a dramatic difference in their in vivo transfection efficiency. These results also suggest that the study of the interactions of lipidic vectors with serum may serve as a predictive model for the in vivo efficiency of a lipidic vector. Further study of the numerous interactions of

  20. Flight-Determined Subsonic Longitudinal Stability and Control Derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) with Thrust Vectoring

    Science.gov (United States)

    Iliff, Kenneth W.; Wang, Kon-Sheng Charles

    1997-01-01

    The subsonic longitudinal stability and control derivatives of the F-18 High Angle of Attack Research Vehicle (HARV) are extracted from dynamic flight data using a maximum likelihood parameter identification technique. The technique uses the linearized aircraft equations of motion in their continuous/discrete form and accounts for state and measurement noise as well as thrust-vectoring effects. State noise is used to model the uncommanded forcing function caused by unsteady aerodynamics over the aircraft, particularly at high angles of attack. Thrust vectoring was implemented using electrohydraulically-actuated nozzle postexit vanes and a specialized research flight control system. During maneuvers, a control system feature provided independent aerodynamic control surface inputs and independent thrust-vectoring vane inputs, thereby eliminating correlations between the aircraft states and controls. Substantial variations in control excitation and dynamic response were exhibited for maneuvers conducted at different angles of attack. Opposing vane interactions caused most thrust-vectoring inputs to experience some exhaust plume interference and thus reduced effectiveness. The estimated stability and control derivatives are plotted, and a discussion relates them to predicted values and maneuver quality.

  1. High-performance-vehicle technology. [fighter aircraft propulsion]

    Science.gov (United States)

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  2. An introduction to vectors, vector operators and vector analysis

    CERN Document Server

    Joag, Pramod S

    2016-01-01

    Ideal for undergraduate and graduate students of science and engineering, this book covers fundamental concepts of vectors and their applications in a single volume. The first unit deals with basic formulation, both conceptual and theoretical. It discusses applications of algebraic operations, Levi-Civita notation, and curvilinear coordinate systems like spherical polar and parabolic systems and structures, and analytical geometry of curves and surfaces. The second unit delves into the algebra of operators and their types and also explains the equivalence between the algebra of vector operators and the algebra of matrices. The formulation of eigenvectors and eigenvalues of a linear vector operator is elaborated using vector algebra. The third unit deals with vector analysis, discussing vector-valued functions of a scalar variable and functions of vector argument (both scalar-valued and vector-valued), thus covering both scalar and vector fields and vector integration.

  3. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  4. A high performance thermoacoustic Stirling-engine

    Energy Technology Data Exchange (ETDEWEB)

    Tijani, M.E.H.; Spoelstra, S. [Energy research Centre of the Netherlands (ECN), PO Box 1, 1755 ZG Petten (Netherlands)

    2011-11-10

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine has been designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.

  5. Psychological factors in developing high performance athletes

    DEFF Research Database (Denmark)

    Elbe, Anne-Marie; Wikman, Johan Michael

    2017-01-01

    calls for great efforts in dealing with competitive pressure and demands mental strength with regard to endurance, self-motivation and willpower. But while it is somewhat straightforward to specify the physical and physiological skills needed for top performance in a specific sport, it becomes less...... clear with regard to the psychological skills that are needed. Therefore, the main questions to be addressed in this chapter are: (1) which psychological skills are needed to reach top performance? And (2) (how) can these skills be developed in young talents?...

  6. High Performance Expectations: Concept and causes

    DEFF Research Database (Denmark)

    Andersen, Lotte Bøgh; Jacobsen, Christian Bøtcher

    2017-01-01

    literature research, HPE is defined as the degree to which leaders succeed in expressing ambitious expectations to their employees’ achievement of given performance criteria, and it is analyzed how leadership behavior affects employee-perceived HPE. This study applies a large-scale leadership field...... experiment with 3,730 employees nested in 471 organizations and finds that transformational leadership training as well as transactional and combined training of the leaders significantly increased employees’ HPE relative to a control group. Furthermore, transformational leadership and the use of pecuniary...... rewards seem to be important mechanisms. This implies that public leaders can actually affect HPE through their leadership and thus potentially organizational performance as well....

  7. High Rate Performing Li-ion Battery

    Science.gov (United States)

    2015-02-09

  8. Engendering a high performing organisational culture through ...

    African Journals Online (AJOL)

    Concluding that Africa's poor organisational performances are attributable to some inadequacies in the cultural foundations of countries and organisations, this paper argues for internal branding as the way forward for African organisations. Through internal branding an African organization can use a systematic and ...

  9. Mastering JavaScript high performance

    CERN Document Server

    Adams, Chad R

    2015-01-01

    If you are a JavaScript developer with some experience in development and want to increase the performance of JavaScript projects by building faster web apps, then this book is for you. You should know the basic concepts of JavaScript.

  10. Gamma and Xray spectroscopy at high performance

    International Nuclear Information System (INIS)

    Borchert, G.L.

    1984-01-01

    The author determines that for many interesting problems in gamma and X-ray spectroscopy it is necessary to use crystal diffractometers. The basic features of such instruments are discussed, and the special performance of crystal spectrometers is demonstrated by means of typical examples of various applications
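
    As background for why crystal diffractometers can deliver the resolving power alluded to above, their wavelength selection rests on the Bragg condition; the relations below are standard textbook formulas, not equations quoted from this report.

```latex
% Bragg condition (reflection order n, lattice spacing d) and the resulting resolution relation
n\lambda = 2 d \sin\theta ,
\qquad
\left|\frac{\Delta E}{E}\right| = \left|\frac{\Delta\lambda}{\lambda}\right| = \cot\theta \,\Delta\theta
```

    Differentiating λ = (2d/n) sinθ at fixed d and n gives Δλ/λ = cotθ·Δθ, so the achievable energy resolution is set by how precisely the diffraction angle θ can be measured, which is what makes such instruments attractive for high-performance spectroscopy.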

  11. High Performance Fortran for Aerospace Applications

    National Research Council Canada - National Science Library

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  12. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  13. High performance management bij franchise-supermarkten

    NARCIS (Netherlands)

    Sloot, Laurens; van Nierop, Erjen; de Waal, Andre

    This article presents a study of the extent to which franchise supermarkets satisfy the five factors of high performance organisations (HPO): high-quality managers, high-quality employees, openness and action orientation, continuous improvement and renewal, and

  14. High performance fuel technology development : Development of high performance cladding materials

    International Nuclear Information System (INIS)

    Park, Jeongyong; Jeong, Y. H.; Park, S. Y.

    2012-04-01

    The superior in-pile performance of the HANA claddings has been verified by a successful irradiation test in the Halden research reactor up to a high burn-up of 67 GWd/MTU. The in-pile corrosion and creep resistances of HANA claddings were improved by 40% and 50%, respectively, over Zircaloy-4. HANA claddings have also been irradiated in a commercial reactor for up to 2 reactor cycles, showing corrosion resistance 40% better than that of ZIRLO in the same fuel assembly. Long-term out-of-pile performance tests for the candidate next-generation cladding materials have produced highly reliable test results. The final candidate alloys were selected; they showed corrosion resistance 50% better than the foreign advanced claddings, which is beyond the original target. The LOCA-related properties were also improved by 20% over the foreign advanced claddings. In order to establish the optimal manufacturing process for the inner and outer claddings of the dual-cooled fuel, 18 different kinds of specimens were fabricated with various cold working and annealing conditions. Based on the performance tests and various out-of-pile test results obtained from these specimens, the optimal manufacturing process was established for the inner and outer cladding tubes of the dual-cooled fuel

  15. LIFE EXPECTANCY AND THE HEALTHY LIFE OF A POPULATION – A CONSONANT VECTOR OF ECONOMIC PERFORMANCE, PUBLIC HEALTH SYSTEM AND MORAL VALUES

    Directory of Open Access Journals (Sweden)

    Mihai Luchian

    2015-03-01

    Full Text Available The performance of the therapeutic act depends on the personal training and optimal interdisciplinary cooperation of the medical staff involved; these are dynamic elements directly supported by adequate logistic means (medical equipment and instruments, drugs, procedures, etc.). Considered from a dialectic perspective, all such factors are built up and manifested jointly at an elevated ethical-moral, juridical, deontological and normative level. The cooperative vectors of interest may be protected and strengthened by means of key factors characteristic of the public health system and, implicitly, of its functional medical structures, in order to attain the major goal of consolidating the general health condition, together with the strategic factors for promoting a healthy and economically and socially productive life for the population to come.

  16. Menhir: An Environment for High Performance Matlab

    Directory of Open Access Journals (Sweden)

    Stéphane Chauveau

    1999-01-01

    Full Text Available In this paper we present Menhir, a compiler for generating sequential or parallel code from the Matlab language. The compiler has been designed in the context of using Matlab as a specification language. One of the major features of Menhir is its retargetability to generate parallel and sequential C or Fortran code. We present the compilation process and the target system description for Menhir. Preliminary performance results are given and compared with MCC, the MathWorks Matlab compiler.

  17. Inclusion control in high-performance steels

    International Nuclear Information System (INIS)

    Holappa, L.E.K.; Helle, A.S.

    1995-01-01

    Progress in clean steel production, the fundamentals of oxide and sulphide inclusions, and inclusion morphology in normal and calcium-treated steels are described. The effects of cleanliness and inclusion control on steel properties are discussed. In many demanding structural and engineering applications the nonmetallic inclusions have a quite decisive role in steel performance. An example of combining good mechanical properties and superior machinability by applying inclusion control is presented. (author)

  18. Emerging technologies for high performance infrared detectors

    OpenAIRE

    Tan Chee Leong; Mohseni Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industry defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research has been focused to improve the performance of IRPDs such as...

  19. Development of a high performance liquid chromatography method ...

    African Journals Online (AJOL)

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  20. High Performance Home Building Guide for Habitat for Humanity Affiliates

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  1. Optical electromagnetic vector-field modeling for the accurate analysis of finite diffractive structures of high complexity

    DEFF Research Database (Denmark)

    Dridi, Kim; Bjarklev, Anders Overgaard

    1999-01-01

    An electromagnetic vector-field model for the design of optical components, based on the finite-difference time-domain method and radiation integrals, is presented. Its ability to predict the optical electromagnetic dynamics in structures with complex material distribution is demonstrated. Theoretical

  2. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  3. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  4. IGUANA: a high-performance 2D and 3D visualisation system

    Energy Technology Data Exchange (ETDEWEB)

    Alverson, G. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Eulisse, G. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Muzaffar, S. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Osborne, I. [Department of Physics, Northeastern University, Boston, MA 02115 (United States); Taylor, L. [Department of Physics, Northeastern University, Boston, MA 02115 (United States)]. E-mail: lucas.taylor@cern.ch; Tuura, L.A. [Department of Physics, Northeastern University, Boston, MA 02115 (United States)

    2004-11-21

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user.

  5. IGUANA A high-performance 2D and 3D visualisation system

    CERN Document Server

    Alverson, G; Muzaffar, S; Osborne, I; Taylor, L; Tuura, L A

    2004-01-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high- performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, sl...

  6. IGUANA: a high-performance 2D and 3D visualisation system

    International Nuclear Information System (INIS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L.A.

    2004-01-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user

  7. Development of High Performance Piezoelectric Polyimides

    Science.gov (United States)

    Simpson, Joycelyn O.; St.Clair, Terry L.; Welch, Sharon S.

    1996-01-01

    In this work a series of polyimides are investigated which exhibit a strong piezoelectric response and polarization stability at temperatures in excess of 100 C. This work was motivated by the need to develop piezoelectric sensors suitable for use in high temperature aerospace applications.

  8. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  9. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G; Roedhammer, P; Wildner, H [eds.

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  10. High performance flexible electronics for biomedical devices.

    Science.gov (United States)

    Salvatore, Giovanni A; Munzenrieder, Niko; Zysset, Christoph; Kinkeldei, Thomas; Petti, Luisa; Troster, Gerhard

    2014-01-01

    Plastic electronics is soft, deformable and lightweight and it is suitable for the realization of devices which can form an intimate interface with the body, be implanted or integrated into textile for wearable and biomedical applications. Here, we present flexible electronics based on amorphous oxide semiconductors (a-IGZO) whose performance can achieve MHz frequency even when bent around hair. We developed an assembly technique to integrate complex electronic functionalities into textile while preserving the softness of the garment. All this and further developments can open up new opportunities in health monitoring, biotechnology and telemedicine.

  11. High performance image processing of SPRINT

    Energy Technology Data Exchange (ETDEWEB)

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
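
    The SPRINT code itself is not described in enough detail here to reproduce; as a rough, hedged illustration of the same idea (filtered back-projection is embarrassingly parallel across slices), the sketch below uses scikit-image's radon/iradon and a local process pool. The phantom, slice count, and pool size are placeholders, not details of the SPRINT implementation.

```python
# Parallel filtered back-projection across independent slices (illustrative sketch).
import numpy as np
from multiprocessing import Pool
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 360, endpoint=False)

def reconstruct_slice(sinogram):
    # Ramp-filtered back-projection of one slice's sinogram.
    return iradon(sinogram, theta=theta)

if __name__ == "__main__":
    # Hypothetical stack of sinograms, one per slice (here built from a toy phantom).
    phantom = np.zeros((128, 128))
    phantom[48:80, 48:80] = 1.0
    sinograms = [radon(phantom, theta=theta) for _ in range(8)]

    with Pool() as pool:                      # each slice reconstructed in parallel
        slices = pool.map(reconstruct_slice, sinograms)
    volume = np.stack(slices)
    print(volume.shape)
```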

  12. High-performance commercial building facades

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This ''emerging technology'' of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a ''green'' image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building ''works'' it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear as to how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to

  13. Miniaturized high performance sensors for space plasmas

    International Nuclear Information System (INIS)

    Young, D.T.

    1996-01-01

    Operating under ever more constrained budgets, NASA has turned to a new paradigm for instrumentation and mission development in which smaller, faster, better, cheaper is of primary consideration for future space plasma investigations. The author presents several examples showing the influence of this new paradigm on sensor development and discusses certain implications for the scientific return from resource-constrained sensors. The author also discusses one way to improve space plasma sensor performance, which is to search out new technologies, measurement techniques and instrument analogs from related fields, including, among others, laboratory plasma physics

  14. High Performance Building Mockup in FLEXLAB

    Energy Technology Data Exchange (ETDEWEB)

    McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Selkowitz, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-08-30

    Genentech has ambitious energy and indoor environmental quality performance goals for Building 35 (B35) being constructed by Webcor at the South San Francisco campus. Genentech and Webcor contracted with the Lawrence Berkeley National Laboratory (LBNL) to test building systems including lighting, lighting controls, shade fabric, and automated shading controls in LBNL’s new FLEXLAB facility. The goal of the testing is to ensure that the systems installed in the new office building will function in a way that reduces energy consumption and provides a comfortable work environment for employees.

  15. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
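
    NWChem's eventual implementation is not shown in this abstract; the sketch below illustrates, under stated assumptions, the generic master-worker ('master-slave') pattern described, using mpi4py: rank 0 hands out independent Monte Carlo work units and collects partial results. The work function is a placeholder (a trivial pi estimate), not dynamical nucleation theory, and the task count and rank count are illustrative.

```python
# Generic master-worker Monte Carlo farm with mpi4py (run with: mpiexec -n 4 python farm.py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TAG_WORK, TAG_STOP = 1, 2
N_TASKS = 32

def mc_chunk(seed, n=100_000):
    """Placeholder work unit: estimate pi by rejection sampling."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y < 1.0)

if rank == 0:                                  # master: distribute seeds, gather results
    tasks, results = list(range(N_TASKS)), []
    status = MPI.Status()
    for worker in range(1, size):              # prime each worker with one task
        comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
    while len(results) < N_TASKS:
        res = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results.append(res)
        dest = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=dest, tag=TAG_WORK)
        else:
            comm.send(None, dest=dest, tag=TAG_STOP)
    print("estimate:", np.mean(results))
else:                                          # workers: loop until told to stop
    status = MPI.Status()
    while True:
        seed = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(mc_chunk(seed), dest=0, tag=TAG_WORK)
```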

  16. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture

    Science.gov (United States)

    Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac

    2015-12-01

    Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small, unmanned aerial system called AggieAirTM. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root-mean-squared-error of 5.31 μg cm-2, efficiency of 0.76, and 9 relevance vectors.
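
    The study's relevance vector machine setup is not reproduced in this record; as a simpler, clearly substituted stand-in, the sketch below builds the thin-plate-spline kernel the abstract mentions and fits a kernel ridge regression on it. This is not an RVM (it produces no relevance vectors and no sparsity); the kernel width of 5.4 is taken from the abstract, while the regularization, band names, and usage lines are illustrative assumptions.

```python
# Thin-plate-spline kernel regression as a simplified stand-in for the paper's RVM.
import numpy as np

def tps_kernel(A, B, width=5.4):
    """k(r) = r^2 * log(r), r = ||a - b|| / width (k(0) defined as 0)."""
    r = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) / width
    with np.errstate(divide="ignore", invalid="ignore"):
        k = np.where(r > 0, r ** 2 * np.log(r), 0.0)
    return k

class TPSKernelRidge:
    def __init__(self, width=5.4, lam=1e-3):
        self.width, self.lam = width, lam

    def fit(self, X, y):
        self.X = X
        K = tps_kernel(X, X, self.width)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self

    def predict(self, Xnew):
        return tps_kernel(Xnew, self.X, self.width) @ self.alpha

# Usage (hypothetical): X holds per-pixel inputs (e.g. LAI, NDVI, thermal, red) and
# y the SPAD-derived chlorophyll in ug/cm^2; neither array is the study's data.
# model = TPSKernelRidge().fit(X_train, y_train)
# chl_map = model.predict(X_grid)
```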

  17. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    Science.gov (United States)

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with adsorbent scraped from the chromatographic plate mentioned. An additional HPLC column was packed with a C18-type, silica-based adsorbent of 5 μm particle diameter (LiChrosorb RP-18 from Merck). The dependence of plate height on the flow velocity of the mobile phase in the HPLC and PPEC systems, and on the migration distance of the mobile phase in the TLC system, was presented using a test solute (prednisolone succinate). The highest performance among the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture composed of six hormones.
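
    For readers outside chromatography, plate height H is the efficiency figure being compared here; for a column (or migration distance) of length L it follows from the plate number N estimated from a peak's retention time and base width. These are standard textbook relations, not formulas taken from this paper.

```latex
% Plate number from retention time t_R and base peak width w_b; plate height from length L
N = 16\left(\frac{t_R}{w_b}\right)^{2},
\qquad
H = \frac{L}{N}
```

    In the planar (TLC/PPEC) case the analogous expressions use migration distances in place of times; in all cases a smaller H means higher kinetic performance, which is the basis of the comparison reported above.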

  18. Can Knowledge of the Characteristics of "High Performers" Be Generalised?

    Science.gov (United States)

    McKenna, Stephen

    2002-01-01

    Two managers described as high performing constructed complexity maps of their organization/world. The maps suggested that high performance is socially constructed and negotiated in specific contexts and management competencies associated with it are context specific. Development of high performers thus requires personalized coaching more than…

  19. A high performance totally ordered multicast protocol

    Science.gov (United States)

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10s on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
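
    RMP itself is token-based and far more elaborate than can be shown here; purely as a hedged illustration of what "totally ordered delivery" means, the sketch below implements the simplest alternative, a single fixed sequencer that stamps every message with a global sequence number, plus receivers that hold back out-of-order packets. This is not the RMP design, and all class and variable names are illustrative.

```python
# Minimal fixed-sequencer total-order multicast (illustration, not the RMP protocol).
import threading, queue, itertools, time

class Sequencer:
    """Stamps each submitted message with a global sequence number and fans it out."""
    def __init__(self, receiver_queues):
        self.submit = queue.Queue()
        self.receivers = receiver_queues
        self.counter = itertools.count()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.submit.get()
            stamped = (next(self.counter), msg)
            for q in self.receivers:
                q.put(stamped)

class Receiver:
    """Delivers messages strictly in sequence-number order, buffering gaps."""
    def __init__(self):
        self.incoming = queue.Queue()
        self.next_seq, self.pending, self.delivered = 0, {}, []

    def poll(self):
        while not self.incoming.empty():
            seq, msg = self.incoming.get()
            self.pending[seq] = msg
        while self.next_seq in self.pending:      # deliver any contiguous prefix
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

# Usage: every receiver sees the same delivery order regardless of send interleaving.
r1, r2 = Receiver(), Receiver()
seq = Sequencer([r1.incoming, r2.incoming])
for m in ["a", "b", "c"]:
    seq.submit.put(m)
time.sleep(0.1)
r1.poll(); r2.poll()
assert r1.delivered == r2.delivered
```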

  20. High Performance, Three-Dimensional Bilateral Filtering

    International Nuclear Information System (INIS)

    Bethel, E. Wes

    2008-01-01

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and a quad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role for use in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures in the future.
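
    The report's parallel implementations are not reproduced in this record; as a minimal serial reference for what the 3D bilateral filter computes, the sketch below applies the standard spatial/range Gaussian weighting of Tomasi-style bilateral filtering to a small volume. Window size, sigmas, and the test volume are illustrative assumptions.

```python
# Naive (serial) 3D bilateral filter: spatial Gaussian x range Gaussian weights.
import numpy as np

def bilateral3d(vol, sigma_s=1.5, sigma_r=0.1, radius=2):
    pad = np.pad(vol, radius, mode="edge")
    out = np.empty_like(vol)
    # Precompute the spatial Gaussian over the (2r+1)^3 window.
    ax = np.arange(-radius, radius + 1)
    dz, dy, dx = np.meshgrid(ax, ax, ax, indexing="ij")
    w_spatial = np.exp(-(dz**2 + dy**2 + dx**2) / (2 * sigma_s**2))
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                win = pad[z:z + 2*radius + 1, y:y + 2*radius + 1, x:x + 2*radius + 1]
                w = w_spatial * np.exp(-(win - vol[z, y, x])**2 / (2 * sigma_r**2))
                out[z, y, x] = np.sum(w * win) / np.sum(w)
    return out

# Usage on a small noisy test volume (parallel versions would split `vol` into slabs).
vol = np.random.default_rng(0).normal(0.0, 0.05, (16, 16, 16)) + 1.0
smoothed = bilateral3d(vol)
```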

  1. High-performance sport, marijuana, and cannabimimetics.

    Science.gov (United States)

    Hilderbrand, Richard L

    2011-11-01

    The prohibition on use of cannabinoids in sporting competitions has been widely debated and continues to be a contentious issue. Information continues to accumulate on the adverse health effects of smoked marijuana and the decrement of performance caused by the use of cannabinoids. The objective of this article is to provide an overview of cannabinoids and cannabimimetics that directly or indirectly impact sport, the rules of sport, and the performance of the athlete. This article reviews some of the history of marijuana in Olympic and Collegiate sport, summarizes the guidelines by which a substance is added to the World Anti-Doping Agency Prohibited List, and updates information on the pharmacologic effects of cannabinoids and their mechanism of action. The recently marketed cannabimimetics Spice and K2 are included in the discussion as they activate the same receptors as are activated by THC. The article also provides a view as to why the World Anti-Doping Agency prohibits cannabinoid or cannabimimetic use in competition and should continue to do so.

  2. High Performance, Three-Dimensional Bilateral Filtering

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes

    2008-06-05

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and a quad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role for use in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures in the future.

  3. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations and in the speed of local networks, and as a result a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  4. Australia's new high performance research reactor

    International Nuclear Information System (INIS)

    Miller, R.; Abbate, P.M.

    2003-01-01

    A contract for the design and construction of the Replacement Research Reactor was signed in July 2000 between ANSTO and INVAP from Argentina. Since then the detailed design has been completed, a construction authorization has been obtained, and construction has commenced. The reactor design embodies modern safety thinking together with innovative solutions to ensure a highly safe and reliable plant. Significant effort has also been placed on providing the facility with diverse and ample facilities to maximize its use for irradiating material for radioisotope production as well as providing high neutron fluxes for neutron beam research. The project management organization and planning are commensurate with the complexity of the project and the number of players involved. (author)

  5. High Performance Single Nanowire Tunnel Diodes

    DEFF Research Database (Denmark)

    Wallentin, Jesper; Persson, Johan Mikael; Wagner, Jakob Birkedal

    NWs were contacted in a NW-FET setup. Electrical measurements at room temperature display typical tunnel diode behavior, with a Peak-to-Valley Current Ratio (PVCR) as high as 8.2 and a peak current density as high as 329 A/cm2. Low temperature measurements show improved PVCR of up to 27.6....... is the tunnel (Esaki) diode, which provides a low-resistance connection between junctions. We demonstrate an InP-GaAs NW axial heterostructure with tunnel diode behavior. InP and GaAs can be readily n- and p-doped, respectively, and the heterointerface is expected to have an advantageous type II band alignment...

  6. Future Vehicle Technologies : high performance transportation innovations

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T. [Future Vehicle Technologies Inc., Maple Ridge, BC (Canada)

    2010-07-01

    Battery management systems (BMS) were discussed in this presentation, with particular reference to basic BMS design considerations; safety; undisclosed information about BMS; the essence of BMS; and Future Vehicle Technologies' BMS solution. Basic BMS design considerations that were presented included the balancing methodology; prismatic/cylindrical cells; cell protection; accuracy; PCB design, size and components; communications protocol; cost of manufacture; and expandability. In terms of safety, the presentation addressed lithium fires; high voltage; high voltage ground detection; crash/rollover shutdown; complete pack shutdown capability; and heat shields, casings, and impact protection. BMS bus bar engineering considerations were discussed along with good chip design. It was concluded that FVT's advantage is a unique skillset in automotive technology and the development of speed and cost effectiveness. tabs., figs.

  7. Radiation cured coatings for high performance products

    International Nuclear Information System (INIS)

    Parkins, J.C.; Teesdale, D.H.

    1984-01-01

    Development over the past ten years of radiation curable coating and lacquer systems and the means of curing them has led to new products in the packaging, flooring, furniture and other industries. Solventless lacquer systems formulated with acrylates and other resins enable high levels of durability, scuff resistance and gloss to be achieved. Ultraviolet and electron beam radiation curing are used, the choice depending on the nature of the coating, the product and the scale of the operation. (author)

  8. High thermoelectric performance of graphite nanofibers

    OpenAIRE

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2017-01-01

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high ...

  9. New monomers for high performance polymers

    Science.gov (United States)

    Gratz, Roy F.

    1993-01-01

    This laboratory has been concerned with the development of new polymeric materials with high thermo-oxidative stability for use in the aerospace and electronics industries. Currently, there is special emphasis on developing matrix resins and composites for the high speed civil transport (HSCT) program. This application requires polymers that have service lifetimes of 60,000 hr at 350 F (177 C) and that are readily processible into void-free composites, preferably by melt-flow or powder techniques that avoid the use of high boiling solvents. Recent work has focused on copolymers which have thermally stable imide groups separated by flexible arylene ether linkages, some with trifluoromethyl groups attached to the aromatic rings. The presence of trifluoromethyl groups in monomers and polymers often improves their solubility and processibility. The goal of this research was to synthesize several new monomers containing pendant trifluoromethyl groups and to incorporate these monomers into new imide/arylene ether copolymers. Initially, work was begun on the synthesis of three target compounds. The first two, 3,5-dihydroxybenzotrifluoride and 3-amino-5-hydroxybenzotrifluoride, are intermediates in the synthesis of more complex monomers. The third, 3,5-bis(3-aminophenoxy)benzotrifluoride, is an interesting diamine that could be incorporated into a polyimide directly.

  10. High performance repairing of reinforced concrete structures

    International Nuclear Information System (INIS)

    Iskhakov, I.; Ribakov, Y.; Holschemacher, K.; Mueller, T.

    2013-01-01

    Highlights: Steel fibered high strength concrete is effective for repairing concrete elements; by changing the fibers' content, the required ductility of the repaired element is achieved; experiments prove previously developed design concepts for two-layer beams. Abstract: Steel fibered high strength concrete (SFHSC) is an effective material that can be used for repairing concrete elements. Design of normal strength concrete (NSC) elements that should be repaired using SFHSC can be based on general concepts for design of two-layer beams, consisting of SFHSC in the compressed zone and NSC without fibers in the tensile zone. It was previously reported that such elements are effective when their section carries rather large bending moments. Steel fibers, added to high strength concrete, increase its ultimate deformations due to the additional energy dissipation potential contributed by the fibers. When changing the fibers' content, a required ductility level of the repaired element can be achieved. Providing proper ductility is important for the design of structures subjected to dynamic loadings. The current study discusses experimental results that form a basis for finding the optimal fiber content, yielding the highest Poisson coefficient and ductility of the repaired elements' sections. Some technological issues as well as the distribution of fibers in the cross section of two-layer bending elements are investigated. The experimental results, obtained in the frame of this study, form a basis for general technological provisions related to the repairing of NSC beams and slabs using SFHSC.

  11. Auto-tuning Dense Vector and Matrix-vector Operations for Fermi GPUs

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    applications. As examples, we develop single-precision CUDA kernels for the Euclidian norm (SNRM2) and the matrix-vector multiplication (SGEMV). The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture). We show that auto-tuning can be successfully applied to achieve high performance...
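
    For reference, the two kernels named above compute the Euclidean norm of a vector (SNRM2) and y ← αAx + βy (SGEMV). The plain, single-threaded C++ versions below are only meant to make the operations concrete; they are reference sketches, not the auto-tuned CUDA kernels described in the record.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Reference SNRM2: Euclidean norm of x (no overflow-safe scaling tricks).
float snrm2_ref(const std::vector<float>& x) {
    double sum = 0.0;
    for (float v : x) sum += static_cast<double>(v) * v;
    return static_cast<float>(std::sqrt(sum));
}

// Reference SGEMV (row-major, no transpose): y = alpha * A * x + beta * y.
void sgemv_ref(std::size_t m, std::size_t n, float alpha,
               const std::vector<float>& A,   // m*n elements, row-major
               const std::vector<float>& x, float beta, std::vector<float>& y) {
    for (std::size_t i = 0; i < m; ++i) {
        double dot = 0.0;
        for (std::size_t j = 0; j < n; ++j) dot += A[i * n + j] * x[j];
        y[i] = alpha * static_cast<float>(dot) + beta * y[i];
    }
}
```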

  12. Information processing among high-performance managers

    Directory of Open Access Journals (Sweden)

    S.C. Garcia-Santos

    2010-01-01

    Full Text Available The purpose of this study was to evaluate the information processing of 43 business managers with superior professional performance. The theoretical framework considers three models: the Theory of Managerial Roles of Henry Mintzberg, the Theory of Information Processing, and the Process Model of Response to the Rorschach by John Exner. The participants were evaluated by the Rorschach method. The results show that these managers are able to collect data, evaluate them, and establish rankings properly. At the same time, they are capable of being objective and accurate in the assessment of problems. This information processing style permits them to interpret the world around them on the basis of a very personal and characteristic processing way, or cognitive style.

  13. High temperature performance of polymer composites

    CERN Document Server

    Keller, Thomas

    2014-01-01

    The authors explain the changes in the thermophysical and thermomechanical properties of polymer composites under elevated temperatures and fire conditions. Using microscale physical and chemical concepts they allow researchers to find reliable solutions to their engineering needs on the macroscale. In a unique combination of experimental results and quantitative models, a framework is developed to realistically predict the behavior of a variety of polymer composite materials over a wide range of thermal and mechanical loads. In addition, the authors treat extreme fire scenarios up to more than 1000°C for two hours, presenting heat-protection methods to improve the fire resistance of composite materials and full-scale structural members, and discuss their performance after fire exposure. Thanks to the microscopic approach, the developed models are valid for a variety of polymer composites and structural members, making this work applicable to a wide audience, including materials scientists, polymer chemist...

  14. High performance concrete with blended cement

    International Nuclear Information System (INIS)

    Biswas, P.P.; Saraswati, S.; Basu, P.C.

    2012-01-01

    Principal objectives of the proposed project are twofold: first, to develop an HPC mix suitable for NPP structures with blended cement, and second, to study its durability, which is necessary for the desired long-term performance. The three grades of concrete to be considered in the proposed project are M35, M50 and M60, with two types of blended cement, i.e. Portland slag cement (PSC) and Portland pozzolana cement (PPC). Three types of mineral admixtures - silica fume, fly ash and ground granulated blast furnace slag - will be used. Concrete mixes with OPC and without any mineral admixture will be considered as the reference case. A durability study of these mixes will be carried out.

  15. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  16. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  17. High Performance Fuel Technology Development(I)

    International Nuclear Information System (INIS)

    Song, Kun Woo; Kim, Keon Sik; Bang, Jeong Yong; Park, Je Keon; Chen, Tae Hyun; Kim, Hyung Kyu

    2010-04-01

    The dual-cooled annular fuel has been investigated for the purpose of achieving a power uprate of 20% while decreasing pellet temperature by 30%. The 12x12 rod array and basic design were developed, which are mechanically compatible with the OPR-1000. The reactor core analysis has been performed using this design, and the results have shown that the criteria of nuclear, thermohydraulic and safety design are satisfied and that pellet temperature can be lowered by 40% even at 120% power. The basic design of the fuel components was developed and the cladding thickness was designed through analysis and experiments. Solutions have been proposed and analyzed for the technical issues of 'inner channel blockage' and 'imbalance between inner and outer coolant'. The annular pellet was fabricated with good control of shape and size and, in particular, a new sintering technique has been developed to control the deviation of the inner diameter within ±5 μm. The irradiation test of annular pellets has been conducted up to 10 MWD/kgU to find out the densification and swelling behaviors. Eleven types of candidate materials have been developed for the PCI-endurance pellet, and the material containing the Mn-Al additive showed much better creep performance than the UO2 material. The HANA cladding has been irradiated up to 61 MWD/kgU, and the results have shown that its oxidation resistance is better by 40% than that of Zircaloy. Thirty types of candidate materials for the next generation have been developed through alloy design and property tests.

  18. Carbon nanotubes for high-performance logic

    OpenAIRE

    Chen, Zhihong; Wong, H.S. Phillip; Mitra, Subhasish; Bol, Aggeth; Peng, Lianmao; Hills, Gage; Thissen, Nick

    2014-01-01

    Single-wall carbon nanotubes (CNTs) were discovered in 1993 and have been an area of intense research since then. They offer the right dimensions to explore material science and physical chemistry at the nanoscale and are the perfect system to study low-dimensional physics and transport. In the past decade, more attention has been shifted toward making use of this unique nanomaterial in real-world applications. In this article, we focus on potential applications of CNTs in the high-performanc...

  19. Environmentally friendly, high-performance generation

    International Nuclear Information System (INIS)

    Kalmari, A.

    2003-01-01

    The project developer, owner, and operator of the new 45 MWth BFB-based cogeneration plant in Iisalmi is Termia Oy, part of the Atro Group (formerly Savon Voima Oy). Fired on peat and wood waste and handed over to the customer in November 2002, the plant's electrical output is sold to the parent company and heat locally to customers in Iisalmi. When the construction decision was made, one of the main objectives was to utilise as high a level of indigenous fuels (peat and biomass) as possible, at a high level of efficiency. An environmental impact analysis was carried out, taking into account the impact of various fuels and emissions in terms of combustion and logistics. One main benefit of the type of plant ultimately selected was that the bulk of the fuel can be supplied from the surrounding area. This is very important in terms of fuel supply security and local employment. The government provided a EUR 2.7 million grant for the project, equivalent to 13% of the total EUR 21 million investment budget. Before the plant was built, Termia used approximately 95 GWh of indigenous fuels annually. Today, this figure is 220 GWh. The main fuel used is milled peat. Up to 30% green chips from logging residues can be used. Recycled waste fuel can cover up to 3% of the total fuel requirement

  20. Liquid Argon Calorimeter performance at High Rates

    CERN Document Server

    Seifert, F; The ATLAS collaboration

    2013-01-01

    The expected increase of luminosity at HL-LHC by a factor of ten with respect to LHC luminosities has serious consequences for the signal reconstruction, radiation hardness requirements and operation of the ATLAS liquid argon calorimeters in the endcap and forward regions. Small modules of each type of calorimeter have been built and exposed to a high intensity proton beam of 50 GeV at IHEP/Protvino. The beam is extracted via the bent crystal technique, offering the unique opportunity to cover intensities ranging from $10^6$ p/s up to $3\cdot10^{11}$ p/s. This exceeds the deposited energy per unit time expected at HL-LHC by more than a factor of 100. The correlation between beam intensity and the read-out signal has been studied. The data show clear indications of pulse shape distortion due to the high ionization build-up, in agreement with MC expectations. This is also confirmed by the dependence of the HV currents on beam intensity.

  1. High-performance silicon nanowire bipolar phototransistors

    Science.gov (United States)

    Tan, Siew Li; Zhao, Xingyan; Chen, Kaixiang; Crozier, Kenneth B.; Dan, Yaping

    2016-07-01

    Silicon nanowires (SiNWs) have emerged as sensitive absorbing materials for photodetection at wavelengths ranging from ultraviolet (UV) to the near infrared. Most of the reports on SiNW photodetectors are based on photoconductor, photodiode, or field-effect transistor device structures. These SiNW devices each have their own advantages and trade-offs in optical gain, response time, operating voltage, and dark current noise. Here, we report on the experimental realization of single SiNW bipolar phototransistors on silicon-on-insulator substrates. Our SiNW devices are based on bipolar transistor structures with an optically injected base region and are fabricated using CMOS-compatible processes. The experimentally measured optoelectronic characteristics of the SiNW phototransistors are in good agreement with simulation results. The SiNW phototransistors exhibit significantly enhanced response to UV and visible light, compared with typical Si p-i-n photodiodes. The near infrared responsivities of the SiNW phototransistors are comparable to those of Si avalanche photodiodes but are achieved at much lower operating voltages. Compared with other reported SiNW photodetectors as well as conventional bulk Si photodiodes and phototransistors, the SiNW phototransistors in this work demonstrate the combined advantages of high gain, high photoresponse, low dark current, and low operating voltage.

  2. High Performance Clocks and Gravity Field Determination

    Science.gov (United States)

    Müller, J.; Dirkx, D.; Kopeikin, S. M.; Lion, G.; Panet, I.; Petit, G.; Visser, P. N. A. M.

    2018-02-01

    Time measured by an ideal clock crucially depends on the gravitational potential and velocity of the clock according to general relativity. Technological advances in manufacturing high-precision atomic clocks have rapidly improved their accuracy and stability over the last decade, which have now approached the level of 10^{-18}. This notable achievement, along with the direct sensitivity of clocks to the strength of the gravitational field, makes them practically important for the various geodetic applications that are addressed in the present paper. Based on a fully relativistic description of the background gravitational physics, we discuss the impact of these highly precise clocks on the realization of reference frames and time scales used in geodesy. We discuss the current definitions of basic geodetic concepts and come to the conclusion that the advances in clocks and other metrological technologies will soon require the re-definition of time scales or, at least, clarification to ensure their continuity and consistent use in practice. The relative frequency shift between two clocks is directly related to the difference in the values of the gravity potential at the locations of the clocks. According to general relativity, a relative clock accuracy of 10^{-18} is equivalent to measuring the gravitational red shift effect between two clocks with a height difference of 1 cm. This makes clocks an indispensable tool in high-precision geodesy, in addition to laser ranging and space geodetic techniques. We show how clock measurements can provide geopotential numbers for the realization of gravity-field-related height systems and can resolve discrepancies in classically-determined height systems as well as between national height systems. Another application of clocks is the direct use of observed potential differences for the improved recovery of regional gravity field solutions. Finally, clock measurements for space-borne gravimetry are analyzed along with
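
    The geodetic link invoked in this record is the gravitational frequency shift: to first order, the fractional frequency difference between two clocks equals the gravity potential difference at their locations divided by c². Near the Earth's surface this reproduces the quoted equivalence between a fractional accuracy of 10^{-18} and a 1 cm height difference:

    \[
    \frac{\Delta f}{f} \simeq \frac{\Delta W}{c^{2}} \simeq \frac{g\,\Delta h}{c^{2}}
    \approx \frac{9.8\ \mathrm{m\,s^{-2}} \times 0.01\ \mathrm{m}}{(3\times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 1\times 10^{-18}.
    \]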

  3. Development of high performance hybrid rocket fuels

    Science.gov (United States)

    Zaseck, Christopher R.

    In order to examine paraffin/additive combustion in a motor environment, I conducted experiments on well characterized aluminum based additives. In particular, I investigate the influence of aluminum, unpassivated aluminum, milled aluminum/polytetrafluoroethylene (PTFE), and aluminum hydride on the performance of paraffin fuels for hybrid rocket propulsion. I use an optically accessible combustor to examine the performance of the fuel mixtures in terms of characteristic velocity efficiency and regression rate. Each combustor test consumes a 12.7 cm long, 1.9 cm diameter fuel strand under 160 kg/m²s of oxygen at up to 1.4 MPa. The experimental results indicate that the addition of 5 wt.% 30 μm or 80 nm aluminum to paraffin increases the regression rate by approximately 15% compared to neat paraffin grains. At higher aluminum concentrations and nano-scale particle sizes, the increased melt layer viscosity causes slower regression. Alane and Al/PTFE at 12.5 wt.% increase the regression of paraffin by 21% and 32%, respectively. Finally, an aging study indicates that paraffin can protect air and moisture sensitive particles from oxidation. The opposed burner and aluminum/paraffin hybrid rocket experiments show that additives can alter bulk fuel properties, such as viscosity, that regulate entrainment. The general effect of melt layer properties on the entrainment and regression rate of paraffin is not well understood. Improved understanding of how solid additives affect the properties and regression of paraffin is essential to maximize performance. In this document I investigate the effect of melt layer properties on paraffin regression using inert additives. Tests are performed in the optical cylindrical combustor at ~1 MPa under a gaseous oxygen mass flux of ~160 kg/m²s. The experiments indicate that the regression rate is proportional to $\mu^{0.08}\rho^{0.38}\kappa^{0.82}$. In addition, I explore how to predict fuel viscosity, thermal conductivity, and density prior to testing

  4. Emerging technologies for high performance infrared detectors

    Science.gov (United States)

    Tan, Chee Leong; Mohseni, Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industry defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III-V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs, such as lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature, by making use of advances in nanofabrication and nanotechnology. We will first review nanomaterials with suitable electronic and mechanical properties, such as two-dimensional materials, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional materials such as quantum wells, quantum dots, quantum dots-in-wells, semiconductor superlattices, nanowires, nanotubes, and colloidal quantum dots. We will also review the nanostructures used for enhanced light-matter interaction to boost IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonics, and metamaterials.

  5. Emerging technologies for high performance infrared detectors

    Directory of Open Access Journals (Sweden)

    Tan Chee Leong

    2018-01-01

    Full Text Available Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industry defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs, such as lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature, by making use of advances in nanofabrication and nanotechnology. We will first review nanomaterials with suitable electronic and mechanical properties, such as two-dimensional materials, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional materials such as quantum wells, quantum dots, quantum dots-in-wells, semiconductor superlattices, nanowires, nanotubes, and colloidal quantum dots. We will also review the nanostructures used for enhanced light-matter interaction to boost IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonics, and metamaterials.

  6. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.

  7. High performance magnet power supply optimization

    International Nuclear Information System (INIS)

    Jackson, L.T.

    1988-01-01

    The power supply system for the joint LBL-SLAC proposed accelerator PEP provides the opportunity to take a fresh look at the current techniques employed for controlling large amounts of dc power and at the possibility of using a new one. A basic requirement of ±100 ppm regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppm in the magnet field? Given the ambiguity of the previous statement, is the addition of a transistor bank to a nominal SCR-controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz where multiple supplies are fed from one large dc bus? And what is the cost-performance evaluation of the three possible systems?

  8. High Dynamic Performance Nonlinear Source Emulator

    DEFF Research Database (Denmark)

    Nguyen-Duy, Khiem; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    As research and development of renewable and clean energy based systems is advancing rapidly, the nonlinear source emulator (NSE) is becoming very essential for testing of maximum power point trackers or downstream converters. Renewable and clean energy sources play important roles in both...... terrestrial and nonterrestrial applications. However, most existing NSEs have only been concerned with simulating energy sources in terrestrial applications, which may not be fast enough for testing of nonterrestrial applications. In this paper, a high-bandwidth NSE is developed that is able to simulate...... change in the input source but also to a load step between nominal and open circuit. Moreover, all of these operation modes have a very fast settling time of only 10 μs, which is hundreds of times faster than that of existing works. This attribute allows for higher speed and a more efficient maximum...

  9. High-Performance Energy Applications and Systems

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  10. High-performance sensorless nonlinear power control of a flywheel energy storage system

    International Nuclear Information System (INIS)

    Amodeo, S.J.; Chiacchiarini, H.G.; Solsona, J.A.; Busada, C.A.

    2009-01-01

    The flywheel energy storage systems (FESS) can be used to store and release energy in high power pulsed systems. Based on the use of a homopolar synchronous machine in a FESS, a high performance model-based power flow control law is developed using the feedback linearization methodology. This law is based on the voltage space vector reference frame machine model. To reduce the magnetic losses, a pulse amplitude modulation driver for the armature is more adequate. The restrictions in amplitude and phase imposed by the driver are also included. A full order Luenberger observer for the torque angle and rotor speed is developed to implement a sensorless control strategy. Simulation results are presented to illustrate the performance.

  11. A C++11 implementation of arbitrary-rank tensors for high-performance computing

    Science.gov (United States)

    Aragón, Alejandro M.

    2014-11-01

    This article discusses an efficient implementation of tensors of arbitrary rank by using some of the idioms introduced by the recently published C++ ISO Standard (C++11). With the aim of providing a basic building block for high-performance computing, a single Array class template is carefully crafted, from which vectors, matrices, and even higher-order tensors can be created. An expression template facility is also built around the array class template to provide convenient mathematical syntax. As a result, by using templates, an extra high-level layer is added to the C++ language when dealing with algebraic objects and their operations, without compromising performance. The implementation is tested running on both CPU and GPU.
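
    To make the idea concrete, here is a minimal, illustrative C++11 sketch of an arbitrary-rank array class that uses a variadic template for the extents and flat row-major storage; it is not the Array class from the article (which additionally provides an expression template facility for mathematical syntax).

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <vector>

// Minimal arbitrary-rank tensor: the rank is a compile-time parameter, the
// extents are given at construction, and elements are stored contiguously
// in row-major order.
template <typename T, std::size_t Rank>
class Array {
public:
    template <typename... Dims>
    explicit Array(Dims... dims)
        : extents_{{static_cast<std::size_t>(dims)...}} {
        static_assert(sizeof...(Dims) == Rank, "one extent per dimension");
        std::size_t n = 1;
        for (std::size_t e : extents_) n *= e;
        data_.assign(n, T{});
    }

    template <typename... Idx>
    T& operator()(Idx... idx) {
        static_assert(sizeof...(Idx) == Rank, "one index per dimension");
        const std::array<std::size_t, Rank> i{{static_cast<std::size_t>(idx)...}};
        std::size_t off = 0;
        for (std::size_t d = 0; d < Rank; ++d) off = off * extents_[d] + i[d];
        return data_[off];   // row-major linear offset
    }

private:
    std::array<std::size_t, Rank> extents_;
    std::vector<T> data_;
};

int main() {
    Array<double, 3> t(4, 3, 2);   // a rank-3 tensor with shape 4x3x2
    t(1, 2, 0) = 3.14;
    std::printf("t(1,2,0) = %f\n", t(1, 2, 0));
    return 0;
}
```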

  12. High performance multiple stream data transfer

    International Nuclear Information System (INIS)

    Rademakers, F.; Saiz, P.

    2001-01-01

    The ALICE detector at LHC (CERN) will record raw data at a rate of 1.2 Gigabytes per second. Trying to analyse all this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres to use their computing infrastructure. The remote centres will reconstruct and analyse the events and make the results available. Therefore high-rate data transfer between computing centres (Tiers) will become of paramount importance. The authors will present several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an FTP method that allows connections over several streams at the same time. Thanks to these multiple streams, it is possible to increase the rate at which the data are transferred. While several 'multiple stream FTP' solutions already exist, the authors' method is based on a parallel socket implementation which allows, besides files, also objects (or any large message) to be sent in parallel. A prototype able to manage different transfers will be presented. This is the first step of a system to be implemented that will be able to take care of the connections with the remote centres to exchange data and monitor the status of the transfers.
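
    As an illustration of the multiple-stream idea (not the authors' parallel socket implementation), the following C++ sketch splits a memory buffer into chunks and pushes each chunk over its own TCP connection in a separate thread. The host, port, and the omission of any reassembly metadata (stream index, chunk offset) are simplifying assumptions.

```cpp
// Build on Linux/macOS: g++ -std=c++11 -pthread parallel_send.cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstddef>
#include <thread>
#include <vector>

// Send one chunk of the payload over its own TCP connection.
static void sendChunk(const char* host, int port, const char* data, size_t len) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
        size_t sent = 0;
        while (sent < len) {                      // loop until the chunk is sent
            ssize_t n = send(fd, data + sent, len - sent, 0);
            if (n <= 0) break;
            sent += static_cast<size_t>(n);
        }
    }
    close(fd);
}

int main() {
    const char* host = "127.0.0.1";   // assumed: a receiver listening here
    const int port = 9000;            // assumed port
    const int streams = 4;

    std::vector<char> payload(64 * 1024 * 1024, 'x');   // stand-in for raw data
    const size_t chunk = payload.size() / streams;

    std::vector<std::thread> workers;
    for (int s = 0; s < streams; ++s) {
        const char* p = payload.data() + s * chunk;
        size_t len = (s == streams - 1) ? payload.size() - s * chunk : chunk;
        workers.emplace_back(sendChunk, host, port, p, len);
    }
    for (auto& w : workers) w.join();
    return 0;
}
```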

  13. High performance parallel backprojection on FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Pfanner, Florian; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Reconstruction of tomographic images, i.e., images from a Computed Tomography scanner, is a very time consuming task. Most of the calculation power is needed for the backprojection step. A closer inspection shows that the algorithm for backprojection is easy to parallelize. FPGAs are able to execute many operations at the same time, so a highly parallel algorithm is a requirement for powerful acceleration. To maximize the data flow rate, we realized the backprojection in a pipelined structure with a throughput of one datum per clock cycle. Due to the hardware limitations of the FPGA, it is not possible to reconstruct the image as a whole, so it is necessary to split up the image and reconstruct these parts separately. Despite that, a reconstruction of 512 projections into a 512² image is calculated within 13 ms on a Virtex 5 FPGA. To save hardware resources we use fixed point arithmetic with an accuracy of 23 bits for the calculation. A comparison of the resulting image with an image calculated with floating point arithmetic on a CPU shows that there are no differences between these images. (orig.)
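
    For readers unfamiliar with the backprojection step, the sketch below is a minimal CPU version of a parallel-beam backprojection loop with integer fixed-point accumulation, in the spirit of (but unrelated to) the FPGA design; the geometry, linear interpolation, and the 16-bit fractional scaling are illustrative assumptions, not the parameters of the Virtex 5 implementation.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Parallel-beam backprojection of nProj projections (each nDet detector bins)
// into an N x N image, accumulating in 64-bit fixed point (16 fractional bits).
std::vector<float> backproject(const std::vector<float>& sino,
                               int nProj, int nDet, int N) {
    const double PI = 3.14159265358979323846;
    const int FRAC = 16;                        // fractional bits (illustrative)
    const double scale = double(1 << FRAC);
    std::vector<int64_t> acc(N * N, 0);

    for (int p = 0; p < nProj; ++p) {
        const double theta = PI * p / nProj;
        const double c = std::cos(theta), s = std::sin(theta);
        for (int y = 0; y < N; ++y) {
            for (int x = 0; x < N; ++x) {
                // Detector coordinate of pixel (x, y) for this view.
                const double t = (x - N / 2) * c + (y - N / 2) * s + nDet / 2;
                const int i = static_cast<int>(std::floor(t));
                if (i < 0 || i + 1 >= nDet) continue;
                const double w = t - i;          // linear interpolation weight
                const double v = (1.0 - w) * sino[p * nDet + i]
                               + w * sino[p * nDet + i + 1];
                acc[y * N + x] += static_cast<int64_t>(v * scale);
            }
        }
    }
    std::vector<float> img(N * N);
    for (int k = 0; k < N * N; ++k)
        img[k] = static_cast<float>(acc[k] / scale) * static_cast<float>(PI / nProj);
    return img;
}
```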

  14. Systematics of strong nuclear amplification of gluon saturation from exclusive vector meson production in high energy electron-nucleus collisions

    Science.gov (United States)

    Mäntysaari, Heikki; Venugopalan, Raju

    2018-06-01

    We show that gluon saturation gives rise to a strong modification of the scaling in both the nuclear mass number A and the virtuality $Q^2$ of the vector meson production cross-section in exclusive deep-inelastic scattering off nuclei. We present qualitative analytic expressions for how the scaling exponents are modified as well as quantitative predictions that can be tested at an Electron-Ion Collider.

  15. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C, and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix vector multiplication (GEMV) kernel, and the symmetric/hermitian matrix vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULAR17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA, and should appear in CUBLAS-6.0. KBLAS has been used in large scale simulations of multi-object adaptive optics.
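
    For reference, SYMV/HEMV computes y ← αAx + βy while reading only one triangle of the symmetric (Hermitian) matrix A. The plain C++ sketch below shows the operation for a lower-triangle storage convention, purely to make the kernel's data access pattern explicit; it is a reference sketch, not the CUDA implementation in KBLAS.

```cpp
#include <cstddef>
#include <vector>

// Reference SYMV, lower triangle stored (row-major, n x n): y = alpha*A*x + beta*y.
// Each stored element A(i,j), j <= i, contributes to both y[i] and y[j].
void ssymv_lower_ref(std::size_t n, float alpha, const std::vector<float>& A,
                     const std::vector<float>& x, float beta, std::vector<float>& y) {
    std::vector<double> tmp(n, 0.0);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < i; ++j) {
            const double a = A[i * n + j];
            tmp[i] += a * x[j];   // A(i,j) * x[j]
            tmp[j] += a * x[i];   // A(j,i) * x[i], by symmetry
        }
        tmp[i] += static_cast<double>(A[i * n + i]) * x[i];  // diagonal term
    }
    for (std::size_t i = 0; i < n; ++i)
        y[i] = alpha * static_cast<float>(tmp[i]) + beta * y[i];
}
```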

  16. Technologies of high-performance thermography systems

    Science.gov (United States)

    Breiter, R.; Cabanski, Wolfgang A.; Mauk, K. H.; Kock, R.; Rode, W.

    1997-08-01

    A family of two-dimensional detection modules based on 256 by 256 and 486 by 640 platinum silicide (PtSi) focal planes, or 128 by 128 and 256 by 256 mercury cadmium telluride (MCT) focal planes, for applications in either the 3-5 micrometer (MWIR) or 8-10 micrometer (LWIR) range was recently developed by AIM. A wide variety of applications is covered by the specific features unique to these two material systems. The PtSi units provide state-of-the-art correctability with long-term stable gain and offset coefficients. The MCT units provide extremely fast frame rates, such as 400 Hz, with snapshot integration times as short as 250 microseconds and a thermal resolution (NETD) of less than 20 mK for, e.g., the 128 by 128 LWIR module. The design idea common to all of these modules is the exclusively digital interface, using 14 bit analog-to-digital conversion to provide state-of-the-art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. Device-specific features like bias voltages etc. are identified during the final test and stored in a memory on the driving electronics. This concept allows an easy exchange of IDCAs of the same type without any need for tuning, or, e.g., the possibility to upgrade a PtSi based unit to an MCT module by just loading the suitable software. Miniaturized digital signal processor (DSP) based image correction units were developed for testing and operating the units with output data rates of up to 16 Mpixels/s. These boards provide freely programmable real-time functions like two-point correction and various data manipulations in thermography applications.

  17. High energy permanent magnets - Solutions to high performance devices

    International Nuclear Information System (INIS)

    Ma, B.M.; Willman, C.J.

    1986-01-01

    Neodymium iron boron magnets are a special class of magnets providing the highest level of performance with the least amount of material. Crucible Research Center produced the highest energy product magnet of 45 MGOe - a world record. Commercialization of this development has already taken place. Crucible Magnetics Division, located in Elizabethtown, Kentucky, is currently manufacturing and marketing six different grades of NdFeB magnets. Permanent magnets find application in motors, speakers, electron beam focusing devices for military and Star Wars. The new NdFeB magnets are of considerable interest for a wide range of applications

  18. Safety profile, efficacy, and biodistribution of a bicistronic high-capacity adenovirus vector encoding a combined immunostimulation and cytotoxic gene therapy as a prelude to a phase I clinical trial for glioblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Puntel, Mariana [Department of Neurosurgery, The University of Michigan School of Medicine, MSRB II, RM 4570C, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5689 (United States); Department of Cell and Developmental Biology, The University of Michigan School of Medicine, MSRB II, RM 4570C, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5689 (United States); Gene Therapeutics Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Ghulam, Muhammad A.K.M. [Gene Therapeutics Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Farrokhi, Catherine [Department of Psychiatry and Behavioral Neurosciences, Cedars Sinai Medical Center, Los Angeles, CA 90048 (United States); VanderVeen, Nathan; Paran, Christopher; Appelhans, Ashley [Department of Neurosurgery, The University of Michigan School of Medicine, MSRB II, RM 4570C, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5689 (United States); Department of Cell and Developmental Biology, The University of Michigan School of Medicine, MSRB II, RM 4570C, 1150 West Medical Center Drive, Ann Arbor, MI 48109-5689 (United States); Kroeger, Kurt M.; Salem, Alireza [Gene Therapeutics Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Lacayo, Liliana [Department of Psychiatry and Behavioral Neurosciences, Cedars Sinai Medical Center, Los Angeles, CA 90048 (United States); Pechnick, Robert N. [Department of Psychiatry and Behavioral Neurosciences, Cedars Sinai Medical Center, Los Angeles, CA 90048 (United States); Department of Psychiatry and Behavioral Neurosciences, David Geffen School of Medicine, University of California, Los Angeles, CA (United States); Kelson, Kyle R.; Kaur, Sukhpreet; Kennedy, Sean [Gene Therapeutics Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048 (United States); Palmer, Donna; Ng, Philip [Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX 77030 (United States); and others

    2013-05-01

    Adenoviral vectors (Ads) are promising gene delivery vehicles due to their high transduction efficiency; however, their clinical usefulness has been hampered by their immunogenicity and the presence of anti-Ad immunity in humans. We reported the efficacy of a gene therapy approach for glioma consisting of intratumoral injection of Ads encoding conditionally cytotoxic herpes simplex type 1 thymidine kinase (Ad-TK) and the immunostimulatory cytokine fms-like tyrosine kinase ligand 3 (Ad-Flt3L). Herein, we report the biodistribution, efficacy, and neurological and systemic effects of a bicistronic high-capacity Ad, i.e., HC-Ad-TK/TetOn-Flt3L. HC-Ads elicit sustained transgene expression, even in the presence of anti-Ad immunity, and can encode large therapeutic cassettes, including regulatory elements to enable turning gene expression “on” or “off” according to clinical need. The inclusion of two therapeutic transgenes within a single vector enables a reduction of the total vector load without adversely impacting efficacy. Because clinically the vectors will be delivered into the surgical cavity, normal regions of the brain parenchyma are likely to be transduced. Thus, we assessed any potential toxicities elicited by escalating doses of HC-Ad-TK/TetOn-Flt3L (1 × 10^8, 1 × 10^9, or 1 × 10^10 viral particles [vp]) delivered into the rat brain parenchyma. We assessed neuropathology, biodistribution, transgene expression, systemic toxicity, and behavioral impact at acute and chronic time points. The results indicate that doses up to 1 × 10^9 vp of HC-Ad-TK/TetOn-Flt3L can be safely delivered into the normal rat brain and underpin further developments for its implementation in a phase I clinical trial for glioma. Highlights: High capacity Ad vectors elicit sustained therapeutic gene expression in the brain; HC-Ad-TK/TetOn-Flt3L encodes two therapeutic genes and a transcriptional switch; we performed a dose escalation study at

  19. Design Specification for a Thrust-Vectoring, Actuated-Nose-Strake Flight Control Law for the High-Alpha Research Vehicle

    Science.gov (United States)

    Bacon, Barton J.; Carzoo, Susan W.; Davidson, John B.; Hoffler, Keith D.; Lallman, Frederick J.; Messina, Michael D.; Murphy, Patrick C.; Ostroff, Aaron J.; Proffitt, Melissa S.; Yeager, Jessie C.

    1996-01-01

    Specifications for a flight control law are delineated in sufficient detail to support coding the control law in flight software. This control law was designed for implementation and flight test on the High-Alpha Research Vehicle (HARV), which is an F/A-18 aircraft modified to include an experimental multi-axis thrust-vectoring system and actuated nose strakes for enhanced rolling (ANSER). The control law, known as the HARV ANSER Control Law, was designed to utilize a blend of conventional aerodynamic control effectors, thrust vectoring, and actuated nose strakes to provide increased agility and good handling qualities throughout the HARV flight envelope, including angles of attack up to 70 degrees.

  20. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  1. High-Performance Management Practices and Employee Outcomes in Denmark

    DEFF Research Database (Denmark)

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices...

  2. Vector analysis of high (≥3 diopters) astigmatism correction using small-incision lenticule extraction and laser in situ keratomileusis.

    Science.gov (United States)

    Chan, Tommy C Y; Wang, Yan; Ng, Alex L K; Zhang, Jiamei; Yu, Marco C Y; Jhanji, Vishal; Cheng, George P M

    2018-06-13

    To compare the astigmatic correction in high myopic astigmatism between small-incision lenticule extraction and laser in situ keratomileusis (LASIK) using vector analysis. Hong Kong Laser Eye Center, Hong Kong. Retrospective case series. Patients who had correction of myopic astigmatism of 3.0 diopters (D) or more and had either small-incision lenticule extraction or femtosecond laser-assisted LASIK were included. Only the left eye was included for analysis. Visual and refractive results were presented and compared between groups. The study comprised 105 patients (40 eyes in the small-incision lenticule extraction group and 65 eyes in the femtosecond laser-assisted LASIK group.) The mean preoperative manifest cylinder was -3.42 D ± 0.55 (SD) in the small-incision lenticule extraction group and -3.47 ± 0.49 D in the LASIK group (P = .655). At 3 months, there was no significant between-group difference in uncorrected distance visual acuity (P = .915) and manifest spherical equivalent (P = .145). Ninety percent and 95.4% of eyes were within ± 0.5 D of the attempted cylindrical correction for the small-incision lenticule extraction and LASIK group, respectively (P = .423). Vector analysis showed comparable target-induced astigmatism (P = .709), surgically induced astigmatism vector (P = .449), difference vector (P = .335), and magnitude of error (P = .413) between groups. The absolute angle of error was 1.88 ± 2.25 degrees in the small-incision lenticule extraction group and 1.37 ± 1.58 degrees in the LASIK group (P = .217). Small-incision lenticule extraction offered astigmatic correction comparable to LASIK in eyes with high myopic astigmatism. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
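
    For readers unfamiliar with the vector method used here, astigmatism of magnitude C at axis θ is commonly mapped to a point in double-angle space, and the reported quantities are then defined as follows (this is the standard Alpins convention as usually stated in the literature, not a formulation taken from the paper itself):

    \[
    \vec{a} = \bigl(C\cos 2\theta,\; C\sin 2\theta\bigr), \qquad
    \mathrm{DV} = \mathrm{TIA} - \mathrm{SIA}, \qquad
    \mathrm{ME} = |\mathrm{SIA}| - |\mathrm{TIA}|,
    \]

    where TIA is the target induced astigmatism vector (intended correction), SIA is the surgically induced astigmatism vector (achieved correction), DV is the difference vector, ME is the magnitude of error, and the angle of error is the angle between the SIA and TIA axes.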

  3. Unrestricted Hepatocyte Transduction with Adeno-Associated Virus Serotype 8 Vectors in Mice

    Science.gov (United States)

    Nakai, Hiroyuki; Fuess, Sally; Storm, Theresa A.; Muramatsu, Shin-ichi; Nara, Yuko; Kay, Mark A.

    2005-01-01

    Recombinant adeno-associated virus (rAAV) vectors can mediate long-term stable transduction in various target tissues. However, with rAAV serotype 2 (rAAV2) vectors, liver transduction is confined to only a small portion of hepatocytes even after administration of extremely high vector doses. In order to investigate whether rAAV vectors of other serotypes exhibit similar restricted liver transduction, we performed a dose-response study by injecting mice with β-galactosidase-expressing rAAV1 and rAAV8 vectors via the portal vein. The rAAV1 vector showed a blunted dose-response similar to that of rAAV2 at high doses, while the rAAV8 vector dose-response remained unchanged at any dose and ultimately could transduce all the hepatocytes at a dose of 7.2 × 10^12 vector genomes/mouse without toxicity. This indicates that all hepatocytes have the ability to process incoming single-stranded vector genomes into duplex DNA. A single tail vein injection of the rAAV8 vector was as efficient as portal vein injection at any dose. In addition, intravascular administration of the rAAV8 vector at a high dose transduced all the skeletal muscles throughout the body, including the diaphragm, the entire cardiac muscle, and substantial numbers of cells in the pancreas, smooth muscles, and brain. Thus, rAAV8 is a robust vector for gene transfer to the liver and provides a promising research tool for delivering genes to various target organs. In addition, the rAAV8 vector may offer a potential therapeutic agent for various diseases affecting nonhepatic tissues, but great caution is required for vector spillover and tight control of tissue-specific gene expression. PMID:15596817

  4. Academic performance in high school as factor associated to academic performance in college

    Directory of Open Access Journals (Sweden)

    Mileidy Salcedo Barragán

    2008-12-01

    Full Text Available This study intends to find the relationship between academic performance in High School and College, focusing on Natural Sciences and Mathematics. It is a descriptive correlational study, and the variables were academic performance in High School, performance indicators and educational history. The correlations between variables were established with Spearman’s correlation coefficient. Results suggest that there is a positive relationship between academic performance in High School and Educational History, and a very weak relationship between performance in Science and Mathematics in High School and performance in College.

  5. Adenoviral vectors for highly selective gene expression in central serotonergic neurons reveal quantal characteristics of serotonin release in the rat brain

    Directory of Open Access Journals (Sweden)

    Teschemacher Anja G

    2009-03-01

    Full Text Available Abstract Background 5-hydroxytryptamine (5-HT, serotonin) is one of the key neuromodulators in the mammalian brain, but many fundamental properties of serotonergic neurones and 5-HT release remain unknown. The objective of this study was to generate an adenoviral vector system for selective targeting of serotonergic neurones and apply it to study quantal characteristics of 5-HT release in the rat brain. Results We have generated adenoviral vectors which incorporate a 3.6 kb fragment of the rat tryptophan hydroxylase-2 (TPH-2) gene and which selectively (97% co-localisation with TPH-2) target raphe serotonergic neurones. In order to enhance the level of expression a two-step transcriptional amplification strategy was employed. This allowed direct visualization of serotonergic neurones by EGFP fluorescence. Using these vectors we have performed initial characterization of EGFP-expressing serotonergic neurones in rat organotypic brain slice cultures. Fluorescent serotonergic neurones were identified and studied using patch clamp and confocal Ca2+ imaging and had features consistent with those previously reported using post-hoc identification approaches. Fine processes of serotonergic neurones could also be visualized in un-fixed tissue and morphometric analysis suggested two putative types of axonal varicosities. We used micro-amperometry to analyse the quantal characteristics of 5-HT release and found that central 5-HT exocytosis occurs predominantly in quanta of ~28000 molecules from varicosities and ~34000 molecules from cell bodies. In addition, in somata, we observed a minority of large release events discharging on average ~800000 molecules. Conclusion For the first time quantal release of 5-HT from somato-dendritic compartments and axonal varicosities in mammalian brain has been demonstrated directly and characterised. Release from somato-dendritic and axonal compartments might have different physiological functions. Novel vectors generated in this

  6. Snow Radiance Data Assimilation over High Mountain Asia Using the NASA Land Information System and a Well-Trained Support Vector Machine

    Science.gov (United States)

    Kwon, Y.; Forman, B. A.; Yoon, Y.; Kumar, S.

    2017-12-01

    High Mountain Asia (HMA) has been progressively losing ice and snow in recent decades, which could negatively impact regional water supply and native ecosystems. One goal of this study is to characterize the spatiotemporal variability of snow (and ice) across the HMA region. In addition, modeled snow water equivalent (SWE) estimates will be enhanced through the assimilation of passive microwave brightness temperatures (TB) collected by the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) as part of a radiance assimilation system. The radiance assimilation framework includes the NASA Land Information System (LIS) in conjunction with a well-trained support vector machine (SVM) that acts as the observation operator. The Noah Land Surface Model with multi-parameterization options (Noah-MP) is used as the prior model for simulating snow dynamics. Noah-MP is forced by meteorological fields from the NASA Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) atmospheric reanalysis for the period 01 Sep. 2002 to 01 Sep. 2011. The radiance assimilation system requires two separate phases: 1) training and 2) assimilation. During the training phase, a nonlinear SVM is generated for three different AMSR-E frequencies - 10.65, 18.7, and 36.5 GHz - at both vertical and horizontal polarization. The trained SVM is then used to predict TB during the assimilation phase. An ensemble Kalman filter will be used to condition the model on AMSR-E brightness temperatures not used during SVM training. The performance of Noah-MP (with and without radiance assimilation) will be assessed via comparison to in-situ measurements, remotely-sensed geophysical retrievals, and other reanalysis products.
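
    To illustrate what a "training phase" of this kind can look like in code, the sketch below fits an epsilon-SVR with an RBF kernel as a toy observation operator mapping a few land-surface state variables to a brightness temperature, using the C API of LIBSVM. The feature choice (SWE, snow depth, skin temperature), the numerical values, and all hyper-parameters are illustrative assumptions, not the configuration used in the study.

```cpp
// Requires LIBSVM: compile together with its svm.cpp, e.g.
//   g++ -std=c++11 train_tb_example.cpp svm.cpp
#include <cstdio>
#include <vector>
#include "svm.h"

int main() {
    // Toy training set: each row is (SWE, snow depth, skin temperature) -> TB [K].
    // In the real system these would come from Noah-MP states and AMSR-E TBs.
    const int nFeat = 3;
    std::vector<std::vector<double>> X = {
        {0.05, 0.20, 265.0}, {0.10, 0.35, 260.0}, {0.20, 0.60, 255.0}, {0.02, 0.10, 270.0}};
    std::vector<double> y = {250.0, 243.0, 231.0, 258.0};   // made-up TB values

    // Pack the data into LIBSVM's sparse node format (index -1 terminates a row).
    const int n = static_cast<int>(X.size());
    std::vector<std::vector<svm_node>> rows(n, std::vector<svm_node>(nFeat + 1));
    std::vector<svm_node*> xptr(n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < nFeat; ++j) rows[i][j] = {j + 1, X[i][j]};
        rows[i][nFeat] = {-1, 0.0};
        xptr[i] = rows[i].data();
    }
    svm_problem prob;
    prob.l = n;
    prob.y = y.data();
    prob.x = xptr.data();

    svm_parameter param = {};       // zero-initialize, then set what we need
    param.svm_type = EPSILON_SVR;   // regression
    param.kernel_type = RBF;
    param.gamma = 0.5;              // illustrative hyper-parameters
    param.C = 10.0;
    param.p = 0.1;                  // epsilon tube width
    param.eps = 1e-3;
    param.cache_size = 64;

    svm_model* model = svm_train(&prob, &param);

    // "Assimilation phase": predict TB for a new model state.
    svm_node query[] = {{1, 0.08}, {2, 0.30}, {3, 262.0}, {-1, 0.0}};
    std::printf("predicted TB = %.1f K\n", svm_predict(model, query));

    svm_free_and_destroy_model(&model);
    return 0;
}
```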

  7. Performance of a high efficiency high power UHF klystron

    International Nuclear Information System (INIS)

    Konrad, G.T.

    1977-03-01

    A 500 kW c-w klystron was designed for the PEP storage ring at SLAC. The tube operates at 353.2 MHz, 62 kV, a microperveance of 0.75, and a gain of approximately 50 dB. Stable operation is required for a VSWR as high as 2 : 1 at any phase angle. The design efficiency is 70%. To obtain this value of efficiency, a second harmonic cavity is used in order to produce a very tightly bunched beam in the output gap. At the present time it is planned to install 12 such klystrons in PEP. A tube with a reduced size collector was operated at 4% duty at 500 kW. An efficiency of 63% was observed. The same tube was operated up to 200 kW c-w for PEP accelerator cavity tests. A full-scale c-w tube reached 500 kW at 65 kV with an efficiency of 55%. In addition to power and phase measurements into a matched load, some data at various load mismatches are presented

  8. Use of a vectored vaccine against infectious bursal disease of chickens in the face of high-titred maternally derived antibody.

    Science.gov (United States)

    Bublot, M; Pritchard, N; Le Gros, F-X; Goutebroze, S

    2007-07-01

    Interference by maternally derived antibody (MDA) is a major problem for the vaccination of young chickens against infectious bursal disease (IBD). The choice of the timing of vaccination and of the type (degree of attenuation) of modified-live vaccine (MLV) to use is often difficult. An IBD vectored vaccine (vHVT13), in which turkey herpesvirus (HVT) is used as the vector, was recently developed. This vaccine is administered once at the hatchery, either in ovo or by the subcutaneous route, to 1-day-old chicks at a time when MDA is maximal. In terms of safety, the vHVT13 vaccine had negligible impact on the bursa of Fabricius when compared with classical IBD MLV. Vaccination and challenge studies demonstrated that this vaccine is able to protect chickens against various IBD virus (IBDV) challenge strains including very virulent, classical, and USA variant IBDV, despite the presence of high-titred IBD MDA at the time of vaccination. These data show that the vector vaccine combines a safety and efficacy profile that cannot be achieved with classical IBD vaccines.

  9. Production of lentiviral vectors

    Directory of Open Access Journals (Sweden)

    Otto-Wilhelm Merten

    2016-01-01

    Full Text Available Lentiviral vectors (LV) have seen a considerable increase in use as gene therapy vectors for the treatment of acquired and inherited diseases. This review presents the state of the art of the production of these vectors, with particular emphasis on their large-scale production for clinical purposes. In contrast to oncoretroviral vectors, which are produced using stable producer cell lines, clinical-grade LV are in most cases produced by transient transfection of 293 or 293T cells grown in cell factories. However, more recent developments also tend towards the use of hollow-fiber reactors, suspension culture processes, and the implementation of stable producer cell lines. As is customary for the biotech industry, rather sophisticated downstream processing protocols have been established to remove any undesirable process-derived contaminant, such as plasmid or host cell DNA or host cell proteins. This review compares published large-scale production and purification processes of LV and presents their process performance. Furthermore, developments in the domain of stable cell lines and their path towards use as production vehicles for clinical material will be presented.

  10. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  11. Study and performance analysis of fuel cell assisted vector control variable speed drive system used for electric vehicles

    Science.gov (United States)

    Pachauri, Rupendra Kumar; Chauhan, Yogesh K.

    2017-02-01

    This paper is a novel attempt to combine two important aspects of fuel cells (FC). First, it presents investigations on FC technology and its applications. A description of FC operating principles is followed by a comparative analysis of present FC technologies together with the issues concerning various fuels. Second, the paper proposes a model for the simulation and performance evaluation of a proton exchange membrane fuel cell (PEMFC) generation system. Furthermore, a MATLAB/Simulink-based dynamic model of the PEMFC is developed, and the FC parameters are adjusted to emulate a commercially available PEMFC. System results are obtained for the PEMFC-driven adjustable-speed induction motor drive (ASIMD) system, normally used in electric vehicles, and the analysis is carried out for different operating conditions of the FC and the ASIMD system. The obtained results validate the system concept and modelling.
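
    For orientation, the kind of static PEMFC polarization model typically behind such a simulation can be sketched briefly: the cell voltage is an open-circuit potential minus activation, ohmic, and concentration losses, and the stack voltage is the cell voltage multiplied by the number of series-connected cells. The Python sketch below is not the authors' MATLAB/Simulink model; every parameter value is illustrative only.

        # Illustrative static PEMFC polarization curve (not the paper's Simulink model).
        # All parameter values are placeholders chosen only to give a plausible shape.
        import numpy as np

        E_oc = 1.23      # open-circuit cell potential [V]
        a_tafel = 0.06   # activation-loss coefficient [V]
        i0 = 1e-4        # exchange current density [A/cm^2]
        r_ohm = 0.15     # area-specific resistance [ohm*cm^2]
        i_lim = 1.4      # limiting current density [A/cm^2]
        n_cells = 65     # series-connected cells in the stack

        def cell_voltage(i):
            """Cell voltage [V] at current density i [A/cm^2]."""
            v_act = a_tafel * np.log(np.maximum(i, i0) / i0)            # activation loss
            v_ohm = r_ohm * i                                           # ohmic loss
            v_conc = -0.05 * np.log(np.maximum(1.0 - i / i_lim, 1e-6))  # concentration loss
            return E_oc - v_act - v_ohm - v_conc

        for i in (0.05, 0.4, 0.8, 1.2):
            print(f"i = {i:.2f} A/cm^2 -> stack voltage ~ {n_cells * cell_voltage(i):.1f} V")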

  12. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  13. High Performance Low Mass Nanowire Enabled Heatpipe, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Heat pipes are widely used for passive, two-phase electronics cooling. As advanced high power, high performance electronics in space based and terrestrial...

  14. Performance of the ATLAS forward calorimeter and search for the invisible Higgs via vector boson fusion at ATLAS

    CERN Document Server

    Schram, Malachi

    2008-01-01

    The ATLAS detector will examine proton-proton collisions at 14 TeV provided by CERN's Large Hadron Collider (LHC). ATLAS is a general-purpose detector with tracking, calorimetry and a large muon system. The calorimeter system provides hermetic coverage of a large fraction of the solid angle of the detector. In the region close to the beam line, the calorimeter components are the FCal detectors, which provide additional η coverage, improving the jet tagging efficiency and the missing energy resolution. The performance of the FCal calorimeter for both electrons and hadrons is one of the major topics of this thesis. The measured electromagnetic response for the FCal 1 module was 12.14±0.06 ADC/GeV, which is in good agreement with the predicted value of 12 ADC/GeV from the simulation, and which will be used to provide the initial electromagnetic response for the FCal modules during the early stages of ATLAS data taking. The hadronic performance was investigated using two calibration schemes: flat weights and t...
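
    As a worked example of how such a response figure is used (an illustration only, not code from the thesis), the measured 12.14 ADC/GeV response converts a raw FCal signal in ADC counts into an electromagnetic energy estimate by simple division:

        # Convert a raw FCal1 ADC signal to an electromagnetic energy estimate
        # using the measured response quoted above (illustrative example values).
        response_adc_per_gev = 12.14        # measured FCal1 EM response [ADC counts / GeV]
        adc_counts = 607.0                  # example raw signal [ADC counts]
        energy_gev = adc_counts / response_adc_per_gev
        print(f"energy ~ {energy_gev:.1f} GeV")   # ~50 GeV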

  15. High performance leadership in unusually challenging educational circumstances

    Directory of Open Access Journals (Sweden)

    Andy Hargreaves

    2015-04-01

    Full Text Available This paper draws on findings from the results of a study of leadership in high performing organizations in three sectors. Organizations were sampled and included on the basis of high performance in relation to no performance, past performance, performance among similar peers and performance in the face of limited resources or challenging circumstances. The paper concentrates on leadership in four schools that met the sample criteria. It draws connections to explanations of the high performance of Estonia on the OECD PISA tests of educational achievement. The article argues that leadership in these four schools that performed above expectations comprised more than a set of competencies. Instead, leadership took the form of a narrative or quest that pursued an inspiring dream with relentless determination; took improvement pathways that were more innovative than comparable peers; built collaboration and community including with competing schools; and connected short-term success to long-term sustainability.

  16. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses the LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or is provided in a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP: effective isotropic radiated power). These calibrations were then used to create
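
    The sweep-then-invert idea described above can be illustrated with a small Python sketch. It is not the LabVIEW/VNA system itself, and the device response below is a synthetic stand-in, but it shows the essence: sweep the control values, record the measured complex gain for each pair, then look up the control pair that best realizes a desired gain and phase (for phase-shifter use, only points on a circle need correcting).

        # Illustrative sweep-and-lookup calibration (not the LabVIEW/VNA system).
        import numpy as np

        def measured_gain(i_ctrl, q_ctrl):
            """Synthetic stand-in for the VNA measurement of the VMUT's complex
            gain, including a small offset and I/Q imbalance."""
            return (0.95 * i_ctrl + 0.02) + 1j * (1.05 * q_ctrl - 0.01)

        # Step 1: systematic sweep of control-signal values, recording the
        # measured complex gain for each (I, Q) control pair.
        grid = np.linspace(-1.0, 1.0, 101)
        I, Q = np.meshgrid(grid, grid)
        table_controls = np.column_stack([I.ravel(), Q.ravel()])
        table_gain = measured_gain(I.ravel(), Q.ravel())

        # Step 2: invert the table - for a desired complex gain, pick the
        # control pair whose measured gain is closest.
        def controls_for(desired_gain):
            idx = np.argmin(np.abs(table_gain - desired_gain))
            return table_controls[idx]

        # Phase-shifter use: only points on a circle need correcting.
        for phase_deg in (0, 90, 180, 270):
            target = 0.8 * np.exp(1j * np.deg2rad(phase_deg))
            i_ctrl, q_ctrl = controls_for(target)
            print(f"{phase_deg:3d} deg -> I = {i_ctrl:+.3f}, Q = {q_ctrl:+.3f}")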

  17. Sex Differences in Mathematics Performance among Senior High ...

    African Journals Online (AJOL)

    This study explored sex differences in mathematics performance of students in the final year of high school and changes in these differences over a 3-year period in Ghana. A convenience sample of 182 students, 109 boys and 72 girls in three high schools in Ghana was used. Mathematics performance was assessed using ...

  18. Control switching in high performance and fault tolerant control

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    The problem of reliability in high performance control and in fault tolerant control is considered in this paper. A feedback controller architecture for high performance and fault tolerance is considered. The architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. By usi...

  19. Mechanical Properties of High Performance Cementitious Grout (II)

    DEFF Research Database (Denmark)

    Sørensen, Eigil V.

    The present report is an update of the report “Mechanical Properties of High Performance Cementitious Grout (I)” [1] and describes tests carried out on the high performance grout MASTERFLOW 9500, marked “WMG 7145 FP”, developed by BASF Construction Chemicals A/S and designed for use in grouted...

  20. Development of new high-performance stainless steels

    International Nuclear Information System (INIS)

    Park, Yong Soo

    2002-01-01

    This paper focused on high-performance stainless steels and their development status. The effect of nitrogen addition on super-stainless steel was discussed. Research activities at Yonsei University on austenitic and martensitic high-performance stainless steels, and on next-generation duplex stainless steels, were introduced.