WorldWideScience

Sample records for research kernel spark

  1. An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.

    Science.gov (United States)

    Ho, Chi Ming

    1995-01-01

    Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of better understanding how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Because different energy transfer mechanisms are involved in the formation of a laser spark compared with an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures depend primarily on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth

  2. Experimental investigation and phenomenological model development of flame kernel growth rate in a gasoline fuelled spark ignition engine

    International Nuclear Information System (INIS)

    Salvi, B.L.; Subramanian, K.A.

    2015-01-01

    Highlights: • Experimental measurement of the flame kernel growth rate (FKGR) in an SI engine. • FKGR is the highest at MBT timing as compared with retarded and advanced timings. • FKGR decreases with increase in engine speed. • FKGR is correlated with equivalence ratio, charge density, in-cylinder pressure and engine speed. - Abstract: Because flame kernel growth plays a major role in the combustion of premixed charge in spark ignition engines, and hence in energy efficiency and emissions, an experimental study was carried out on a single-cylinder spark ignition research engine to measure the flame kernel growth rate (FKGR) using a spark plug fibre-optic probe (VisioFlame sensor). The FKGR was measured at different power outputs with varied spark ignition timings and engine speeds. The experimental results indicate that the FKGR was highest at the maximum brake torque (MBT) spark timing and decreases with increasing engine speed. At an engine speed of 1000 RPM the FKGR was highest, at 1.81 m/s, with MBT timing (20° bTDC), compared with 1.6 m/s (15° bTDC), 1.67 m/s (25° bTDC), and 1.61 m/s (30° bTDC) at retarded and advanced timings. In addition, a phenomenological model was developed for calculation of the FKGR. The model shows that the FKGR is a function of equivalence ratio, engine speed, in-cylinder pressure and charge density. The experimental results and methodology that emerged from this study would be useful for optimization of engine parameters using the FKGR and for further development of the model for alternative fuels

  3. The relative effects of fuel concentration, residual-gas fraction, gas motion, spark energy and heat losses to the electrodes on flame-kernel development in a lean-burn spark ignition engine

    Energy Technology Data Exchange (ETDEWEB)

    Aleiferis, P.G.; Taylor, A.M.K.P. [Imperial College of Science, Technology and Medicine, London (United Kingdom). Dept. of Mechanical Engineering; Ishii, K. [Honda International Technical School, Saitama (Japan); Urata, Y. [Honda R and D Co., Ltd., Tochigi (Japan). Tochigi R and D Centre

    2004-04-01

    The potential of lean combustion for the reduction in exhaust emissions and fuel consumption in spark ignition engines has long been established. However, the operating range of lean-burn spark ignition engines is limited by the level of cyclic variability in the early-flame development stage that typically corresponds to the 0-5 per cent mass fraction burned duration. In the current study, the cyclic variations in early flame development were investigated in an optical stratified-charge spark ignition engine at conditions close to stoichiometry [air-to-fuel ratio (A/F) = 15] and to the lean limit of stable operation (A/F = 22). Flame images were acquired through either a pentroof window ('tumble plane' of view) or the piston crown ('swirl plane' of view) and these were processed to calculate the intra-cycle flame-kernel radius evolution. In order to quantify the relative effects of local fuel concentration, gas motion, spark-energy release and heat losses to the electrodes on the flame-kernel growth rate, a zero-dimensional flame-kernel growth model, in conjunction with a one-dimensional spark ignition model, was employed. Comparison of the calculated flame-radius evolutions with the experimental data suggested that a variation in A/F around the spark plug of Δ(A/F) ≈ 4 or, in terms of equivalence ratio φ, a variation in Δφ ≈ 0.15 at most was large enough to account for 100 per cent of the observed cyclic variability in flame-kernel radius. A variation in the residual-gas fraction of about 20 per cent around the mean was found to account for up to 30 per cent of the variability in flame-kernel radius at the timing of 5 per cent mass fraction burned. The individual effect of 20 per cent variations in the 'mean' in-cylinder velocity at the spark plug at ignition timing was found to account for no more than 20 per cent of the measured cyclic variability in flame kernel radius. An individual effect of

  4. Ignition of turbulent swirling n-heptane spray flames using single and multiple sparks

    Energy Technology Data Exchange (ETDEWEB)

    Marchione, T.; Ahmed, S.F.; Mastorakos, E. [Department of Engineering, University of Cambridge (United Kingdom)

    2009-01-15

    This paper examines ignition processes of an n-heptane spray in a flow typical of a liquid-fuelled burner. The spray is created by a hollow-cone pressure atomiser placed in the centre of a bluff body, around which swirling air induces a strong recirculation zone. Ignition was achieved by single small sparks of short duration (2 mm; 0.5 ms), located at various places inside the flow so as to identify the most ignitable regions, or by larger sparks of longer duration (5 mm; 8 ms) repeated at 100 Hz, located close to the combustion chamber enclosure so as to mimic the placement and characteristics of a gas turbine combustor surface igniter. The air and droplet velocities, the droplet diameter, and the total (i.e. liquid plus vapour) equivalence ratio were measured in inert flow by phase Doppler anemometry and sampling, respectively. Fast camera imaging suggested that successful ignition events were associated with flamelets that propagated back towards the spray nozzle. Measurements of ignition probability with the single spark showed that localised ignition inside the spray is more likely to result in successful flame establishment when the spark is located in a region of negative velocity, relatively small droplet Sauter mean diameter, and mean equivalence ratio within the flammability limits. Ignition with the single spark was not possible at the location where the multiple spark experiments were performed. For those, the multiple spark sequence lasted approximately 1 to 5 s. It was found that a long spark sequence increases the ignition efficiency, which reached a maximum of 100% at the axial distance where the recirculation zone had its maximum width. Ignition was not feasible with the spark downstream of about two burner diameters. Visualisation showed that small flame kernels frequently emanate from the spark and can be stretched as far as 20 mm from the electrodes by the turbulent velocity fluctuations. These kernels survive only for a very short time. Successful overall

  5. Enhancement of flame development by microwave-assisted spark ignition in constant volume combustion chamber

    KAUST Repository

    Wolk, Benjamin

    2013-07-01

    The enhancement of laminar flame development using microwave-assisted spark ignition has been investigated for methane-air mixtures at a range of initial pressures and equivalence ratios in a 1.45 l constant volume combustion chamber. Microwave enhancement was evaluated on the basis of several parameters including flame development time (FDT) (time for 0-10% of total net heat release), flame rise time (FRT) (time for 10-90% of total net heat release), total net heat release, flame kernel growth rate, flame kernel size, and ignitability limit extension. Compared to a capacitive discharge spark, microwave-assisted spark ignition extended the lean and rich ignition limits at all pressures investigated (1.08-7.22 bar). The addition of microwaves to a capacitive discharge spark reduced FDT and increased the flame kernel size for all equivalence ratios tested and resulted in increases in the spatial flame speed for sufficiently lean flames. Flame enhancement is believed to be caused by (1) a non-thermal chemical kinetic enhancement from energy deposition to free electrons in the flame front and (2) induced flame wrinkling from excitation of flame (plasma) instability. The enhancement of flame development by microwaves diminishes as the initial pressure of the mixture increases, with negligible flame enhancement observed above 3 bar. © 2013 The Combustion Institute.
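
    The FDT and FRT metrics above are defined purely in terms of the cumulative net heat release, so they can be extracted from any heat-release trace. A minimal Python sketch of that post-processing step (the trace below is a synthetic placeholder, not data from the paper):

```python
import numpy as np

def burn_durations(time_s, cum_heat_release):
    """Return (FDT, FRT) from a monotonically increasing cumulative net heat-release trace.

    FDT: time for 0-10% of total net heat release.
    FRT: time for 10-90% of total net heat release.
    """
    frac = cum_heat_release / cum_heat_release[-1]
    t10 = np.interp(0.10, frac, time_s)   # crossing time of the 10% fraction
    t90 = np.interp(0.90, frac, time_s)   # crossing time of the 90% fraction
    return t10 - time_s[0], t90 - t10

# Synthetic 2 ms trace with a sigmoidal cumulative heat-release history
t = np.linspace(0.0, 2e-3, 500)
q = 1.0 / (1.0 + np.exp(-(t - 1.0e-3) / 1.5e-4))
fdt, frt = burn_durations(t, q)
print(f"FDT = {fdt*1e3:.3f} ms, FRT = {frt*1e3:.3f} ms")
```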

  6. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    Science.gov (United States)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow that is induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of the kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  7. DNS of spark ignition and edge flame propagation in turbulent droplet-laden mixing layers

    Energy Technology Data Exchange (ETDEWEB)

    Neophytou, A.; Mastorakos, E.; Cant, R.S. [Hopkinson Laboratory, Department of Engineering, University of Cambridge (United Kingdom)

    2010-06-15

    A parametric study of forced ignition at the mixing layer between air and air carrying fine monosized fuel droplets is done through one-step chemistry direct numerical simulations to determine the influence of the size and volatility of the droplets, the spark location, the droplet-air mixing layer initial thickness and the turbulence intensity on the ignition success and the subsequent flame propagation. The propagation is analyzed in terms of edge flame displacement speed, which has not been studied before for turbulent edge spray flames. Spark ignition successfully resulted in a tribrachial flame if enough fuel vapour was available at the spark location, which occurred when the local droplet number density was high. Ignition was achieved even when the spark was offset from the spray, on the air side, due to the diffusion of heat from the spark, provided droplets evaporated rapidly. Large kernels were obtained by sparking close to the spray, since fuel was more readily available. At long times after the spark, for all flames studied, the probability density function of the displacement speed was wide, with a mean value in the range 0.55-0.75 S_L, with S_L the laminar burning velocity of a stoichiometric gaseous premixed flame. This value is close to the mean displacement speed in turbulent edge flames with gaseous fuel. The displacement speed was negatively correlated with curvature. The detrimental effect of curvature was attenuated with a large initial kernel and by increasing the thickness of the mixing layer. The mixing layer was thicker when evaporation was slow and the turbulence intensity higher. However, high turbulence intensity also distorted the kernel which could lead to high values of curvature. The edge flame reaction component increased when the maximum temperature coincided with the stoichiometric contour. The results are consistent with the limited available experimental evidence and provide insights into the processes associated with

  8. Modelling of spark to ignition transition in gas mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Akram, M.

    1996-10-01

    This thesis pertains to models for studying sparking in chemically inert gases. The processes taking place in a spark to flame transition can be segregated into physical and chemical processes, and this study is focused on the physical processes. The plasma is regarded as a single-substance material. One- and two-dimensional models are developed. The transfer of electrical energy into thermal energy of the gas and its redistribution in space and time, along with the evolution of a plasma kernel, is studied in the time domain ranging from 10 ns to 40 μs. In the case of ultra-fast sparks, the propagation of the shock and its reflection from a rigid wall is presented. The influence of electrode shape and gap size on the development of the flow structure is found to be dominant. It is observed that the flow structure that has developed in the early stage more or less prevails at later stages and strongly influences the shape and evolution of the hot kernel. The electrode geometry and configuration are responsible for the development of the flow structure. The strength of the vortices generated in the flow field is influenced by the power input to the gap, and their location of emergence is dictated by the electrode shape and configuration. The heat transfer after 2 μs in the case of ultra-fast sparks is dominated by convection and diffusion. The strong mixing produced by hydrodynamic effects and the electrode geometry indicates that the magnetic pinch effect might be negligible. Finally, a model for a multicomponent gas mixture is presented. The chemical kinetics mechanism for dissociation and ionization is introduced. 56 refs

  9. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments were carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel have been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation in methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results for laminar burning speeds have been compared with previously published results and are in good agreement. (author)
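
    The abstract does not give the governing equations of the thermodynamic model; as a rough orientation only, lumped kernel models of this kind typically track an energy balance of the form (a generic sketch, not the authors' formulation):

    \[
    \frac{d(m u)}{dt} = \dot{E}_{\mathrm{spark}} + \dot{Q}_{\mathrm{chem}} - \dot{Q}_{\mathrm{rad}} - \dot{Q}_{\mathrm{cond}} - p \frac{dV}{dt},
    \]

    where m and u are the kernel mass and specific internal energy, the chemical-release term vanishes for the inert-gas plasma cases, and the p dV term accounts for the constant-pressure expansion of the kernel against the surroundings.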

  10. Research on spark discharge of floating roof tank shunt

    International Nuclear Information System (INIS)

    Bi, Xiaolei; Liu, Quanzhen; Liu, Baoquan; Gao, Xin; Hu, Haiyan; Liu, Juan

    2013-01-01

    In order to quantitatively analyze the spark discharge risk of floating roof tank shunts, the breakdown voltage of the shunt was calculated using Townsend theory, a shunt spark discharge experiment was carried out using a 1.2/50 μs impulse voltage wave, and the relationship between the breakdown voltage of the shunt spark discharge and the air gap was analyzed. The theoretical analysis and experimental study indicate that a small gap causes spark discharge more easily than a large gap when the contact between the shunt and the tank shell is poor. When the air gap distance is equal to 0.1 cm, the average breakdown voltage is 5280 V. When the air gap distance is less than 0.3 cm, the experimental data agree well with Townsend theory. Therefore, for small gaps, Townsend theory can be used to calculate the breakdown voltage of the shunt. Finally, based on the above conclusions, improvements for avoiding the spark discharge risk of the shunts of floating roof tanks are proposed.
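
    For reference, the Townsend-theory breakdown voltage referred to above is usually written in the Paschen form (standard textbook relation; the gas- and surface-dependent coefficients are not given in the abstract):

    \[
    V_B = \frac{B\,p\,d}{\ln(A\,p\,d) - \ln\!\bigl[\ln\bigl(1 + 1/\gamma_{\mathrm{se}}\bigr)\bigr]},
    \]

    where p is the gas pressure, d the gap distance, A and B are empirical constants for the gas, and \(\gamma_{\mathrm{se}}\) is the secondary-electron emission coefficient of the electrode surface. For millimetre-scale gaps in atmospheric air this relation gives breakdown voltages of a few kilovolts, which is of the same order as the 5280 V reported above for a 0.1 cm gap.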

  11. The Flux OSKit: A Substrate for Kernel and Language Research

    Science.gov (United States)

    1997-10-01

    Our own microkernel-based OS, Fluke [17], puts almost all of the OSKit to use... kernels distance the language from the hardware; even microkernels and other extensible kernels enforce some default policy which often conflicts with a... be particularly useful in these research projects. 6.1.1 The Fluke OS: In 1996 we developed an entirely new microkernel-based system called Fluke

  12. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    The article describes the most common Linux kernel file systems. The research was carried out on a personal computer, a typical workstation running GNU/Linux, whose characteristics are given in the article. The software needed to measure file-system performance was installed on this computer. Based on the results, conclusions are drawn and recommendations are proposed for the use of the file systems, and the best ways to store data are identified and recommended.
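
    The abstract does not name the benchmarking software used; to give a flavour of the kind of measurement involved, a minimal sequential-write throughput test (file name, size and block size are arbitrary choices for illustration) could be written as:

```python
import os
import time

def sequential_write_mb_s(path="testfile.bin", total_mb=256, block_kb=1024):
    """Write total_mb of data in block_kb chunks and return throughput in MB/s."""
    block = os.urandom(block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reaches the file system under test
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"sequential write: {sequential_write_mb_s():.1f} MB/s")
```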

  13. Spark Channels

    Energy Technology Data Exchange (ETDEWEB)

    Haydon, S. C. [Department of Physics, University of New England, Armidale, NSW (Australia)

    1968-04-15

    A brief summary is given of the principal methods used for initiating spark channels and the various highly time-resolved techniques developed recently for studies with nanosecond resolution. The importance of the percentage overvoltage in determining the early history and subsequent development of the various phases of the growth of the spark channel is discussed. An account is then given of the recent photographic, oscillographic and spectroscopic investigations of spark channels initiated by co-axial cable discharges of spark gaps at low (≈1%) overvoltages. The phenomena observed in the development of the immediate post-breakdown phase, the diffuse glow structure, the growth of the luminous filament and the final formation of the spark channel in hydrogen are described. A brief account is also given of the salient features emerging from corresponding studies of highly overvolted spark gaps in which the spark channel develops from single avalanche conditions. The essential differences between the two types of channel formation are summarized and possible explanations of the general features are indicated. (author)

  14. Research on retailer data clustering algorithm based on Spark

    Science.gov (United States)

    Huang, Qiuman; Zhou, Feng

    2017-03-01

    Big data analysis is currently a hot topic in the IT field. Spark is a high-reliability, high-performance distributed parallel computing framework for big data sets, and k-means is one of the classical partitioning methods in clustering. In this paper, we study the k-means clustering algorithm on Spark. First, the principle of the algorithm is analyzed; clustering analysis is then carried out on supermarket customers through experiments to find out their different shopping patterns. The paper also presents the parallelization of the k-means algorithm on the Spark distributed computing framework and gives the concrete design and implementation scheme. Two years of sales data from a supermarket are used to validate the proposed clustering algorithm and to subdivide customers; the clustering results are then analyzed to help enterprises apply different marketing strategies to different customer groups and improve sales performance.
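
    As a concrete illustration of the kind of pipeline the paper describes, the sketch below runs k-means on customer features with PySpark's built-in MLlib implementation (the input file, column names and k value are hypothetical, not taken from the paper):

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("retailer-kmeans").getOrCreate()

# Hypothetical per-customer features aggregated from two years of sales records
df = spark.read.csv("customers.csv", header=True, inferSchema=True)
assembler = VectorAssembler(
    inputCols=["visits", "avg_basket_value", "days_since_last_purchase"],
    outputCol="features")
features = assembler.transform(df)

# Cluster customers into k segments to expose distinct shopping patterns
kmeans = KMeans(k=5, seed=42, featuresCol="features", predictionCol="segment")
model = kmeans.fit(features)
model.transform(features).groupBy("segment").count().show()

spark.stop()
```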

  15. GeoSpark SQL: An Effective Framework Enabling Spatial Queries on Spark

    Directory of Open Access Journals (Sweden)

    Zhou Huang

    2017-09-01

    In the era of big data, Internet-based geospatial information services such as various LBS apps are deployed everywhere, followed by an increasing number of queries against the massive spatial data. As a result, traditional relational spatial databases (e.g., PostgreSQL with PostGIS and Oracle Spatial) cannot adapt well to the needs of large-scale spatial query processing. Spark is an emerging, outstanding distributed computing framework in the Hadoop ecosystem. This paper aims to address the increasingly large-scale spatial query-processing requirement in the era of big data, and proposes an effective framework, GeoSpark SQL, which enables spatial queries on Spark. On the one hand, GeoSpark SQL provides a convenient SQL interface; on the other hand, GeoSpark SQL achieves both efficient storage management and high-performance parallel computing through integrating Hive and Spark. In this study, the following key issues are discussed and addressed: (1) storage management methods under the GeoSpark SQL framework, (2) the spatial operator implementation approach in the Spark environment, and (3) spatial query optimization methods under Spark. Experimental evaluation is also performed and the results show that GeoSpark SQL is able to achieve real-time query processing. It should be noted that Spark is not a panacea. It is observed that the traditional spatial database PostGIS/PostgreSQL performs better than GeoSpark SQL in some query scenarios, especially for spatial queries with high selectivity, such as the point query and the window query. In general, GeoSpark SQL performs better when dealing with compute-intensive spatial queries such as the kNN query and the spatial join query.
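
    To give a flavour of the SQL interface described above, the sketch below expresses a spatial window query through Spark SQL; the table, columns and ST_* function names are illustrative assumptions and may differ from GeoSpark SQL's actual registered functions:

```python
from pyspark.sql import SparkSession

# Assumes a Hive-backed table `poi` with a WKT geometry column `geom_wkt`, and that
# spatial functions such as ST_GeomFromText and ST_Contains have been registered
# with Spark SQL by the framework (illustrative names).
spark = (SparkSession.builder
         .appName("geospark-sql-demo")
         .enableHiveSupport()
         .getOrCreate())

window_query = """
    SELECT poi_id, name
    FROM poi
    WHERE ST_Contains(
        ST_GeomFromText('POLYGON((116.2 39.8, 116.6 39.8, 116.6 40.1, 116.2 40.1, 116.2 39.8))'),
        ST_GeomFromText(geom_wkt))
"""
spark.sql(window_query).show()
spark.stop()
```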

  16. Spark discharge and flame inception analysis through spectroscopy in a DISI engine fuelled with gasoline and butanol

    Science.gov (United States)

    Irimescu, A.; Merola, S. S.

    2017-10-01

    Extensive application of downsizing, as well as of alternative combustion control strategies beyond well-established stoichiometric operation, has led to a continuous increase in the energy that must be delivered to the working fluid in order to achieve stable and repeatable ignition. Apart from the complexity of fluid-arc interactions, the extreme thermodynamic conditions of this initial combustion stage make its characterization difficult, both through experimental and numerical techniques. Within this context, the present investigation addresses the analysis of spark discharge and flame kernel formation through the application of UV-visible spectroscopy. Characterization of the energy transfer from the spark plug’s electrodes to the air-fuel mixture was achieved by evaluating vibrational and rotational temperatures during ignition, for stoichiometric and lean fuelling of a direct injection spark ignition engine. Optical accessibility was ensured from below the combustion chamber through an elongated piston design that allowed the central region of the cylinder to be investigated. Fuel effects were evaluated for gasoline and n-butanol; roughly the same load was investigated in throttled and wide-open throttle conditions for both fuels. A brief thermodynamic analysis confirmed that significant gains in efficiency can be obtained with lean fuelling, mainly due to the reduction of pumping losses. Minimal effect of fuel type was observed, while mixture strength was found to have a stronger influence on calculated temperature values, especially during the initial stage of ignition. In-cylinder pressure was found to directly determine emission intensity during ignition, but the vibrational and rotational temperatures featured reduced dependence on this parameter. As expected, at the end of kernel formation, temperature values converged towards those typically found for adiabatic flames. The results show that indeed only a relatively small part

  17. Experimental study of plume induced by nanosecond repetitively pulsed spark microdischarges in air at atmospheric pressure

    Science.gov (United States)

    Orriere, Thomas; Benard, Nicolas; Moreau, Eric; Pai, David

    2016-09-01

    Nanosecond repetitively pulsed (NRP) spark discharges have been widely studied due to their high chemical reactivity, low gas temperature, and high ionization efficiency. They are useful in many research areas: nanomaterials synthesis, combustion, and aerodynamic flow control. In all of these fields, particular attention has been devoted to chemical species transport and/or hydrodynamic and thermal effects for applications. The aim of this study is to generate an electro-thermal plume by combining an NRP spark microdischarge in a pin-to-pin configuration with a third DC-biased electrode placed a few centimeters away. First, electrical characterization and optical emission spectroscopy were performed to reveal important plasma processes. Second, particle image velocimetry was combined with schlieren photography to investigate the main characteristics of the generated flow. Heating processes are measured by using the N2(C→B) (0,2) and (1,3) vibrational bands, and effects due to the confinement of the discharge are described. Moreover, the presence of atomic ions N+ and O+ is discussed. Finally, the electro-thermal plume structure is characterized by a flow velocity around 1.8 m/s, and the thermal kernel has a spheroidal shape.

  18. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel cross-covariance operator (robust kern...
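
    The truncated abstract concerns robust estimators of the kernel covariance and cross-covariance operators; for orientation, the plain (non-robust) empirical versions reduce to centred Gram matrices, as in the short numpy sketch below (a baseline illustration, not the authors' robust estimator):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian RBF Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def centre(K):
    """Centre a Gram matrix in feature space: H K H with H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

# An HSIC-style dependence statistic built from the empirical kernel
# cross-covariance operator: trace(Kx_c @ Ky_c) / n^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X[:, :1] + 0.1 * rng.normal(size=(100, 1))   # toy data, strongly dependent on X
Kx_c, Ky_c = centre(rbf_gram(X)), centre(rbf_gram(Y))
print(np.trace(Kx_c @ Ky_c) / 100**2)
```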

  19. SparkRS - Spark for Remote Sensing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is Spark-RS, an open source software project that enables GPU-accelerated remote sensing workflows in an Apache Spark distributed computing...

  20. TORCH Computational Reference Kernels - A Testbed for Computer Science Research

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich

    2010-12-02

    For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.

  1. High performance Spark best practices for scaling and optimizing Apache Spark

    CERN Document Server

    Karau, Holden

    2017-01-01

    Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing. With this book, you’ll explore: How Spark SQL’s new interfaces improve performance over SQL’s RDD data structure The choice between data joins in Core Spark and Spark SQL Techniques for getting the most out of standard RDD transformations How to work around performance issues i...

  2. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  3. Research on offense and defense technology for iOS kernel security mechanism

    Science.gov (United States)

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a robust and widely used mobile device system. Its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. So it is important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker could only run ROP code and gain kernel read and write permissions based on the ROP after exploiting kernel and user layer vulnerabilities. However, the iOS operating system is still protected by the code signing mechanism, the sandbox mechanism, and the not-writable mechanism of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, breaking these security mechanisms was usually done by modifying the kernel's important data structures and security mechanism code logic. However, after iOS 9, the kernel integrity protection mechanism was added to the 64-bit operating system and none of the previous methods were applicable to the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on an analysis of the weaknesses of the KPP security mechanism, this paper implements two possible breakthrough methods against the kernel security mechanisms of iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive API call detection to defend against the breakthrough methods mentioned above, and experiments show that this method can prevent and detect attack attempts or intruders effectively and in a timely manner.

  4. Scattering profiles of sparks and combustibility of filter against hot sparks

    International Nuclear Information System (INIS)

    Asazuma, Shinichiro; Okada, Takashi; Kashiro, Kashio

    2004-01-01

    The glove-box dismantling facility in the Plutonium Fuel Production Facility was developed to dismantle after-service glove-boxes with remote-controlled devices such as an arm-type manipulator. An abrasive wheel cutter, which is used to size-reduce the glove-boxes, generates sparks during operation. These dispersing sparks were a problem from the fire prevention point of view, and suitable spark control measures for this operation were required. We developed panels to minimize spark dispersion, shields to prevent the entry of sparks into the pre-filter, and incombustible pre-filters. The equipment was tested and its effectiveness was confirmed. This report provides the results of these tests. (author)

  5. Laser ignition - Spark plug development and application in reciprocating engines

    Science.gov (United States)

    Pavel, Nicolaie; Bärwinkel, Mark; Heinz, Peter; Brüggemann, Dieter; Dearden, Geoff; Croitoru, Gabriela; Grigore, Oana Valeria

    2018-03-01

    Combustion is one of the most dominant energy conversion processes used in all areas of human life, but global concerns over exhaust gas pollution and greenhouse gas emission have stimulated further development of the process. Lean combustion and exhaust gas recirculation are approaches to improve the efficiency and to reduce pollutant emissions; however, such measures impede reliable ignition when applied to conventional ignition systems. Therefore, alternative ignition systems are a focus of scientific research. Amongst others, laser-induced ignition seems an attractive method to improve the combustion process. In comparison with conventional ignition by electric spark plugs, laser ignition offers a number of potential benefits. Those most often discussed are: no quenching of the combustion flame kernel; the ability to deliver (laser) energy to any location of interest in the combustion chamber; the possibility of delivering the beam simultaneously to different positions, and the temporal control of ignition. If these advantages can be exploited in practice, the engine efficiency may be improved and reliable operation at lean air-fuel mixtures can be achieved, making feasible savings in fuel consumption and reduction in emission of exhaust gases. Therefore, laser ignition can enable important new approaches to address global concerns about the environmental impact of continued use of reciprocating engines in vehicles and power plants, with the aim of diminishing pollutant levels in the atmosphere. The technology can also support increased use of electrification in powered transport, through its application to ignition of hybrid (electric-gas) engines, and the efficient combustion of advanced fuels. In this work, we review the progress made over the last years in laser ignition research, in particular that aimed towards realizing laser sources (or laser spark plugs) with dimensions and properties suitable for operating directly on an engine. The main envisaged

  6. Research on a Novel Kernel Based Grey Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2016-01-01

    The discrete grey prediction models have attracted considerable research interest due to their effectiveness in improving the modelling accuracy of the traditional grey prediction models. The autoregressive GM(1,1) model, abbreviated as ARGM(1,1), is a novel discrete grey model which is easy to use and accurate in the prediction of approximately nonhomogeneous exponential time series. However, the ARGM(1,1) is essentially a linear model; thus, its applicability is still limited. In this paper a novel kernel based ARGM(1,1) model is proposed, abbreviated as KARGM(1,1). The KARGM(1,1) has a nonlinear function which can be expressed by a kernel function using the kernel method, and its modelling procedures are presented in detail. Two case studies of predicting monthly gas well production are carried out with real-world production data. The results of the KARGM(1,1) model are compared to those of existing discrete univariate grey prediction models, including ARGM(1,1), NDGM(1,1,k), DGM(1,1), and NGBMOP, and it is shown that the KARGM(1,1) outperforms the other four models.

  7. Development of a SPARK Training Dataset

    International Nuclear Information System (INIS)

    Sayre, Amanda M.; Olson, Jarrod R.

    2015-01-01

    In its first five years, the National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed to be a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge to exist beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK's intended analysis capability. The analysis demonstration sought to answer

  8. Study of ignition in a high compression ratio SI (spark ignition) methanol engine using LES (large eddy simulation) with detailed chemical kinetics

    International Nuclear Information System (INIS)

    Zhen, Xudong; Wang, Yang

    2013-01-01

    Methanol has recently been used as an alternative to conventional fuels for internal combustion engines in order to address environmental and economic concerns. In this paper, ignition in a high compression ratio SI (spark ignition) methanol engine was studied using LES (large eddy simulation) with detailed chemical kinetics. A 21-species, 84-reaction methanol mechanism was adopted to simulate the auto-ignition process of the methanol/air mixture. The MIT (minimum ignition temperature) and MIE (minimum ignition energy) are two important properties for designing safety standards and understanding the ignition process of combustible mixtures. The effects of flame kernel size, flame kernel temperature and equivalence ratio on the MIT, MIE and IDP (ignition delay period) were also examined. The methanol mechanism was validated against experimental tests. The simulated results showed that the flame kernel size, temperature and energy dramatically affect the values of the MIT, MIE and IDP for a methanol/air mixture, and that the ignition delay period is related not only to the flame kernel energy but also to the flame kernel temperature. - Highlights: • We used LES (large eddy simulation) coupled with detailed chemical kinetics to simulate methanol ignition. • The flame kernel size and temperature affected the minimum ignition temperature. • The flame kernel temperature and energy affected the ignition delay period. • The equivalence ratio of the methanol–air mixture affected the ignition delay period

  9. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  10. A spectroscopy study of gasoline partially premixed compression ignition spark assisted combustion

    International Nuclear Information System (INIS)

    Pastor, J.V.; García-Oliver, J.M.; García, A.; Micó, C.; Durrett, R.

    2013-01-01

    Highlights: ► PPC combustion combined with spark assistance and gasoline fuel on a CI engine. ► Chemiluminescence of different chemical species describes the progress of the combustion reaction. ► Spectra of a novel combustion mode under SACI conditions are described. ► UV–Visible spectrometry, high speed imaging and pressure diagnostics were employed for analysis. - Abstract: Nowadays many research efforts are focused on the study and development of new combustion modes, mainly based on the use of locally lean air–fuel mixtures. This characteristic, combined with exhaust gas recirculation, provides low combustion temperatures that reduce pollutant formation and increase efficiency. However, these combustion concepts have some drawbacks, related to combustion phasing control, which must be overcome. In this regard, the use of a spark plug has been shown to be a good solution to improve phasing control in combination with lean low temperature combustion. Its performance is well reported in the literature; however, the phenomena involved in the combustion process are not completely described. The aim of the present work is to develop a detailed description of the spark assisted compression ignition mode by means of UV–Visible spectrometry, in order to improve insight into the combustion process. Tests were performed in an optical engine by means of broadband radiation imaging and emission spectrometry. The engine hardware is typical of a compression ignition passenger car application. Gasoline was used as the fuel due to its low reactivity. Combining broadband luminosity images with pressure-derived heat-release rate and UV–Visible spectra, it was possible to identify different stages of the combustion reaction. After the spark discharge, a first flame kernel appears and starts growing as a premixed flame front, characterized by a low and constant heat-release rate in combination with the presence of remarkable OH radical radiation. Heat release increases

  11. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  12. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  13. Modelling Spark Integration in Science Classroom

    Directory of Open Access Journals (Sweden)

    Marie Paz E. Morales

    2014-02-01

    The study critically explored how a PASCO-designed technology (the SPARK Science Learning System) is meaningfully integrated into the teaching of selected topics in Earth and Environmental Science. It highlights modelling of the effectiveness of using the SPARK Science Learning System as a primary tool in learning science that leads to learning and achievement of the students. Data and observations gathered, together with the correlation between the technology's ability to develop high intrinsic motivation and student achievement, were used to design a framework for meaningfully integrating the SPARK Science Learning System in teaching Earth and Environmental Science. Research instruments used in this study were adopted from standardized questionnaires available in the literature. An achievement test and an evaluation form were developed and validated for the purpose of deducing the data needed for the study. Interviews were done to delve into the deeper thoughts and emotions of the respondents. Data from the interviews served to validate all numerical data culled from this study. Cross-case analysis of the data was done to reveal recurring themes, problems and benefits derived by the students in using the SPARK Science Learning System, to further establish its effectiveness in the curriculum as a forerunner to the shift towards 21st Century Learning.

  14. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangular Hermitian kernel function is proposed in this paper. First, the generalized triangular Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
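
    For context, a kernel ELM computes its output weights in closed form from a training kernel matrix. The sketch below uses a plain RBF kernel as a stand-in, since the abstract does not specify the mixed triangular/Hermite Dirichlet kernel in enough detail to reproduce; it illustrates only the ELM mechanics, not the proposed kernel:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel ELM: f(x) = k(x, X_train) @ beta, with beta = (K + I/C)^-1 T."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X_new):
        return rbf_kernel(X_new, self.X, self.gamma) @ self.beta

# Toy regression example
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
model = KernelELM(C=100.0, gamma=2.0).fit(X, y[:, None])
print("train MSE:", float(np.mean((model.predict(X).ravel() - y) ** 2)))
```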

  15. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

    In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed to be a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge to exist beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications, and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  16. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
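
    One of the strategies mentioned above is to combine the candidate kernels into a composite kernel. The numpy sketch below illustrates that idea with an unweighted average of two common SNP-set kernels and a simple variance-component score statistic; it is an illustration of the concept only, not the authors' procedure or their perturbation-based inference:

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Weighted (default: equal) combination of candidate kernel similarity matrices."""
    w = np.ones(len(kernels)) / len(kernels) if weights is None else np.asarray(weights)
    return sum(wi * K for wi, K in zip(w, kernels))

def km_score_statistic(K, y):
    """Score-type statistic Q = (y - ybar)^T K (y - ybar) for a continuous trait."""
    r = y - y.mean()
    return float(r @ K @ r)

# Toy data: n subjects x p SNPs coded as 0/1/2 minor-allele counts
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(150, 20)).astype(float)
y = 0.3 * G[:, 0] + rng.normal(size=150)

K_linear = G @ G.T                                                      # linear kernel
K_ibs = 2 * G.shape[1] - np.abs(G[:, None, :] - G[None, :, :]).sum(-1)  # IBS-count kernel
Q = km_score_statistic(composite_kernel([K_linear, K_ibs]), y)
print("composite-kernel score statistic:", Q)
```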

  17. The SPARK Tool to prioritise questions for systematic reviews in health policy and systems research: development and initial validation.

    Science.gov (United States)

    Akl, Elie A; Fadlallah, Racha; Ghandour, Lilian; Kdouh, Ola; Langlois, Etienne; Lavis, John N; Schünemann, Holger; El-Jardali, Fadi

    2017-09-04

    Groups or institutions funding or conducting systematic reviews in health policy and systems research (HPSR) should prioritise topics according to the needs of policymakers and stakeholders. The aim of this study was to develop and validate a tool to prioritise questions for systematic reviews in HPSR. We developed the tool following a four-step approach consisting of (1) the definition of the purpose and scope of the tool, (2) item generation and reduction, (3) testing for content and face validity, and (4) pilot testing of the tool. The research team involved international experts in HPSR, systematic review methodology and tool development, led by the Center for Systematic Reviews on Health Policy and Systems Research (SPARK). We followed an inclusive approach in determining the final selection of items to allow customisation to the user's needs. The purpose of the SPARK tool was to prioritise questions in HPSR in order to address them in systematic reviews. In the item generation and reduction phase, an extensive literature search yielded 40 relevant articles, which were reviewed by the research team to create a preliminary list of 19 candidate items for inclusion in the tool. As part of testing for content and face validity, input from international experts led to the refining, changing, merging and addition of new items, and to organisation of the tool into two modules. Following pilot testing, we finalised the tool, with 22 items organised in two modules - the first module including 13 items to be rated by policymakers and stakeholders, and the second including 9 items to be rated by systematic review teams. Users can customise the tool to their needs by omitting items that may not be applicable to their settings. We also developed a user manual that provides guidance on how to use the SPARK tool, along with signaling questions. We have developed and conducted initial validation of the SPARK tool to prioritise questions for systematic reviews in HPSR, along with

  18. Fiber coupled optical spark delivery system

    Science.gov (United States)

    Yalin, Azer; Willson, Bryan; Defoort, Morgan

    2008-08-12

    A spark delivery system for generating a spark using a laser beam is provided, the spark delivery system including a laser light source and a laser delivery assembly. The laser delivery assembly includes a hollow fiber and a launch assembly comprising launch focusing optics to input the laser beam in the hollow fiber. In addition, the laser delivery assembly includes exit focusing optics that demagnify an exit beam of laser light from the hollow fiber, thereby increasing the intensity of the laser beam and creating a spark. In accordance with embodiments of the present invention, the assembly may be used to create a spark in a combustion engine. In accordance with other embodiments of the present invention, a method of using the spark delivery system is provided. In addition, a method of choosing an appropriate fiber for creating a spark using a laser beam is also presented.

  19. Fast data processing with Spark

    CERN Document Server

    Karau, Holden

    2013-01-01

    This book will be a basic, step-by-step tutorial, which will help readers take advantage of all that Spark has to offer. Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have had problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

  20. INFLUENCE OF ELECTRIC SPARK ON HARDNESS OF CARBON STEEL

    Directory of Open Access Journals (Sweden)

    I. O. Vakulenko

    2014-03-01

    Full Text Available Purpose. The purpose of this work is to estimate the influence of electric spark treatment on the state of the surface coating formed on carbon steel. Methodology. A fragment of a railway wheel rim served as the research material, with a chemical composition of 0.65% С, 0.67% Mn, 0.3% Si, 0.027% P and 0.028% S. Structural investigations were conducted using light microscopy and quantitative metallography. The structural state of the investigated steel corresponded to the state after hot plastic deformation. The hardness distribution in micro volumes of the cathode metal was analysed with a PMT-3 microhardness tester. Electric spark treatment of the carbon steel surface was carried out with EFI-25M equipment. Findings. After electric spark treatment of the carbon steel specimen surface, the formation of a multi-layered coating was observed. Analysis of the microstructure revealed qualitative differences in the internal structure of the coating metal, depending on the probed area. The results confirm the well-known view that the formation of a surface coating by electric spark technology is determined by the conditions of transfer and crystallisation of the metal. The structure gradient across the coating thickness largely depends on the development of structural transformation processes similar to those induced by thermal influence. Originality. As a result of electric spark treatment with identical anode and cathode metal, the first formed coating layer corresponds, by external signs, to a monophase state. In the bulk of the coating metal, the appearance of carbide phase particles is accompanied by a decrease in microhardness values. Practical value. The formation of a multi-layered surface coating during electric spark treatment is accompanied by the emergence of a structure gradient across its thickness. The effect

  1. SparkText: Biomedical Text Mining on Big Data Framework.

    Science.gov (United States)

    Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M

    Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
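
    A minimal PySpark sketch of this kind of text-classification pipeline (tokenise, hash term frequencies, fit a classifier); the in-line toy abstracts, column names and feature size are illustrative, and the Cassandra storage and streaming components of SparkText are omitted.

      from pyspark.sql import SparkSession
      from pyspark.ml import Pipeline
      from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StringIndexer
      from pyspark.ml.classification import LogisticRegression

      spark = SparkSession.builder.appName("sparktext-sketch").getOrCreate()

      # Toy stand-in for abstracts downloaded from PubMed
      data = spark.createDataFrame([
          ("BRCA1 mutation and breast tumor growth", "breast"),
          ("PSA screening in prostate carcinoma",    "prostate"),
          ("EGFR inhibitors in lung adenocarcinoma", "lung"),
      ], ["text", "cancer_type"])

      pipeline = Pipeline(stages=[
          Tokenizer(inputCol="text", outputCol="tokens"),
          HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18),
          IDF(inputCol="tf", outputCol="features"),
          StringIndexer(inputCol="cancer_type", outputCol="label"),
          LogisticRegression(featuresCol="features", labelCol="label"),
      ])

      model = pipeline.fit(data)
      model.transform(data).select("text", "prediction").show(truncate=False)
      spark.stop()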

  2. Damping Resonant Current in a Spark-Gap Trigger Circuit to Reduce Noise

    Science.gov (United States)

    2009-06-01

    Damping Resonant Current in a Spark-Gap Trigger Circuit to Reduce Noise. E. L. Ruden, Air Force Research Laboratory, Directed Energy Directorate.

  3. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  4. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  5. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  6. SparkText: Biomedical Text Mining on Big Data Framework

    Science.gov (United States)

    He, Karen Y.; Wang, Kai

    2016-01-01

    Background Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. Results In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. Conclusions This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research. PMID:27685652

  7. SparkText: Biomedical Text Mining on Big Data Framework.

    Directory of Open Access Journals (Sweden)

    Zhan Ye

    Full Text Available Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.

  8. Ultra Fast, High Rep Rate, High Voltage Spark Gap Pulser

    Science.gov (United States)

    1995-07-01

    The spark gap was designed with a coaxial geometry to reduce its inductance, and provisions were made to pass flowing gas between the electrodes. Robert A. Pastore Jr., Lawrence E. Kingsley, Kevin Fonda, Erik Lenzing; U.S. Army Research Laboratory, Physical Sciences Directorate, Ft. Monmouth.

  9. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  10. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Abstract Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  11. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
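
    The sketch below is a generic landmark-sampling stand-in, not the AKCL/PAKCL algorithms themselves: Nystroem features built from a small sample of landmarks replace the full kernel matrix, and mini-batch k-means supplies the competitive (winner-take-all) updates. The dataset, kernel width and landmark count are illustrative.

      import numpy as np
      from sklearn.datasets import make_blobs
      from sklearn.kernel_approximation import Nystroem
      from sklearn.cluster import MiniBatchKMeans

      # Large-ish dataset for which an n x n kernel matrix would be costly
      X, _ = make_blobs(n_samples=20000, centers=5, n_features=10, random_state=0)

      # Sample m << n landmarks to build approximate kernel features (Nystroem method)
      feature_map = Nystroem(kernel="rbf", gamma=0.5, n_components=200, random_state=0)
      Z = feature_map.fit_transform(X)          # n x 200 instead of n x n

      # Competitive (winner-take-all style) updates via mini-batch k-means in the feature space
      clustering = MiniBatchKMeans(n_clusters=5, batch_size=1000, random_state=0).fit(Z)
      print(np.bincount(clustering.labels_))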

  12. Pressure dependence of the spark constant

    Energy Technology Data Exchange (ETDEWEB)

    Hess, H; Radtke, R; Deparade, W [Akademie der Wissenschaften der DDR, Berlin. Zentralinstitut fuer Elektronenphysik

    1978-02-21

    The authors' theory on the development of LTE plasmas in low-inductance spark discharges has proved to be a useful tool in predicting the electric behaviour of such sparks. Their earlier experimental work was restricted to only one initial pressure, and in this paper they extend the examined pressure range to obtain some general conclusions on the pressure dependence of the spark behaviour.

  13. Efficiency improvement of a spark-ignition engine at full load conditions using exhaust gas recirculation and variable geometry turbocharger – Numerical study

    International Nuclear Information System (INIS)

    Sjerić, Momir; Taritaš, Ivan; Tomić, Rudolf; Blažić, Mislav; Kozarac, Darko; Lulić, Zoran

    2016-01-01

    Highlights: • A cylinder model was calibrated according to experimental results. • A full cycle simulation model of turbocharged spark-ignition engine was made. • Engine performance with high pressure exhaust gas recirculation was studied. • Cooled exhaust gas recirculation lowers exhaust temperature and knock occurrence. • Leaner mixtures enable fuel consumption improvement of up to 11.2%. - Abstract: The numerical analysis of performance of a four cylinder highly boosted spark-ignition engine at full load is described in this paper, with the research focused on introducing high pressure exhaust gas recirculation for control of engine limiting factors such as knock, turbine inlet temperature and cyclic variability. For this analysis the cycle-simulation model which includes modeling of the entire engine flow path, early flame kernel growth, mixture stratification, turbulent combustion, in-cylinder turbulence, knock and cyclic variability was applied. The cylinder sub-models such as ignition, turbulence and combustion were validated by using the experimental results of a naturally aspirated multi cylinder spark-ignition engine. The high load operation, which served as a benchmark value, was obtained by a standard procedure used in calibration of engines, i.e. operation with fuel enrichment and without exhaust gas recirculation. By introducing exhaust gas recirculation and by optimizing other engine operating parameters, the influence of exhaust gas recirculation on engine performance is obtained. The optimum operating parameters, such as spark advance, intake pressure, air to fuel ratio, were found to meet the imposed requirements in terms of fuel consumption, knock occurrence, exhaust gas temperature and variation of indicated mean effective pressure. By comparing the results of the base point with the results that used exhaust gas recirculation the improvement in fuel consumption of 8.7%, 11.2% and 1.5% at engine speeds of 2000 rpm, 3500 rpm and 5000

  14. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
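
    A hedged sketch: the truncated distance kernel is commonly written K(u, v) = max(rho - ||u - v||_1, 0), and the snippet below plugs such a callable into scikit-learn's SVC, which accepts custom (even indefinite) kernel functions. The value of rho and the toy dataset are illustrative.

      import numpy as np
      from sklearn.datasets import make_moons
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      def tl1_kernel(X, Y, rho=1.0):
          # K(u, v) = max(rho - ||u - v||_1, 0): nonlinear globally, linear within each subregion
          d = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
          return np.maximum(rho - d, 0.0)

      X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      clf = SVC(kernel=lambda A, B: tl1_kernel(A, B, rho=1.0), C=1.0).fit(Xtr, ytr)
      print("test accuracy:", clf.score(Xte, yte))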

  15. DeepSpark: A Spark-Based Distributed Deep Learning Framework for Commodity Clusters

    OpenAIRE

    Kim, Hanjoo; Park, Jaehong; Jang, Jaehee; Yoon, Sungroh

    2016-01-01

    The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that exploits Apache Spark on commodity clusters. To support parallel operation...

  16. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
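
    A short sketch of the heuristic (zeroth-order parametrix) form discussed above, K_t(x, y) ~ exp(-d(x, y)^2 / (4t)) with d(x, y) = arccos(x·y) the geodesic distance on the unit hypersphere, used as a custom SVM kernel; the exact eigenmode series from the paper is not reproduced here, and the diffusion time t is an illustrative assumption.

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def spherical_heat_kernel(X, Y, t=0.1):
          # Project samples onto the unit hypersphere, then apply the heuristic heat kernel
          Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
          Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
          cos = np.clip(Xn @ Yn.T, -1.0, 1.0)
          d = np.arccos(cos)                      # geodesic distance on the hypersphere
          return np.exp(-d ** 2 / (4.0 * t))

      X, y = load_iris(return_X_y=True)
      clf = SVC(kernel=lambda A, B: spherical_heat_kernel(A, B, t=0.1))
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())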

  17. Laser spark distribution and ignition system

    Science.gov (United States)

    Woodruff, Steven [Morgantown, WV; McIntyre, Dustin L [Morgantown, WV

    2008-09-02

    A laser spark distribution and ignition system that reduces the high power optical requirements for use in a laser ignition and distribution system allowing for the use of optical fibers for delivering the low peak energy pumping pulses to a laser amplifier or laser oscillator. An optical distributor distributes and delivers optical pumping energy from an optical pumping source to multiple combustion chambers incorporating laser oscillators or laser amplifiers for inducing a laser spark within a combustion chamber. The optical distributor preferably includes a single rotating mirror or lens which deflects the optical pumping energy from the axis of rotation and into a plurality of distinct optical fibers each connected to a respective laser media or amplifier coupled to an associated combustion chamber. The laser spark generators preferably produce a high peak power laser spark, from a single low power pulse. The laser spark distribution and ignition system has application in natural gas fueled reciprocating engines, turbine combustors, explosives and laser induced breakdown spectroscopy diagnostic sensors.

  18. Spark channel propagation in a microbubble liquid

    Energy Technology Data Exchange (ETDEWEB)

    Panov, V. A.; Vasilyak, L. M., E-mail: vasilyak@ihed.ras.ru; Vetchinin, S. P.; Pecherkin, V. Ya.; Son, E. E. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation)

    2016-11-15

    An experimental study of the development of the spark channel from the anode needle under pulsed electrical breakdown of an isopropyl alcohol solution in water with air microbubbles has been performed. The presence of the microbubbles increases the velocity of the spark channel propagation and increases the current in the discharge gap circuit. The observed rate of spark channel propagation in the microbubble liquid ranges from 4 to 12 m/s, indicating the thermal mechanism of the spark channel development in a microbubble liquid.

  19. Bright Sparks of Our Future!

    Science.gov (United States)

    Riordan, Naoimh

    2016-04-01

    My name is Naoimh Riordan and I am the Vice Principal of Rockboro Primary School in Cork City, South of Ireland. I am a full time class primary teacher and I teach 4th class, my students are aged between 9-10 years. My passion for education has developed over the years and grown towards STEM (Science, Technology, Engineering and Mathematics) subjects. I believe these subjects are the way forward for our future. My passion and beliefs are driven by the unique after school programme that I have developed. It is titled "Sparks" coming from the term Bright Sparks. "Sparks" is an after school programme with a difference where the STEM subjects are concentrated on through lessons such as Science, Veterinary Science Computer Animation /Coding, Eco engineering, Robotics, Magical Maths, Chess and Creative Writing. All these subjects are taught through activity based learning and are one-hour long each week for a ten-week term. "Sparks" is fully inclusive and non-selective which gives all students of any level of ability an opportunity to engage into these subjects. "Sparks" is open to all primary students in County Cork. The "Sparks" after school programme is taught by tutors from the different Universities and Colleges in Cork City. It works very well because the tutor brings their knowledge, skills and specialised equipment from their respective universities and in turn the tutor gains invaluable teaching practise, can trial a pilot programme in a chosen STEM subject and gain an insight into what works in the physical classroom.

  20. Tool grinding and spark testing

    Science.gov (United States)

    Widener, Edward L.

    1993-01-01

    The objectives were the following: (1) to revive the neglected art of metal-sparking; (2) to promote quality-assurance in the workplace; (3) to avoid spark-ignited explosions of dusts or volatiles; (4) to facilitate the salvage of scrap metals; and (5) to summarize important references.

  1. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  2. OH PLIF measurement in a spark ignition engine with a tumble flow

    Science.gov (United States)

    Kumar, Siddhartha; Moronuki, Tatsuya; Shimura, Masayasu; Minamoto, Yuki; Yokomori, Takeshi; Tanahashi, Mamoru; Strategic Innovation Program (SIP) Team

    2017-11-01

    Under lean conditions, high compression ratio, and strong tumble flow, cycle-to-cycle variation of combustion in spark ignition (SI) engines is prominent; therefore, the relation between flame propagation characteristics and the pressure rise needs to be clarified. The present study is aimed at exploring the spatial and temporal development of the flame kernel using OH planar laser-induced fluorescence (OH PLIF) in an optical SI engine. The equivalence ratio is changed at a fixed indicated mean effective pressure of 400 kPa. From the measurements taken at different crank angle degrees (CAD) after ignition, characteristics of flame behavior were investigated considering the temporal evolution of in-cylinder pressure, and factors causing cycle-to-cycle variations are discussed. In addition, the effects of tumble flow intensity on flame propagation behavior were also investigated. This work is supported by the Cross-ministerial Strategic Innovation Program (SIP), 'Innovative Combustion Technology'.

  3. Research on personalized recommendation algorithm based on spark

    Science.gov (United States)

    Li, Zeng; Liu, Yu

    2018-04-01

    With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet users' needs. How to recommend relevant products to interested users has therefore become both an opportunity and a challenge in the era of big data. At present, each platform enterprise has its own recommendation algorithm, but making efficient and accurate recommendations is still an open problem for personalized recommendation systems. In this paper, a hybrid algorithm combining user-based collaborative filtering and content-based recommendation through weighted processing is implemented on Spark to improve the efficiency and accuracy of recommendation. The experiments show that recommendation under this scheme is more efficient and accurate.
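
    A minimal PySpark sketch of the weighted hybrid idea: per-(user, item) scores from a collaborative-filtering component and a content-based component are joined and blended with a weight alpha. The score tables, column names and alpha are illustrative placeholders rather than the models used in the paper.

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("hybrid-recommendation-sketch").getOrCreate()

      # Illustrative per-(user, item) scores from the two components
      cf_scores = spark.createDataFrame(
          [(1, "a", 0.9), (1, "b", 0.4), (2, "a", 0.2)], ["user", "item", "cf_score"])
      cb_scores = spark.createDataFrame(
          [(1, "a", 0.5), (1, "b", 0.8), (2, "a", 0.6)], ["user", "item", "cb_score"])

      alpha = 0.6   # weight given to the collaborative-filtering component
      hybrid = (cf_scores.join(cb_scores, ["user", "item"])
                .withColumn("score", alpha * F.col("cf_score") + (1 - alpha) * F.col("cb_score"))
                .orderBy("user", F.desc("score")))

      hybrid.show()
      spark.stop()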

  4. Primary Science Interview: Science Sparks

    Science.gov (United States)

    Bianchi, Lynne

    2016-01-01

    In this "Primary Science" interview, Lynne Bianchi talks with Emma Vanstone about "Science Sparks," which is a website full of creative, fun, and exciting science activity ideas for children of primary-school age. "Science Sparks" started with the aim of inspiring more parents to do science at home with their…

  5. Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30° and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  6. Big Data Analytics with Datalog Queries on Spark.

    Science.gov (United States)

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
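
    The kind of recursive query BigDatalog expresses declaratively (for example transitive closure, i.e. graph reachability) has to be hand-coded in plain Spark as an iterative join to a fixed point, as in this small sketch; the edge list is illustrative.

      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("transitive-closure-sketch").getOrCreate()

      edges = spark.createDataFrame([(1, 2), (2, 3), (3, 4)], ["src", "dst"])
      paths = edges

      # Naive recursive evaluation by hand: keep joining until no new (src, dst) pairs appear
      while True:
          new_paths = (paths.alias("p")
                       .join(edges.alias("e"), F.col("p.dst") == F.col("e.src"))
                       .select(F.col("p.src").alias("src"), F.col("e.dst").alias("dst"))
                       .union(paths)
                       .distinct())
          if new_paths.count() == paths.count():
              break
          paths = new_paths

      paths.orderBy("src", "dst").show()
      spark.stop()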

  7. Bubbles, sparks, and the postwar laboratory

    International Nuclear Information System (INIS)

    Galison, P.

    1989-01-01

    The development and use of bubble chambers and spark chambers in the 1950s form the main thrust of this article, the bubble chamber as an example of ''image-producing'' instruments and the spark chamber as a ''logic'' device. Work on a cloud chamber by Glaser led to the development of the bubble chamber detector using liquid hydrogen, which was later linked to a computer for accurate automatic track analysis. It made possible demonstrations of the existence of a particle or interaction. Spark chambers were easier to build and so soon became common, various types being developed across the world. The development of spark chambers originated in the need for timing devices for the Manhattan Project, but work on their design occurred in a number of units worldwide. (UK)

  8. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
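
    A compact numpy sketch of the KECA selection step described above: eigenpairs of a Gaussian kernel matrix are ranked by their contribution lambda_i * (e_i^T 1)^2 to the Renyi entropy estimate rather than by eigenvalue size. The extra ICA-style rotation that distinguishes OKECA is not included, and the kernel width is an illustrative assumption.

      import numpy as np
      from sklearn.datasets import load_wine
      from sklearn.metrics.pairwise import rbf_kernel

      X, _ = load_wine(return_X_y=True)
      X = (X - X.mean(axis=0)) / X.std(axis=0)

      K = rbf_kernel(X, gamma=0.05)                   # Gaussian kernel matrix (gamma is illustrative)
      eigvals, eigvecs = np.linalg.eigh(K)            # eigenpairs in ascending order

      ones = np.ones(K.shape[0])
      entropy_contrib = eigvals * (eigvecs.T @ ones) ** 2   # lambda_i * (e_i^T 1)^2 per eigenpair

      # Keep the two eigenpairs contributing most entropy (KECA) instead of most variance (KPCA)
      top = np.argsort(entropy_contrib)[::-1][:2]
      features = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
      print("selected eigenvalue indices:", top, "feature matrix shape:", features.shape)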

  9. Sample preparations for spark source mass spectrography

    International Nuclear Information System (INIS)

    Catlett, C.W.; Rollins, M.B.; Griffin, E.B.; Dorsey, J.G.

    1977-10-01

    Methods have been developed for the preparation of various materials for spark source mass spectrography. The essential features of these preparations (all of which can provide adequate precision in a cost-effective manner) consist of obtaining spark-stable electrode sample pieces, a common matrix, a reduction of anomalous effects in the spark, the incorporation of a suitable internal standard for plate response normalization, and a reduction in time

  10. The pressure dependence of the spark constant

    International Nuclear Information System (INIS)

    Hess, H.; Radtke, R.; Deparade, W.

    1978-01-01

    The authors' theory on the development of LTE plasmas in low-inductance spark discharges has proved to be a useful tool in predicting the electric behaviour of such sparks. Their earlier experimental work was restricted to only one initial pressure, and in this paper they extend the examined pressure range to obtain some general conclusions on the pressure dependence of the spark behaviour. (author)

  11. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  12. Fast data processing with Spark

    CERN Document Server

    Sankar, Krishna

    2015-01-01

    Fast Data Processing with Spark - Second Edition is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have had problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

  13. High pressure gas-filled cermet spark gaps

    International Nuclear Information System (INIS)

    Avilov, Eh.A.; Yur'ev, A.L.

    2000-01-01

    The results of modernization of the R-48 and R-49 spark gaps, making it possible to improve their electrical characteristics, are presented. The design is described and the characteristics of the gas-filled cermet spark gaps are given. At a voltage rise time of 5-6 μs in the Marx generator scheme they provide a pulse breakdown voltage of 120 and 150 kV. At a voltage rise time of 0.5-1 μs the breakdown voltage of these spark gaps may be increased up to 130 and 220 kV. The proper commutation time is ≤ 0.5 ns. Practical recommendations for designing cermet spark gaps are given [ru

  14. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  15. The suppression of destructive sparks in parallel plate proportional counters

    Energy Technology Data Exchange (ETDEWEB)

    Cockshott, R.A.; Mason, I.M.

    1984-02-01

    The authors find that high energy background events produce localised sparks in parallel plate counters when operated in the proportional mode. These sparks increase dead-time and lead to degradation ranging from electrode damage to spurious pulsing and continuous breakdown. The problem is particularly serious in low energy photon detectors for X-ray astronomy which are required to have lifetimes of several years in the high radiation environment of space. For the parallel plate imaging detector developed for the European X-ray Observatory Satellite (EXOSAT) they investigate quantitatively the spark thresholds, spark rates and degradation processes. They discuss the spark mechanism, pointing out differences from the situation in spark chambers and counters. They show that the time profile of the sparks allows them to devise a spark suppression system which reduces the degradation rate by a factor of ~200.

  16. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  17. New spark test device for material characterization

    CERN Document Server

    Kildemo, Morten

    2004-01-01

    An automated spark test system based on combining field emission and spark measurements and exploiting a discharging capacitor is investigated. In particular, the remaining charge on the capacitor is solved analytically assuming the field-emitted current to follow the Fowler-Nordheim expression. The latter allows for field emission measurements from pA to A currents, and spark detection by complete discharge of the capacitor. The measurement theory and experiments on Cu and W are discussed.
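
    A small numerical sketch of the idea: writing the field-emitted current in a generic Fowler-Nordheim form I(V) = a*V^2*exp(-b/V) for a fixed gap geometry, the remaining voltage (and hence charge) on the capacitor follows from dV/dt = -I(V)/C. Here it is integrated numerically rather than analytically, and the constants a, b and C are illustrative placeholders, not values from the paper.

      import numpy as np

      # Generic Fowler-Nordheim-type current for a fixed gap: I(V) = a * V^2 * exp(-b / V)
      def fn_current(V, a=1e-9, b=5e4):
          return a * V ** 2 * np.exp(-b / max(V, 1e-9))

      C = 100e-12          # capacitor, 100 pF (illustrative)
      V = 8e3              # initial charging voltage, 8 kV (illustrative)
      dt = 1e-9            # time step, 1 ns

      steps = 200000
      for _ in range(steps):
          V -= fn_current(V) * dt / C          # dV/dt = -I(V)/C, simple forward-Euler integration

      elapsed_us = steps * dt * 1e6
      print("remaining voltage after %.1f us: %.1f V" % (elapsed_us, V))
      print("remaining charge: %.3e C" % (C * V))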

  18. Liquid-Arc/Spark-Excitation Atomic-Emission Spectroscopy

    Science.gov (United States)

    Schlagen, Kenneth J.

    1992-01-01

    Constituents of solutions identified in situ. Liquid-arc/spark-excitation atomic-emission spectroscopy (LAES) is experimental variant of atomic-emission spectroscopy in which electric arc or spark established in liquid and spectrum of light from arc or spark analyzed to identify chemical elements in liquid. Observations encourage development of LAES equipment for online monitoring of process streams in such industries as metal plating, electronics, and steel, and for online monitoring of streams affecting environment.

  19. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  20. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
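
    The small-grid cross-validation that the guidelines refer to can be sketched with scikit-learn's KernelRidge; the grid values below are illustrative, and the Sinc kernel, which is not built into scikit-learn, is omitted.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import GridSearchCV

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(300, 1))
      y = np.sinc(X).ravel() + 0.1 * rng.normal(size=300)   # noisy nonlinear target

      # Small grids over kernel choice and tuning parameters, selected by cross-validation
      param_grid = [
          {"kernel": ["rbf"], "gamma": [0.1, 1.0, 10.0], "alpha": [1e-3, 1e-1, 1.0]},
          {"kernel": ["polynomial"], "degree": [2, 3], "alpha": [1e-3, 1e-1, 1.0]},
      ]

      search = GridSearchCV(KernelRidge(), param_grid, cv=5,
                            scoring="neg_mean_squared_error").fit(X, y)
      print("best kernel and tuning parameters:", search.best_params_)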

  1. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values for Hydrogenated Palm Kernel Oil of (A) 0.16 g I2/100 g, (B) 0.20 g I2/100 g and (C) 0.24 g I2/100 g, and for Refined Bleached Deodorized Palm Kernel Oil of (A) 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  2. Development And Testing Of Biogas-Petrol Blend As An Alternative Fuel For Spark Ignition Engine

    Directory of Open Access Journals (Sweden)

    Awogbemi

    2015-08-01

    Full Text Available Abstract This research is on the development and testing of a biogas-petrol blend to run a spark ignition engine. A 20:80 ratio biogas-petrol blend was developed as an alternative fuel for a spark ignition engine test bed. Petrol and the biogas-petrol blend were comparatively tested on the test bed to determine the effectiveness of the fuels. The results of the tests showed that the biogas-petrol blend generated higher torque, brake power, indicated power, brake thermal efficiency and brake mean effective pressure, but lower fuel consumption and exhaust temperature, than petrol. The research concluded that a spark ignition engine powered by the biogas-petrol blend was economical, consumed less fuel, and contributes to sanitation and fertilizer production.

  3. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  4. Preliminary investigation into the simulation of a laser-induced plasma by means of a floating object in a spark gap

    CSIR Research Space (South Africa)

    West, NJ

    2007-08-01

    Full Text Available In this research, an orthogonally laser-triggered spark gap is investigated. The laser beam is directed in the region of a 30mm spark gap at 90 degrees to the gap and focused on the axis. The influence of plasma position within the spark gap...

  5. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  6. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  7. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
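
    A short sketch of the ingredient the kernel is built from: the set of variable-length code words an LZW pass discovers in each sequence. The similarity below is a simple normalized overlap of those code-word sets, an illustrative stand-in that is symmetric and gives 1.0 for self-similarity, not the exact LZW-Kernel formula from the paper; the toy strings are made up.

      def lzw_codewords(seq):
          """Return the set of variable-length code words LZW builds while scanning seq."""
          dictionary = set(seq)           # start from the single-character alphabet
          w = ""
          for c in seq:
              if w + c in dictionary:
                  w = w + c
              else:
                  dictionary.add(w + c)   # new code word discovered
                  w = c
          return dictionary

      def lzw_similarity(a, b):
          """Normalized overlap of code-word sets: symmetric, 1.0 for self-similarity."""
          A, B = lzw_codewords(a), lzw_codewords(b)
          return len(A & B) / len(A | B)

      s1 = "MKVLLAGGAGTKVLLAGG"     # toy protein-like strings (illustrative)
      s2 = "MKVLLAGGSSTKVLLSGG"
      s3 = "QQQQWWWWEEEERRRRTTTT"
      print(lzw_similarity(s1, s1), lzw_similarity(s1, s2), lzw_similarity(s1, s3))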

  8. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means

  9. Are Crab nanoshots Schwinger sparks?

    Energy Technology Data Exchange (ETDEWEB)

    Stebbins, Albert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Yoo, Hojin [Univ. of Wisconsin, Madison, WI (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2015-05-21

    The highest brightness temperatures ever observed are from "nanoshots" from the Crab pulsar, which we argue could be the signature of bursts of vacuum e± pair production. If so, this would be the first time the astronomical Schwinger effect has been observed. These "Schwinger sparks" would be an intermittent but extremely powerful, ~10^3 L_⊙, 10 PeV e± accelerator in the heart of the Crab. These nanosecond duration sparks are generated in a volume less than 1 m^3 and the existence of such sparks has implications for the small scale structure of the magnetic field of young pulsars such as the Crab. As a result, this mechanism may also play a role in producing other enigmatic bright short radio transients such as fast radio bursts.

  10. Experimental Investigation of Augmented Spark Ignition of a LO2/LCH4 Reaction Control Engine at Altitude Conditions

    Science.gov (United States)

    Kleinhenz, Julie; Sarmiento, Charles; Marshall, William

    2012-01-01

    The use of nontoxic propellants in future exploration vehicles would enable safer, more cost-effective mission scenarios. One promising green alternative to existing hypergols is liquid methane (LCH4) with liquid oxygen (LO2). A 100 lbf LO2/LCH4 engine was developed under the NASA Propulsion and Cryogenic Advanced Development project and tested at the NASA Glenn Research Center Altitude Combustion Stand in a low pressure environment. High ignition energy is a perceived drawback of this propellant combination; so this ignition margin test program examined ignition performance versus delivered spark energy. Sensitivity of ignition to spark timing and repetition rate was also explored. Three different exciter units were used with the engine s augmented (torch) igniter. Captured waveforms indicated spark behavior in hot fire conditions was inconsistent compared to the well-behaved dry sparks. This suggests that rising pressure and flow rate increase spark impedance and may at some point compromise an exciter s ability to complete each spark. The reduced spark energies of such quenched deliveries resulted in more erratic ignitions, decreasing ignition probability. The timing of the sparks relative to the pressure/flow conditions also impacted the probability of ignition. Sparks occurring early in the flow could trigger ignition with energies as low as 1 to 6 mJ, though multiple, similarly timed sparks of 55 to 75 mJ were required for reliable ignition. Delayed spark application and reduced spark repetition rate both correlated with late and occasional failed ignitions. An optimum time interval for spark application and ignition therefore coincides with propellant introduction to the igniter.

  11. Efficiency calibration of solid track spark auto counter

    International Nuclear Information System (INIS)

    Wang Mei; Wen Zhongwei; Lin Jufang; Liu Rong; Jiang Li; Lu Xinxin; Zhu Tonghua

    2008-01-01

    The factors influencing the detection efficiency of the solid track spark auto counter were analyzed, and the best etching conditions and charging parameters were also reconfirmed. With a small plate fission ionization chamber, the efficiency of the solid track spark auto counter was re-calibrated for various experimental assemblies. The efficiency of the solid track spark auto counter under various experimental conditions was obtained. (authors)

  12. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. The main applications of copulas include areas such as finance, insurance, hydrology and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: the parametric, nonparametric, and semiparametric method. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.

  13. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
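
    The idea of masking unreliable Fourier entries of an estimated kernel can be illustrated with a much simpler stand-in than the paper's model. The sketch below uses a plain Wiener filter plus a magnitude-threshold reliability mask (both our own assumptions, not the authors' partial map or E-M procedure):

      import numpy as np

      def masked_wiener_deconvolution(blurred, kernel_est, noise_reg=1e-2, rel_thresh=1e-3):
          # Fourier transforms of the blurred image and the (inaccurate) kernel estimate
          H = np.fft.fft2(kernel_est, s=blurred.shape)
          B = np.fft.fft2(blurred)
          # crude "partial map": keep only Fourier entries with non-negligible magnitude
          reliable = np.abs(H) > rel_thresh * np.abs(H).max()
          wiener = np.conj(H) * B / (np.abs(H) ** 2 + noise_reg)
          restored_F = np.where(reliable, wiener, B)   # leave unreliable frequencies untouched
          return np.real(np.fft.ifft2(restored_F))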

  14. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
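
    As a small illustration of the kernel-composition idea mentioned above, the sketch below (our own example, not code from the paper) combines two standard positive-definite kernels by averaging and multiplication, both of which preserve positive definiteness, and checks the result numerically.

      import numpy as np

      def rbf(X, Y, gamma=1.0):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def poly(X, Y, degree=2, c=1.0):
          return (X @ Y.T + c) ** degree

      def combined_kernel(X, Y, w=0.5):
          # a convex combination of an average and a product of PSD kernels is again PSD
          return w * 0.5 * (rbf(X, Y) + poly(X, Y)) + (1 - w) * rbf(X, Y) * poly(X, Y)

      X = np.random.default_rng(0).normal(size=(20, 3))
      K = combined_kernel(X, X)
      print(np.linalg.eigvalsh(K).min() > -1e-8)   # True: PSD up to round-off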

  15. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels.

  16. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good correct-recognition rate in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
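
    A weighted-feature Gaussian kernel of this kind can be passed to an off-the-shelf SVM as a callable. The sketch below is illustrative only: the per-feature weights and data are synthetic stand-ins for the subregion recognition rates and facial-expression features described above.

      import numpy as np
      from sklearn.svm import SVC

      def weighted_gaussian_kernel(weights, gamma=0.5):
          w = np.asarray(weights)
          def kernel(X, Y):
              # squared Euclidean distance with per-feature weights
              d2 = ((w * (X[:, None, :] - Y[None, :, :])) ** 2).sum(-1)
              return np.exp(-gamma * d2)
          return kernel

      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(60, 8)), rng.integers(0, 2, size=60)
      weights = np.linspace(0.2, 1.0, 8)          # stand-in for subregion recognition rates
      clf = SVC(kernel=weighted_gaussian_kernel(weights)).fit(X, y)
      print(clf.score(X, y))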

  17. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  18. SPARK: Adapting Keyword Query to Semantic Search

    Science.gov (United States)

    Zhou, Qi; Wang, Chong; Xiong, Miao; Wang, Haofen; Yu, Yong

    Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach to adapting keywords for querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named 'SPARK' has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.
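
    To make the term-mapping step concrete, the toy sketch below maps keywords to ontology terms with a hand-written dictionary and fills a single triple-pattern template. The vocabulary, the mapping table and the template are all invented for illustration; SPARK's real pipeline adds query graph construction and probabilistic ranking over many candidate SPARQL queries.

      # toy keyword-to-SPARQL translation (prefix declarations omitted for brevity)
      TERM_MAP = {
          "capital": "dbo:capital",   # hypothetical property mapping
          "france": "dbr:France",     # hypothetical resource mapping
      }

      def keywords_to_sparql(keywords):
          terms = [TERM_MAP.get(k.lower(), k) for k in keywords]
          predicate, subject = terms[0], terms[1]
          return f"SELECT ?x WHERE {{ {subject} {predicate} ?x . }}"

      print(keywords_to_sparql(["capital", "France"]))
      # SELECT ?x WHERE { dbr:France dbo:capital ?x . }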

  19. Optical spark chamber

    CERN Multimedia

    CERN PhotoLab

    1971-01-01

    An optical spark chamber developed for use in the Omega spectrometer. On the left the supporting frame is exceptionally thin to allow low momentum particles to escape and be detected outside the magnetic field.

  20. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  1. Organizing for ontological change: The kernel of an AIDS research infrastructure

    Science.gov (United States)

    Polk, Jessica Beth

    2015-01-01

    Is it possible to prepare and plan for emergent and changing objects of research? Members of the Multicenter AIDS Cohort Study have been investigating AIDS for over 30 years, and in that time, the disease has been repeatedly transformed. Over the years and across many changes, members have continued to study HIV disease while in the process regenerating an adaptable research organization. The key to sustaining this technoscientific flexibility has been what we call the kernel of a research infrastructure: ongoing efforts to maintain the availability of resources and services that may be brought to bear in the investigation of new objects. In the case of the Multicenter AIDS Cohort Study, these resources are as follows: specimens and data, calibrated instruments, heterogeneous experts, and participating cohorts of gay and bisexual men. We track three ontological transformations, examining how members prepared for and responded to changes: the discovery of a novel retroviral agent (HIV), the ability to test for that agent, and the transition of the disease from fatal to chronic through pharmaceutical intervention. Respectively, we call the work, ‘technologies’, and techniques of adapting to these changes, ‘repurposing’, ‘elaborating’, and ‘extending the kernel’. PMID:26477206

  2. Protection of neutral-beam accelerator electrodes from spark discharges

    International Nuclear Information System (INIS)

    Praeg, W.F.

    1977-01-01

    The high-voltage (HV) electrodes of neutral beam sources (NBS's) must be protected from occasional sparks to ground. Spark currents can be limited with special transformers and reactors which introduce time delays that are long enough to quench the spark or to disconnect the energy source. A saturated time delay transformer (STDT) connected in series with the HV power supply detects spark faults and limits the current supplied by the power supply and its capacitance to ground; it also initiates spark quenching. Nonsaturated, longitudinal reactors limit the discharge current supplied by the energy stored in the circuit capacitance of the NBS filament and arc power supplies long enough to discharge this capacitance into a resistor. The design principles of these protective circuits are presented

  3. Protection of neutral-beam-accelerator electrodes from spark discharges

    International Nuclear Information System (INIS)

    Praeg, W.F.

    1977-01-01

    The high-voltage (HV) electrodes of neutral beam sources (NBS's) must be protected from occasional sparks to ground. Spark currents can be limited with special transformers and reactors which introduce time delays that are long enough to quench the spark or to disconnect the energy source. A saturated time delay transformer (STDT) connected in series with the HV power supply detects spark faults and limits the current supplied by the power supply and its capacitance to ground; it also initiates spark quenching. Nonsaturated, longitudinal reactors limit the discharge current supplied by the energy stored in the circuit capacitance of the NBS filament and arc power supplies long enough to discharge this capacitance into a resistor. The design principles of these protective circuits are presented in this paper

  4. Protection of neutral-beam-accelerator electrodes from spark discharges

    International Nuclear Information System (INIS)

    Praeg, W.F.

    1978-01-01

    The high-voltage (HV) electrodes of neutral beam sources (NBS's) must be protected from occasional sparks to ground. Spark currents can be limited with special transformers and reactors which introduce time delays that are long enough to quench the spark or to disconnect the energy source. A saturated time delay transformer (STDT) connected in series with the HV power supply detects spark faults and limits the current supplied by the power supply and its capacitance to ground; it also initiates spark quenching. Nonsaturated, longitudinal reactors limit the discharge current supplied by the energy stored in the circuit capacitance of the NBS filament and arc power supplies long enough to discharge this capacitance into a resistor. The design principles of these protective circuits are presented in this paper

  5. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  6. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  7. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  8. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.
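
    For readers unfamiliar with the term, a typical sparse BLAS kernel is a sparse matrix-vector multiply. The sketch below (our own illustration, not workshop material) spells one out for the compressed sparse row (CSR) format:

      import numpy as np

      def csr_matvec(data, indices, indptr, x):
          # y = A @ x for a CSR matrix given by (data, indices, indptr)
          y = np.zeros(len(indptr) - 1)
          for i in range(len(y)):
              for k in range(indptr[i], indptr[i + 1]):
                  y[i] += data[k] * x[indices[k]]
          return y

      # 2x3 matrix [[1, 0, 2], [0, 3, 0]]
      data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
      print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # [3. 3.]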

  9. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction.

  10. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
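
    The nonlinear analysis referred to here is easy to reproduce in miniature. The sketch below runs Gaussian-kernel PCA on a synthetic ring with scikit-learn; the data and kernel width are our own assumptions and are unrelated to the geochemical data analysed in the paper.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(0)
      t = rng.uniform(0, 2 * np.pi, 300)
      X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))  # a noisy ring

      kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0)
      Z = kpca.fit_transform(X)            # nonlinear features from kernel feature space
      print(Z.shape)                       # (300, 2)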

  11. Accuracy of approximations of solutions to Fredholm equations by kernel methods

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Vol. 218, No. 14 (2012), pp. 7481-7497. ISSN 0096-3003. R&D Projects: GA ČR GAP202/11/1368; GA MŠk OC10047. Grant - others: CNR-AV ČR (CZ-IT) Project 2010-2012 "Complexity of Neural-Network and Kernel Computational Models". Institutional research plan: CEZ:AV0Z10300504. Keywords: approximate solutions to integral equations * radial and kernel-based networks * Gaussian kernels * model complexity * analysis of algorithms. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.349, year: 2012

  12. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  13. Automatic spark counting of alpha-tracks in plastic foils

    International Nuclear Information System (INIS)

    Somogyi, G.; Medveczky, L.; Hunyadi, I.; Nyako, B.

    1976-01-01

    The possibility of alpha-track counting by a jumping spark counter in cellulose acetate and polycarbonate nuclear track detectors was studied. A theoretical treatment is presented which predicts the optimum residual thickness of the etched foils in which completely through-etched tracks (i.e. holes) can be obtained for alpha-particles of various energies and angles of incidence. In agreement with the theoretical prediction it is shown that successful spark counting of alpha-tracks can be performed even in polycarbonate foils. Some counting characteristics, such as counting efficiency vs particle energy at various etched foil thicknesses, surface spark density produced by electric breakdowns in unexposed foils vs foil thickness, etc., have been determined. Special attention was given to the spark counting of alpha-tracks entering thin detectors at right angles. The applicability of the spark counting technique is demonstrated in angular distribution measurements of the ²⁷Al(p,α₀)²⁴Mg nuclear reaction at the Ep = 1899 keV resonance energy. For this study 15 μm thick Makrofol-G foils and a jumping spark counter of improved construction were used. (orig.)

  14. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  15. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The accuracies of the models were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
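
    In the Parzen sense, a kernel discriminant can be sketched as class-conditional kernel density estimates combined with class priors. The example below uses scikit-learn's KernelDensity on synthetic data (the data, bandwidth and kernels are our own choices, not the study's Indonesian credit records):

      import numpy as np
      from sklearn.neighbors import KernelDensity

      def fit_kernel_discriminant(X, y, kernel="gaussian", bandwidth=0.5):
          # one density estimate per class, plus empirical class priors
          models = {c: KernelDensity(kernel=kernel, bandwidth=bandwidth).fit(X[y == c])
                    for c in np.unique(y)}
          priors = {c: np.mean(y == c) for c in models}
          return models, priors

      def predict(models, priors, X):
          classes = sorted(models)
          scores = np.column_stack([models[c].score_samples(X) + np.log(priors[c])
                                    for c in classes])
          return np.array(classes)[np.argmax(scores, axis=1)]

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
      y = np.array([0] * 100 + [1] * 100)
      models, priors = fit_kernel_discriminant(X, y, kernel="epanechnikov")
      print((predict(models, priors, X) == y).mean())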

  16. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    ... kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence on the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.

  17. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016 (ACML 2016). Khanh Nguyen (Deakin University, Australia). Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to...

  18. Dual Spark Plugs For Stratified-Charge Rotary Engine

    Science.gov (United States)

    Abraham, John; Bracco, Frediano V.

    1996-01-01

    Fuel efficiency of stratified-charge, rotary, internal-combustion engine increased by improved design featuring dual spark plugs. Second spark plug ignites fuel on upstream side of main fuel injector; enabling faster burning and more nearly complete utilization of fuel.

  19. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  20. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR
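
    The permutation idea can be sketched compactly: compare the eigenvalues of the centred kernel matrix against eigenvalues obtained after independently permuting each feature column, and keep components that exceed a permutation quantile. The code below is only a rough sketch of that idea (the threshold, kernel scale and data are our own assumptions, and kPA's kernel-scale tuning is not reproduced):

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.preprocessing import KernelCenterer

      def kernel_eigenvalues(X, gamma):
          K = KernelCenterer().fit_transform(rbf_kernel(X, gamma=gamma))
          return np.sort(np.linalg.eigvalsh(K))[::-1]

      def parallel_analysis_order(X, gamma=0.1, n_perm=20, q=95, seed=0):
          rng = np.random.default_rng(seed)
          ev = kernel_eigenvalues(X, gamma)
          null = np.array([kernel_eigenvalues(
              np.column_stack([rng.permutation(col) for col in X.T]), gamma)
              for _ in range(n_perm)])
          threshold = np.percentile(null, q, axis=0)
          return int(np.sum(ev > threshold))   # retained model order

      X = np.random.default_rng(2).normal(size=(100, 5))
      X[:, 0] = 2 * X[:, 1] + 0.1 * X[:, 0]    # one genuinely structured direction
      print(parallel_analysis_order(X))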

  1. Theoretical investigation of a photoconductively switched high-voltage spark gap

    NARCIS (Netherlands)

    Broks, B.H.P.; Hendriks, J.; Brok, W.J.M.; Brussaard, G.J.H.; Mullen, van der J.J.A.M.

    2006-01-01

    In this contribution, a photoconductively switched high-voltage spark gap is modeled, with an emphasis on the switching behavior. It is known experimentally that not all of the voltage that is present at the input of the spark gap is switched; rather, a fraction of it drops across the spark gap.

  2. Combustion and operating characteristics of spark-ignition engines

    Science.gov (United States)

    Heywood, J. B.; Keck, J. C.; Beretta, G. P.; Watts, P. A.

    1980-01-01

    The spark-ignition engine turbulent flame propagation process was investigated. Then, using a spark-ignition engine cycle simulation and combustion model, the impact of turbocharging and heat transfer variations on engine power, efficiency, and NOx emissions was examined.

  3. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  4. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data is being generated from global surveillance systems and model simulations. It is widely used to analyze environmental problems, such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g. netCDF4, HDF4) as native formats, which are stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store them in memory as climateRDDs for processing. By leveraging Spark SQL and user defined functions (UDFs), climate data analysis operations can be conducted in intuitive SQL. ClimateSpark is evaluated with two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case is to conduct a spatiotemporal query and visualize the subset results in an animation; the other is to compare different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
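
    The SQL-plus-UDF style of analysis described here can be sketched with stock PySpark. Everything in the example below (the schema, the table, the UDF and the bounding box) is invented for illustration; ClimateSpark's actual spatiotemporal index over netCDF/HDF files in HDFS is not reproduced.

      from pyspark.sql import SparkSession
      from pyspark.sql.types import DoubleType

      spark = SparkSession.builder.appName("climate-sql-sketch").getOrCreate()

      df = spark.createDataFrame(
          [("2016-01-01", 40.0, -105.0, 271.3), ("2016-01-01", 41.0, -104.0, 268.9)],
          ["date", "lat", "lon", "temp_k"])
      df.createOrReplaceTempView("temperature")

      # register a UDF so it can be called from SQL
      spark.udf.register("k_to_c", lambda k: k - 273.15, DoubleType())

      spark.sql("""
          SELECT date, AVG(k_to_c(temp_k)) AS mean_temp_c
          FROM temperature
          WHERE lat BETWEEN 35 AND 45 AND lon BETWEEN -110 AND -100   -- bounding box
          GROUP BY date
      """).show()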

  5. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
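
    For reference, the defining (and entirely standard) reproducing property around which this theory is built: in a reproducing kernel Hilbert space H of functions on a set E with kernel K,

      \[
        K(\cdot, p) \in H
        \quad\text{and}\quad
        f(p) = \langle f,\; K(\cdot, p) \rangle_{H}
        \qquad \text{for all } f \in H,\ p \in E .
      \]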

  6. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  7. Notes on a storage manager for the Clouds kernel

    Science.gov (United States)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  8. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  9. TOWARDS FINDING A NEW KERNELIZED FUZZY C-MEANS CLUSTERING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2014-04-01

    Kernelized Fuzzy C-Means clustering is an attempt to improve the performance of conventional Fuzzy C-Means clustering. This technique, in which a kernel-induced distance function is used as the similarity measure instead of the Euclidean distance of conventional Fuzzy C-Means, has recently earned popularity in the research community. Like conventional Fuzzy C-Means, however, it suffers from inconsistency in its performance, because here too the initial centroids are obtained from randomly initialized membership values of the objects. Our present work proposes a new method in which the Subtractive clustering technique of Chiu is applied as a preprocessor to Kernelized Fuzzy C-Means clustering. With this new method we try not only to remove the inconsistency of Kernelized Fuzzy C-Means clustering but also to deal with situations where the number of clusters is not predetermined. We also provide a comparison of our method with the Subtractive clustering technique of Chiu and Kernelized Fuzzy C-Means clustering using two validity measures, namely Partition Coefficient and Clustering Entropy.
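
    The kernel-induced distance at the heart of this family of methods is easy to write down: with a Gaussian kernel K, d²(x, v) = K(x, x) − 2K(x, v) + K(v, v) = 2(1 − K(x, v)). The sketch below (a generic textbook-style membership update, not the paper's exact algorithm) applies the usual fuzzy c-means membership formula to that distance:

      import numpy as np

      def gaussian_kernel(x, v, sigma=1.0):
          return np.exp(-np.sum((x - v) ** 2, axis=-1) / (2 * sigma ** 2))

      def kfcm_memberships(X, centers, m=2.0, sigma=1.0):
          # d2[i, j] = kernel-induced squared distance between sample i and center j
          d2 = 2.0 * (1.0 - gaussian_kernel(X[:, None, :], centers[None, :, :], sigma))
          d2 = np.maximum(d2, 1e-12)
          inv = d2 ** (-1.0 / (m - 1.0))
          return inv / inv.sum(axis=1, keepdims=True)

      X = np.random.default_rng(3).normal(size=(10, 2))
      centers = X[:2].copy()
      print(kfcm_memberships(X, centers).sum(axis=1))   # each row sums to 1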

  10. Automated qualification and analysis of protective spark gaps for DC accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Srutarshi; Rajan, Rehim N.; Dewangan, S.; Sharma, D.K.; Patel, Rupesh; Bakhtsingh, R.I.; Gond, Seema; Waghmare, Abhay; Thakur, Nitin; Mittal, K.C. [Accelerator and Pulse Power Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    Protective spark gaps are used in the high voltage multiplier column of a 3 MeV DC Accelerator to prevent excessive voltage build-ups. A precise gap of 5 mm is maintained between the electrodes in these spark gaps for obtaining 120 kV ± 5 kV in a 6 kg/cm² SF₆ environment, which is the dielectric medium. There are 74 such spark gaps used in the multiplier. Each spark gap has to be qualified for electrical performance before fitting in the accelerator to ensure reliable operation. As the breakdown voltage stabilizes after a large number of sparks between the electrodes, the qualification process becomes time consuming and cumbersome. For qualifying a large number of spark gaps, an automatic breakdown analysis setup has been developed. This setup operates in air as the dielectric medium. The setup consists of a flyback topology based high voltage power supply with a maximum rating of 25 kV. This setup works in conjunction with a spark detection and automated shutdown circuit. The breakdown voltage is sensed using a peak detector circuit. The voltage breakdown data is recorded and the statistical distribution of the breakdown voltage has been analyzed. This paper describes details of the diagnostics and the spark gap qualification process based on the experimental data. (author)

  11. Spark - a modern approach for distributed analytics

    CERN Multimedia

    CERN. Geneva; Kothuri, Prasanth

    2016-01-01

    The Hadoop ecosystem is the leading open-source platform for distributed storage and processing of big data. It is a very popular system for implementing data warehouses and data lakes. Spark has also emerged as one of the leading engines for data analytics. The Hadoop platform is available at CERN as a central service provided by the IT department. By attending the session, a participant will acquire knowledge of the essential concepts needed to benefit from the parallel data processing offered by the Spark framework. The session is structured around practical examples and tutorials. Main topics: architecture overview (work distribution, the concepts of a worker and a driver); the computing concepts of transformations and actions; data processing APIs (RDD, DataFrame, and SparkSQL).
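
    A minimal PySpark sketch of the transformation/action distinction and the DataFrame/SparkSQL APIs (our own toy example, not the session's actual tutorial code):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("spark-basics-sketch").getOrCreate()
      sc = spark.sparkContext

      rdd = sc.parallelize(range(10))                # distribute data to the workers
      squares = rdd.map(lambda x: x * x)             # transformation: lazy, nothing runs yet
      evens = squares.filter(lambda x: x % 2 == 0)   # another lazy transformation
      print(evens.collect())                         # action: returns [0, 4, 16, 36, 64]
      print(evens.count())                           # action: returns 5

      df = squares.map(lambda x: (x,)).toDF(["square"])   # DataFrame API
      df.createOrReplaceTempView("squares")
      spark.sql("SELECT COUNT(*) AS n FROM squares WHERE square > 10").show()  # SparkSQL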

  12. Spark counting technique with an aluminium oxide film

    International Nuclear Information System (INIS)

    Kawai, H.; Koga, T.; Morishima, H.; Niwa, T.; Nishiwaki, Y.

    1980-01-01

    Automatic spark counting of etch-pits on a polycarbonate film produced by nuclear fission fragments is now used for neutron monitoring in several countries. A method was developed using an aluminium oxide film instead of a polycarbonate as the neutron detector. Aluminium oxide films were prepared as follows: A cleaned aluminium plate as an anode and a nickel plate as a cathode were immersed in dilute sulfuric acid solution and electric current flowed between the electrodes at 12 °C for 10-30 minutes. Electric current density was about 10 mA/cm². The aluminium plate was then kept in boiling water for 10-30 minutes for sealing. The thickness of the aluminium oxide layer formed was about 1 μm. The aluminium plate attached to a plate of suitable fissionable material, such as uranium or thorium, was irradiated with neutrons and set in a usual spark counter for fission track counting. One electrode was the aluminium plate and the other was an aluminized polyester sheet. Sparked pulses were counted with a usual scaler. The advantage of using spark counting with an aluminium oxide film for neutron monitoring is rapid measurement of neutron exposure, since chemical etching, which is indispensable for spark counting with a polycarbonate detector film, is not needed. (H.K.)

  13. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  14. Structure and characteristics of functional powder composite materials obtained by spark plasma sintering

    Science.gov (United States)

    Oglezneva, S. A.; Kachenyuk, M. N.; Kulmeteva, V. B.; Ogleznev, N. B.

    2017-07-01

    The article describes the results of spark plasma sintering of ceramic materials based on titanium carbide and titanium carbosilicide, ceramic composite materials based on zirconium oxide strengthened by carbon nanostructures, and copper-based composite materials for electrotechnical applications with additions of carbon structures and titanium carbosilicide. The research shows that spark plasma sintering can achieve a relative density of up to 98%. The effect of sintering temperature on the phase composition, density and porosity of the final product has been studied. It was found that with the addition of carbon nanostructures the relative density and hardness decrease, but the fracture strength of ZrO2 increases by up to a factor of two. The relative erosion resistance of electrodes made of the copper-based composite powder materials obtained by spark plasma sintering exceeds that of pure copper by up to a factor of 15 during electroerosion treatment of tool steel.

  15. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  16. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-μm, 19.7% ²³⁵U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-μm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-μm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  17. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k−t SPARKS)

    International Nuclear Information System (INIS)

    Park, Suhyung; Park, Jaeseok

    2015-01-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k−t space and coil dimension, has been widely used to reduce the number of signal encoding and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals is to be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k−t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k−t SPARKS incorporates Kalman-smoother self-calibration in k−t space and sparse signal recovery in x−f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k−t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling state transition while a coil-dependent noise statistic in describing measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k−t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors. (paper)

  18. Chaotic combustion in spark ignition engines

    International Nuclear Information System (INIS)

    Wendeker, Miroslaw; Czarnigowski, Jacek; Litak, Grzegorz; Szabelski, Kazimierz

    2003-01-01

    We analyse the combustion process in a spark ignition engine using the experimental data of an internal pressure during the combustion process and show that the system can be driven to chaotic behaviour. Our conclusion is based on the observation of unperiodicity in the time series, suitable stroboscopic maps and a complex structure of a reconstructed strange attractor. This analysis can explain that in some circumstances the level of noise in spark ignition engines increases considerably due to nonlinear dynamics of a combustion process

  19. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading order 2 ↔ 2 scattering kernel in scalar λϕ⁴. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  20. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  1. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
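
    Of the two approximation schemes mentioned, random Fourier features are the quicker to sketch. The example below (a generic RBF-kernel approximation, not the paper's ranking-specific solver) maps the data so that inner products of the mapped features approximate the kernel:

      import numpy as np

      def random_fourier_features(X, n_features=2000, gamma=0.5, seed=0):
          # map X so that z(x) . z(y) ~= exp(-gamma * ||x - y||^2)
          rng = np.random.default_rng(seed)
          W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
          b = rng.uniform(0, 2 * np.pi, size=n_features)
          return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

      X = np.random.default_rng(0).normal(size=(5, 3))
      Z = random_fourier_features(X)
      approx = Z @ Z.T
      exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
      print(np.abs(approx - exact).max())   # small approximation error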

  2. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  3. Target fabrication using laser and spark erosion machining

    International Nuclear Information System (INIS)

    Clement, X.; Coudeville, A.; Eyharts, P.; Perrine, J.P.; Rouillard, R.

    1982-01-01

    Fabrication of laser fusion targets requires a number of special techniques. We have developed both laser and spark erosion machining to produce minute parts of complex targets. A high repetition rate YAG laser at double frequency is used to etch various materials. For example, marks or patterns are often necessary on structured or advanced targets. The laser is also used to thin down plastic coated stalks. A spark erosion system has proved to be a versatile tool and we describe current fabrication processes like cutting, drilling, and ultra precise machining. Spark erosion has interesting features for target fabrication: it is a highly controllable and reproducible technique as well as relatively inexpensive

  4. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been accomplished through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  5. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.

  6. Formation of small sparks

    International Nuclear Information System (INIS)

    Barreto, E.; Jurenka, H.; Reynolds, S.I.

    1977-01-01

    The formation of a small incendiary spark at atmospheric pressure is identified with the transition from a weakly to a strongly ionized plasma. It is shown that initial gaseous ionization produced by avalanches and/or streamers always creates a high-temperature ideal electron gas that can shield the applied voltage difference and reduce ionization in the volume of the gas. The electron gas is collision dominated but able to maintain its high temperature, for times long compared to discharge events, through long-range Coulomb forces. In fact, electrons in the weakly ionized plasma constitute a collisionless independent fluid with a thermodynamic state that can be affected directly by field or density changes. Accordingly, with metal electrodes, cathode spot emission is always associated with the transition to a strongly ionized plasma. Neutral heating can be accomplished in two different ways. Effective dispersal of the electrons from the cathode leads to electron heating dominated by diffusion effects. Conversely, a fast rate of emission or rapid field changes can produce nonlinear wave propagation. It is shown that solitary waves are possible, and it is suggested that some spark transitions are associated with shock waves in the collisionless electron gas. In either the diffuse or nonlinear regime, neutral gas heating is controlled by collisions of ions with isotropic thermal electrons. This interaction is always subsequent to changes in state of the electron gas population. The basic results obtained should apply to all sparks

  7. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America
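
    For context, the conventional Wigner function against which the abstract contrasts its construction has the standard form below; per the abstract, the Laplace-kernel variant replaces the Fourier kernel in this integral with a Laplace-transform kernel, which is why its momentum variable may be complex (this summary of the substitution is our reading of the abstract, not a quoted definition).

      \[
        W(x, p) \;=\; \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty}
        \psi^{*}\!\left(x + \tfrac{y}{2}\right)\,
        \psi\!\left(x - \tfrac{y}{2}\right)\, e^{-\,i p y/\hbar}\, dy .
      \]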

  8. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
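
    To make the notion of a pairwise kernel concrete, the following sketch builds a symmetric pairwise kernel from a base sequence kernel; it is not the PRK implementation (which relies on weighted finite-state transducers), and the 3-mer spectrum base kernel and toy sequences are stand-ins.

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """Base kernel: dot product of k-mer count vectors of two sequences (a simple stand-in)."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

def pairwise_kernel(pair1, pair2, base=spectrum_kernel):
    """Similarity between two *pairs* of entities, symmetric in the ordering within each pair."""
    (a, b), (c, d) = pair1, pair2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

# Toy usage: two (enzyme-like, substrate-like) sequence pairs (made-up strings).
p1 = ("MKTAYIAKQR", "GATTACAGGA")
p2 = ("MKTAYLAKQR", "GATTACAGTA")
print(pairwise_kernel(p1, p2))
```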

  9. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  10. Sparking protection for MFTF-B Neutral Beam Power Supplies

    International Nuclear Information System (INIS)

    Cummings, D.B.

    1983-01-01

    This paper describes the upgrade of MFTF-B Neutral Beam Power Supplies for sparking protection. High performance ion sources spark repeatedly so ion source power supplies must be insensitive to sparking. The hot deck houses the series tetrode, arc and filament supplies, and controls. Hot deck shielding has been upgraded and a continuous shield around the arc, filament, gradient grid, and control cables now extends from the hot deck, through the core snubber, to the source. The shield carries accelerating current and connects only to the source. Shielded source cables go through an outer duct which now connects to a ground plane under the hot deck. This hybrid transmission line is a low inductance path for sparks discharging the stray capacitance of the hot deck and isolation transformers, reducing coupling to building steel. Parallel DC current return cables inside the duct lower inductance to reduce inductive turn-off transients. MOVs to ground further limit surges in the remote power supply return. Single point grounding is at the source. No control or rectifier components have been damaged nor are there any known malfunctions due to sparking up to 80 kV output

  12. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then can select what

  13. Spark Ignition Characteristics of a LO2/LCH4 Engine at Altitude Conditions

    Science.gov (United States)

    Kleinhenz, Julie; Sarmiento, Charles; Marshall, William

    2012-01-01

    The use of non-toxic propellants in future exploration vehicles would enable safer, more cost effective mission scenarios. One promising "green" alternative to existing hypergols is liquid methane/liquid oxygen. To demonstrate performance and prove feasibility of this propellant combination, a 100 lbf LO2/LCH4 engine was developed and tested under the NASA Propulsion and Cryogenic Advanced Development (PCAD) project. Since high ignition energy is a perceived drawback of this propellant combination, a test program was performed to explore ignition performance and reliability versus delivered spark energy. The sensitivity of ignition to spark timing and repetition rate was also examined. Three different exciter units were used with the engine's augmented (torch) igniter. Propellant temperature was also varied within the liquid range. Captured waveforms indicated spark behavior in hot fire conditions was inconsistent compared to the well-behaved dry sparks (in quiescent, room air). The escalating pressure and flow environment increases spark impedance and may at some point compromise an exciter's ability to deliver a spark. Reduced spark energies of these sparks result in more erratic ignitions and adversely affect ignition probability. The timing of the sparks relative to the pressure/flow conditions also impacted the probability of ignition. Sparks occurring early in the flow could trigger ignition with energies as low as 1-6 mJ, though multiple, similarly timed sparks of 55-75 mJ were required for reliable ignition. An optimum time interval for spark application and ignition coincided with propellant introduction to the igniter and engine. Shifts of ignition timing were manifested by changes in the characteristics of the resulting ignition.

  15. Using SPARK as a Solver for Modelica

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael; Wetter, Michael; Haves, Philip; Moshier, Michael A.; Sowell, Edward F.

    2008-06-30

    Modelica is an object-oriented acausal modeling language that is well positioned to become a de-facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. For generating run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results that show that for our air flow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  16. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  17. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    Science.gov (United States)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF) making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework, that extends Apache Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache Hadoop for parallel computing on a cluster, by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, and not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate performance of the various matrix libraries in distributed pipelines, such as Nd4j and Breeze. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These
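
    SciSpark itself is implemented in Scala around its sRDD, but the partition-files-and-map pattern it describes can be sketched in PySpark as below; the file paths, the variable name "PRECTOT", and the per-file reduction are hypothetical placeholders, not part of SciSpark.

```python
from pyspark import SparkContext
from netCDF4 import Dataset
import numpy as np

def max_precip(path):
    """Load one NetCDF granule and return its maximum precipitation value (toy reduction)."""
    with Dataset(path) as nc:
        return float(np.nanmax(nc.variables["PRECTOT"][:]))

if __name__ == "__main__":
    sc = SparkContext(appName="netcdf-demo")
    files = ["/data/merra/hour_%03d.nc" % i for i in range(240)]   # hypothetical granules
    # Partition the file list across the cluster; each executor reads and reduces its own granules.
    per_file_max = sc.parallelize(files, numSlices=24).map(max_precip)
    print("global max precipitation:", per_file_max.max())
    sc.stop()
```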

  18. Scattering profiles of sparks and combustibility of filter against hot sparks

    International Nuclear Information System (INIS)

    Tobita, Noriyuki; Okada, Takashi; Kashiro, Kashio

    2004-12-01

    An event in which a pre-filter caught fire took place in the glove box dismantlement facility of the Plutonium Production Facility on April 21, 2003. The direct cause of this event was considered to be sparks generated by an abrasive wheel cutter, some of which reached the pre-filter and eventually set it on fire. Further investigation revealed that there were other deficiencies which formed indirect causes of the event, i.e., the wheel cutter was used without a protective cover and an adequate shield against sparks was not installed during the operation. To prevent similar events in the future, the following corrective actions were introduced: the wheel cutter will not be used without a protective cover; an incombustible pre-filter will be used; a shield will be placed at the front of the pre-filter. We have conducted a series of experimental tests in order to evaluate and confirm the validity of these corrective actions as well as to determine the cause of the fire. This report presents the results of these tests. (author)

  19. Sintering, consolidation, reaction and crystal growth by the spark plasma system (SPS)

    Energy Technology Data Exchange (ETDEWEB)

    Omori, M. [Tohoku Univ., Sendai (Japan). Inst. for Materials Research

    2000-08-15

    The graphite die set in spark plasma system (SPS) is heated by a pulse direct current. Weak plasma, discharge impact, electric field and electric current, which are based on this current, induce good effects on materials in the die. The surface films of aluminum and pure WC powders are ruptured by the spark plasma. Pure AlN powder is sintered without sintering additives in the electric field. The spark plasma leaves discharge patterns on insulators. Organic fibers are etched by the spark plasma. Thermosetting polyimide is consolidated by the spark plasma. Insoluble polymonomethylsilane is rearranged into the soluble one by the spark plasma. A single crystal of CoSb{sub 3} is grown from the compound powders in the electric field by slow heating. Coupled crystals of eutectic powder are connected with each other in the electric field. (orig.)

  20. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  1. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2296, Agriculture Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards...): Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  2. SPARK Version 1.1 user manual

    International Nuclear Information System (INIS)

    Weissenburger, D.W.

    1988-01-01

    This manual describes the input required to use Version 1.1 of the SPARK computer code. SPARK 1.1 is a library of FORTRAN main programs and subprograms designed to calculate eddy currents on conducting surfaces where current flow is assumed zero in the direction normal to the surface. Surfaces are modeled with triangular and/or quadrilateral elements. Lorentz forces produced by the interaction of eddy currents with background magnetic fields can be output at element nodes in a form compatible with most structural analysis codes. In addition, magnetic fields due to eddy currents can be determined at points off the surface. Version 1.1 features eddy current streamline plotting with optional hidden-surface-removal graphics and topological enhancements that allow essentially any orientable surface to be modeled. SPARK also has extensive symmetry specification options. In order to make the manual as self-contained as possible, six appendices are included that present summaries of the symmetry options, topological options, coil options and code algorithms, with input and output examples. An edition of SPARK 1.1 is available on the Cray computers at the National Magnetic Fusion Energy Computer Center at Livermore, California. Another more generic edition is operational on the VAX computers at the Princeton Plasma Physics Laboratory and is available on magnetic tape by request. The generic edition requires either the GKS or PLOT10 graphics package and the IMSL or NAG mathematical package. Requests from outside the United States will be subject to applicable federal regulations regarding dissemination of computer programs. 22 refs

  4. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
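
    As a minimal illustration of the continuization step discussed above, the sketch below smooths a discrete score distribution with Gaussian and Epanechnikov kernels; it deliberately omits the mean- and variance-preserving rescaling used in operational kernel equating, so it only shows the role of the kernel and the bandwidth.

```python
import numpy as np

def gaussian(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def continuize(scores, probs, h, kernel=gaussian):
    """Return a density f(x) = sum_j p_j * kernel((x - x_j)/h) / h for a discrete score distribution."""
    scores, probs = np.asarray(scores, float), np.asarray(probs, float)
    def f(x):
        u = (np.atleast_1d(x)[:, None] - scores[None, :]) / h
        return (kernel(u) * probs[None, :]).sum(axis=1) / h
    return f

# Toy usage: a 0-10 point test with a spike at the top score (made-up frequencies).
x_j = np.arange(11)
p_j = np.array([1, 2, 4, 7, 10, 14, 16, 15, 12, 9, 10], float); p_j /= p_j.sum()
grid = np.linspace(-1, 12, 5)
print(continuize(x_j, p_j, h=0.8)(grid))
print(continuize(x_j, p_j, h=0.8, kernel=epanechnikov)(grid))
```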

  5. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  6. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  7. SciDB versus Spark: A Preliminary Comparison Based on an Earth Science Use Case

    Science.gov (United States)

    Clune, T.; Kuo, K. S.; Doan, K.; Oloso, A.

    2015-12-01

    We compare two Big Data technologies, SciDB and Spark, for performance, usability, and extensibility, when applied to a representative Earth science use case. SciDB is a new-generation parallel distributed database management system (DBMS) based on the array data model that is capable of handling multidimensional arrays efficiently but requires lengthy data ingest prior to analysis, whereas Spark is a fast and general engine for large scale data processing that can immediately process raw data files and thereby avoid the ingest process. Once data have been ingested, SciDB is very efficient in database operations such as subsetting. Spark, on the other hand, provides greater flexibility by supporting a wide variety of high-level tools including DBMS's. For the performance aspect of this preliminary comparison, we configure Spark to operate directly on text or binary data files and thereby limit the need for additional tools. Arguably, a more appropriate comparison would involve exploring other configurations of Spark which exploit supported high-level tools, but that is beyond our current resources. To make the comparison as "fair" as possible, we export the arrays produced by SciDB into text files (or converting them to binary files) for the intake by Spark and thereby avoid any additional file processing penalties. The Earth science use case selected for this comparison is the identification and tracking of snowstorms in the NASA Modern Era Retrospective-analysis for Research and Applications (MERRA) reanalysis data. The identification portion of the use case is to flag all grid cells of the MERRA high-resolution hourly data that satisfies our criteria for snowstorm, whereas the tracking portion connects flagged cells adjacent in time and space to form a snowstorm episode. We will report the results of our comparisons at this presentation.

  8. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
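
    A rough batch-only sketch of the idea behind the nonlinear projection trick is given below: factor the training kernel matrix to obtain explicit coordinates, then treat those coordinates with any ordinary linear method. Whether this matches the paper's exact formulation, in particular its incremental update, is an assumption.

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Return Y (n x d) with Y @ Y.T approximately equal to K, via an eigendecomposition."""
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol                       # discard numerically zero directions
    return vecs[:, keep] * np.sqrt(vals[keep])

def npt_embed_new(K_new, Y):
    """Coordinates of new samples from their kernel values K_new (m x n) against training data."""
    # Least-squares solve of y @ Y.T = k  =>  y = k @ Y @ (Y.T Y)^-1
    return K_new @ Y @ np.linalg.pinv(Y.T @ Y)

# Toy usage with an RBF kernel on random points.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
Y = npt_coordinates(K)
print(np.allclose(Y @ Y.T, K, atol=1e-6))   # True: the coordinates reproduce the kernel matrix
```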

  9. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  10. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
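
    The following is a hedged, squared-loss stand-in (not the authors' LS-SVM formulation) for the idea of first-level kernel learning: the kernel expansion coefficients and an RBF width are optimised jointly, leaving only two regularisation parameters (lam_f, lam_k) for model selection.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)     # toy regression data (assumption)

def rbf(X, log_width):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * np.exp(2 * log_width)))

def objective(params, lam_f=1e-2, lam_k=1e-2):
    alpha, log_width = params[:-1], params[-1]
    K = rbf(X, log_width)
    f = K @ alpha
    return (np.sum((y - f) ** 2)          # data fit
            + lam_f * alpha @ K @ alpha   # RKHS norm penalty on the function
            + lam_k * log_width ** 2)     # extra regulariser acting on the kernel parameter

res = minimize(objective, x0=np.zeros(len(y) + 1), method="L-BFGS-B")
print("learned log kernel width:", res.x[-1])
```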

  11. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  12. Spark chamber used for the visualization of the 125I labeled thyroid

    International Nuclear Information System (INIS)

    Morucci, Jean-Pierre; Seigneur, Alain; Lansiart, Alain

    1971-03-01

    This spark chamber is a stationary detector used for the visualization of the 125I-labeled thyroid; it is sensitive to X-rays and low-energy gamma rays. This device is filled mainly with pressurized xenon (1.5 kg/cm²) and behaves as an X-ray image intensifier: the incident radiation is detected and initiates a spark. The energy dissipated by the spark is reduced and controlled by a double-coated anode, while an electronic circuit triggered by the initiation of the spark discharges the detector capacitance. The sparks are recorded on a photographic plate during the examination. X-ray optics are used for collimation between the thyroid and the detector. A modulation transfer function was measured for 125I. Communication theory was used to determine the best way of combining the collimator and spark chamber. This device is being used in the Service Hospitalier Frederic Joliot at Orsay. Its performance is superior to that of conventional scintigraphs. Further applications are envisaged [fr]

  13. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpaeae, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  14. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    This paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering, ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  15. A miniature spark counter for public communication and education

    International Nuclear Information System (INIS)

    Mao, C.H.; Weng, P.S.

    1987-01-01

    The fabrication of a miniature spark counter for public communication and education using naturally occurring radon as a radioactive source without involving any man-made radioactivity is described. The battery-powered miniature spark counter weighs 2.07 kg with a volume of 4.844 x 10⁻⁴ m³. The circuitry consists of seven major components: timer, high-voltage power supply, attenuator, noninverting amplifier, low-pass filter, one-shot generator, and counter. Cellulose nitrate films irradiated with alpha particles from radon emanating from soil were etched and counted. The visible sparks during counting are rather heuristic, which can be used to demonstrate naturally occurring radioactivity in classrooms or showplaces

  16. Formation and properties of two-phase bulk metallic glasses by spark plasma sintering

    Energy Technology Data Exchange (ETDEWEB)

    Xie Guoqiang, E-mail: xiegq@imr.tohoku.ac.jp [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Louzguine-Luzgin, D.V. [WPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577 (Japan); Inoue, Akihisa [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); WPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577 (Japan)

    2011-06-15

    Research highlights: > Two-phase bulk metallic glasses with high strength and good soft magnetic properties as well as satisfying large-size requirements were produced by spark plasma sintering. > Effects of sintering temperature on thermal stability, microstructure, mechanical and magnetic properties were investigated. > Densified samples were obtained by spark plasma sintering at above 773 K. - Abstract: Using a mixture of the gas-atomized Ni52.5Nb10Zr15Ti15Pt7.5 and Fe73Si7B17Nb3 glassy alloy powders, we produced a two-phase bulk metallic glass (BMG) with high strength and good soft magnetic properties as well as satisfying large-size requirements by the spark plasma sintering (SPS) process. The two kinds of glassy particulates were homogeneously dispersed in each other. With an increase in sintering temperature, the density of the produced samples increased, and densified samples were obtained by the SPS process at above 773 K. A good bonding state among the Ni- and Fe-based glassy particulates was achieved.

  17. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
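
    A minimal sketch of the convolution relationship described above: dose is obtained by convolving a primary-fluence distribution with a dose-spread kernel, and a normalised kernel conserves energy. The Gaussian kernel and the 2D geometry are simplifications for demonstration only.

```python
import numpy as np
from scipy.signal import fftconvolve

# Primary fluence: a uniform 10 x 10 cm field on a 1 mm grid (toy example).
fluence = np.zeros((200, 200))
fluence[50:150, 50:150] = 1.0

# Normalised dose-spread kernel (energy deposited per unit primary energy released).
yy, xx = np.mgrid[-30:31, -30:31]
kernel = np.exp(-(xx**2 + yy**2) / (2 * 5.0**2))
kernel /= kernel.sum()          # conservation of energy: the kernel integrates to 1

dose = fftconvolve(fluence, kernel, mode="same")
print("energy in = %.3f, energy out = %.3f" % (fluence.sum(), dose.sum()))
```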

  18. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  19. The performance of a hybrid spark chamber beta-ray camera

    International Nuclear Information System (INIS)

    Aoyama, Takahiko; Watanabe, Tamaki

    1978-01-01

    This paper describes the performance of a hybrid spark chamber for measuring the distribution of β-ray emitting radionuclides on a plane source, which was developed to improve the instability of usual self-triggering spark chambers. The chamber consists of a parallel plate spark chamber gap and a parallel plate proportional chamber gap composed of mesh electrodes in the same gas space, and is operated by flowing gas, a mixture of argon and ethanol saturated vapor at 0 °C, continuously through it. Instability is due to the occurrence of spurious sparks not caused by incident particles, and it becomes conspicuous at small intensities of incident particles. The hybrid spark chamber enabled us to obtain a good counting plateau, that is, good stability especially for small intensities of β-rays and even for the background, by setting the gas multiplication in the proportional chamber gap moderately high. Good spatial resolution of less than 1 mm was obtained for ³H and ¹⁴C by keeping the distance between the chamber cathode and the source less than 1 mm. In order to obtain good spatial resolution, it is desirable to keep the overvoltage as small as possible, while a small overvoltage results in the deterioration of the uniformity of sensitivity. It was found by theoretical estimation and experiment that for a given large overvoltage the spatial resolution was improved by increasing the gas multiplication in the proportional chamber gap. The hybrid spark chamber has a relatively long dead time. When there are a number of active spots having different activities in the detection area, the sparking efficiency for a weak active spot also decreases because of the large counting loss due to the total strong activity. (auth.)

  20. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that if $a\in L^2[0,\infty)$, then solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.

  1. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  2. Experimental investigations of argon spark gap recovery times by developing a high voltage double pulse generator.

    Science.gov (United States)

    Reddy, C S; Patel, A S; Naresh, P; Sharma, Archana; Mittal, K C

    2014-06-01

    The voltage recovery in a spark gap for repetitive switching has long been a research interest. A two-pulse technique is used to determine the voltage recovery times of a gas spark gap switch with argon gas. The first pulse is applied to the spark gap to over-volt the gap and initiate the breakdown, and the second pulse is used to determine the recovery voltage of the gap. A pulse-transformer-based double pulse generator capable of generating 40 kV peak pulses with a rise time of 300 ns and 1.5 μs FWHM, and with a delay of 10 μs-1 s, was developed. A matrix transformer topology is used to achieve fast rise times by reducing the L(l)C(d) product in the circuit. Recovery experiments have been conducted for 2 mm, 3 mm, and 4 mm gap lengths with 0-2 bar pressure of argon gas. Electrodes of the spark gap chamber are of Rogowski profile type, made of stainless steel, 15 mm thick, and are used in the recovery study. The variation in the gap distance and pressure affects the recovery rate of the spark gap. An intermediate plateau is observed in the spark gap recovery curves. Recovery time decreases with increase in pressure, and shorter gaps recover faster than longer gaps.

  3. Exploring the Performance of Spark for a Scientific Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba [Fermilab; Kowalkowski, Jim [Fermilab; Paterno, Marc [Fermilab

    2016-01-01

    We present an evaluation of the performance of a Spark implementation of a classification algorithm in the domain of High Energy Physics (HEP). Spark is a general engine for in-memory, large-scale data processing, and is designed for applications where similar repeated analysis is performed on the same large data sets. Classification problems are one of the most common and critical data processing tasks across many domains. Many of these data processing tasks are both computation- and data-intensive, involving complex numerical computations employing extremely large data sets. We evaluated the performance of the Spark implementation on Cori, a NERSC resource, and compared the results to an untuned MPI implementation of the same algorithm. While the Spark implementation scaled well, it is not competitive in speed to our MPI implementation, even when using significantly greater computational resources.

  4. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability on the data. Therefore we propose to use so ...tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.

  5. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder which is characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus by using the kernel k-means algorithm. Kernel k-means is an algorithm which was developed from the k-means algorithm. Kernel k-means uses kernel learning and is therefore able to handle data that are not linearly separable, which is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experiment results show that kernel k-means has good performance and is much better than SOM.
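
    A minimal kernel k-means sketch (not the authors' code) is given below: cluster assignments are updated from distances computed entirely in the kernel matrix, which is what lets the method handle data that are not linearly separable. The RBF kernel and the two-ring toy data are assumptions for illustration.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            nc = max(mask.sum(), 1)
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage on two concentric rings, which plain k-means cannot separate.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.r_[np.full(100, 1.0), np.full(100, 3.0)] + 0.1 * rng.normal(size=200)
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
labels = kernel_kmeans(np.exp(-sq / 2.0), n_clusters=2)
print(np.bincount(labels))
```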

  6. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  7. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM. In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. Wavelet kernel extreme learning machine (WELM maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration in virtue of Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real time performance.

  8. Straight-chain halocarbon forming fluids for TRISO fuel kernel production – Tests with yttria-stabilized zirconia microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Metallurgical and Materials Engineering Department, Colorado Center for Advanced Ceramics, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Braley, J.C. [Nuclear Science and Engineering Program, Chemistry and Geochemistry Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States)

    2015-03-15

    Highlights: • YSZ TRISO kernels formed in three alternative, non-hazardous forming fluids. • Kernels characterized for size, shape, pore/grain size, density, and composition. • Bromotetradecane is suitable for further investigation with uranium-based precursor. - Abstract: Current methods of TRISO fuel kernel production in the United States use a sol–gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  9. On- and off-axis spectral emission features from laser-produced gas breakdown plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Harilal, S. S.; Skrodzki, P. J.; Miloshevsky, A.; Brumfield, B. E.; Phillips, M. C.; Miloshevsky, G.

    2017-06-01

    Laser-heated gas breakdown plasmas or sparks emit profoundly in the ultraviolet and visible region of the electromagnetic spectrum with contributions from ionic, atomic, and molecular species. Laser-created kernels expand into a cold ambient with high velocities during their early lifetime, followed by confinement of the plasma kernel and eventual collapse. However, the plasma kernels produced during laser breakdown of gases are also capable of exciting and ionizing the surrounding ambient medium. Two mechanisms can be responsible for excitation and ionization of the surrounding ambient: viz., photoexcitation and ionization by intense ultraviolet emission from the sparks produced during the early times of their creation and/or heating by strong shocks generated by the kernel during its expansion into the ambient. In this study, an investigation is made of the on- and off-axis spectral emission features of laser-induced plasma breakdown kernels generated in atmospheric pressure conditions with an aim to elucidate the mechanisms leading to ambient excitation and emission. Pulses from an Nd:YAG laser emitting at 1064 nm with 6 ns pulse duration are used to generate plasma kernels. Laser sparks were generated in air, argon, and helium gases to provide different physical properties of expansion dynamics and plasma chemistry considering the differences in laser absorption properties, mass density and speciation. Point shadowgraphy and time-resolved imaging were used to evaluate the shock wave and spark self-emission morphology at early and late times, while space- and time-resolved spectroscopy is used for evaluating the emission features as well as for inferring plasma fundamentals on and off axis. Structure and dynamics of the plasma kernel obtained using imaging techniques are also compared to numerical simulations using a computational fluid dynamics code. The emission from the kernel showed that spectral features from ions, atoms and molecules are separated in

  10. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  11. Kernel-based whole-genome prediction of complex traits: a review.

    Science.gov (United States)

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
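
    A hedged sketch of the kind of whole-genome kernel (RKHS) regression reviewed above is given below; the Gaussian kernel on a marker matrix, the median-heuristic bandwidth, the fixed shrinkage parameter, and the simulated genotypes are placeholders rather than a recommended analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 1000
M = rng.integers(0, 3, size=(n, p)).astype(float)                 # SNP genotypes coded 0/1/2
beta = rng.normal(scale=0.05, size=p)
y = M @ beta + 0.5 * M[:, 0] * M[:, 1] + rng.normal(size=n)        # additive signal + one epistatic pair

def sq_dists(A, B):
    """Pairwise squared Euclidean distances without forming the full difference tensor."""
    return (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T

train, test = np.arange(250), np.arange(250, n)
h = np.median(sq_dists(M[train], M[train]))                        # median heuristic for the bandwidth
K = np.exp(-sq_dists(M[train], M[train]) / h)                      # Gaussian kernel between individuals
alpha = np.linalg.solve(K + 1.0 * np.eye(len(train)), y[train])    # kernel ridge coefficients
y_hat = np.exp(-sq_dists(M[test], M[train]) / h) @ alpha
print("predictive correlation:", np.corrcoef(y[test], y_hat)[0, 1])
```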

  13. Measurements of Radon Concentration in Yemen Using Spark Counter

    International Nuclear Information System (INIS)

    Arafa, W.; Abou-Leila, M.; Hafiz, M.E.; Al-Glal, N.

    2011-01-01

    A spark counter has been designed and realized, and the optimum applied voltage was found to be 600 V. Excellent agreement was observed between the number of tracks counted by the spark counter and readings by optical microscope. Radon concentration measurements in some houses in the Sana'a and Hodeidah cities in Yemen were performed using LR-115 SSNTDs and the spark counter system. The average radon concentration in both cities was far lower than the alert value. The results showed that the radon concentration in the Sana'a metropolitan area was higher than that in Hodeidah city. It was also observed that old residential houses had higher levels of radon concentration compared to newly built houses in the Sana'a metropolitan area.

  14. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy, while surface-split kernels were detected with about 80% accuracy. (author)

  15. Neutron bursts from long laboratory sparks

    Science.gov (United States)

    Kochkin, P.; Lehtinen, N. G.; Montanya, J.; Van Deursen, A.; Ostgaard, N.

    2016-12-01

    Neutron emission in association with thunderstorms and lightning discharges has been reported by different investigators from ground-based observation platforms. In both cases such emission is explained by photonuclear reactions, since high-energy gamma-rays in sufficient fluxes are routinely detected from both lightning and thunderclouds. The required gamma-rays are presumably generated by high-energy electrons through the bremsstrahlung process after their acceleration via cold and/or relativistic runaway mechanisms. This phenomenon attracted moderate scientific attention until fast neutron bursts (up to 10 MeV) from long 1 MV laboratory sparks were reported. Clearly, with such a relatively low applied voltage the electrons are unable to accelerate to the energies required for photo- or electro-disintegration. Moreover, none of the known elementary neutron generation processes can readily explain this emission. We performed an independent laboratory experiment on long sparks with the aim of confirming or disproving the neutron emission from them. The experimental setup was assembled at the High-Voltage Laboratory in Barcelona and contained a Marx generator in a cone-cone spark gap configuration. The applied voltage was as low as 800 kV and the gap distance was only 60 cm. Two ns-fast cameras were located near the gap, capturing short-exposure images of the pre-breakdown phenomenon at the expected neutron generation time. A plastic scintillation detector sensitive to neutrons was covered in 11 cm of lead and placed near the spark gap. The detector was calibrated and showed good performance in neutron detection. Apart from that, the voltage, the currents through both electrodes, and three X-ray detectors were also monitored in a sophisticated measuring system. We will give an overview of the previous experimental and theoretical work on this topic and present the results of our new experimental campaign. The conclusions are based on measurements with a good signal-to-noise ratio and are

  16. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  17. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
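
    As a loose illustration of the kind of linear discriminant classification on colour descriptors described above (not the authors' pipeline), the sketch below trains scikit-learn's LDA on a synthetic table of per-kernel RGB and hue features; the feature means and labels are invented for the example.

    # Sketch: linear discriminant analysis on per-kernel colour descriptors (R, G, B, hue).
    # The feature matrix and labels are synthetic stand-ins for image-analysis output.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    healthy = rng.normal([140, 120, 80, 35], 10, size=(200, 4))   # hypothetical RGBH means
    damaged = rng.normal([170, 150, 110, 25], 10, size=(200, 4))
    X = np.vstack([healthy, damaged])
    y = np.array([0] * 200 + [1] * 200)        # 0 = healthy, 1 = Fusarium-damaged

    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, X, y, cv=5).mean())   # cross-validated accuracy estimate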

  18. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
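
    The weighted eigenfunction expansion described above can be illustrated, very loosely, on a one-dimensional chain graph instead of a mandible surface: heat-kernel weights exp(-t*lambda_k) are applied to the graph-Laplacian eigencomponents of a noisy signal. The diffusion time t and the toy signal are arbitrary choices for the sketch.

    # Sketch: heat-kernel smoothing as a weighted eigenfunction expansion,
    # demonstrated on a 1-D chain graph instead of a triangulated surface.
    import numpy as np

    n = 200
    # Graph Laplacian of a path graph as a stand-in for the Laplace-Beltrami operator.
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    lam, psi = np.linalg.eigh(L)          # eigenvalues and eigenfunctions

    x = np.linspace(0, 1, n)
    signal = np.sin(2 * np.pi * x)
    noisy = signal + 0.3 * np.random.default_rng(2).normal(size=n)

    t = 5.0                                # diffusion time (bandwidth), arbitrary here
    beta = psi.T @ noisy                   # expansion coefficients <f, psi_k>
    smoothed = psi @ (np.exp(-t * lam) * beta)
    print(np.mean((smoothed - signal) ** 2), np.mean((noisy - signal) ** 2))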

  19. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but it uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  20. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
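
    For orientation only, the sketch below implements a plain fixed-bandwidth Gaussian kernel density estimate together with a simple adaptive-bandwidth (Abramson-type) variant; it is a baseline illustration of adaptive kernels in KDE, not the boosting algorithms proposed in these papers.

    # Sketch: fixed-bandwidth Gaussian KDE and a simple adaptive-bandwidth variant
    # (local bandwidths scaled by the inverse square root of a pilot density).
    import numpy as np

    def gauss_kde(x_eval, data, h):
        u = (x_eval[:, None] - data[None, :]) / h
        return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h), axis=1)

    def adaptive_kde(x_eval, data, h):
        pilot = gauss_kde(data, data, h)                        # pilot density at the data
        lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5  # local bandwidth factors
        u = (x_eval[:, None] - data[None, :]) / (h * lam[None, :])
        return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h * lam[None, :]), axis=1)

    data = np.random.default_rng(3).normal(size=300)
    grid = np.linspace(-4, 4, 9)
    print(gauss_kde(grid, data, h=0.4))
    print(adaptive_kde(grid, data, h=0.4))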

  1. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weakness areas (flaws) that can be attacked by malicious code, leading to compromise of the kernel.

  2. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  3. Contribution to the study of 'Pseudo-spark' discharges applied to the realisation of latch devices

    International Nuclear Information System (INIS)

    Bauville, Gerard

    1994-01-01

    The objective of this research thesis is to study discharges growing from a hollow electrode geometry at pressures on the low-pressure side of the Paschen minimum. The study characterises the main conduction phase by experimentally determining the discharge voltage and current. Based on a numerical analysis, the author deduces macroscopic characteristics such as the mean voltage and the dissipated energy, with respect to the variation of parameters such as gas pressure and nature, discharge duration, and electrode cavity geometry. After a first part on switches (technological applications, switches, pseudo-spark breakers), the author addresses the discharges (presentation of a 'pseudo-spark'-type discharge, the physical mechanisms involved, methods of initiating pseudo-spark discharges, triggering by a magnetic field pulse). The next part describes the test bench in detail (electrodes, triggering system, electric configurations), and the last part reports the experimental study. It addresses the following issues: distribution of magnetic field lines, voltage drop, conjunction phase, discharge footprints on the surfaces, propagation rate, and disjunction [fr]

  4. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
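
    For readers unfamiliar with SPH kernels, the sketch below evaluates the classic one-dimensional cubic spline, a bell-shaped kernel of the type favoured in the analysis above, and checks its unit normalization numerically; the 2/(3h) factor is the standard 1-D normalization for this spline.

    # Sketch: the classic 1-D cubic spline (bell-shaped) SPH kernel W(r, h).
    import numpy as np

    def cubic_spline_1d(r, h):
        q = np.abs(r) / h
        sigma = 2.0 / (3.0 * h)          # 1-D normalization constant
        w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    r = np.linspace(-3, 3, 601)
    w = cubic_spline_1d(r, h=1.0)
    print(np.trapz(w, r))   # should be close to 1 (unit normalization)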

  6. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  7. Controlling spark timing for consecutive cycles to reduce the cyclic variations of SI engines

    International Nuclear Information System (INIS)

    Kaleli, Alirıza; Ceviz, Mehmet Akif; Erenturk, Köksal

    2015-01-01

    Minimization of the cyclic variations is one of the most important design goals for spark-ignited engines. The primary motivation of this study is to reduce the cyclic variations in spark ignition engines by controlling the spark timing for consecutive cycles. A stochastic model relating spark timing and in-cylinder maximum pressure was developed using system identification techniques. The in-cylinder maximum pressure of the next cycle was predicted with this model. Minimum variance and generalized minimum variance controllers were designed to regulate the in-cylinder maximum pressure by changing the spark timing for consecutive cycles of the test engine. The resulting control algorithms were built in the LabVIEW environment and deployed to a Field Programmable Gate Array (FPGA) chassis. According to the test results, the in-cylinder maximum pressure of the next pressure cycle can be predicted fairly well, and the spark timing can be regulated to keep the in-cylinder maximum pressure in a desired band so as to reduce the cyclic variations. In fixed spark timing experiments, the COV of Pmax and COV of imep were 3.764% and 0.677%, whereas they decreased to 3.208% and 0.533%, respectively, when the GMV controller was applied. - Highlights: • Cycle-by-cycle spark timing control was carried out. • A stochastic process model was identified between Pmax and the spark timing. • The cyclic variations in Pmax were decreased by keeping it in a desired band. • Different controllers were used to adjust the spark timing signal of the next cycle. • COV of Pmax was decreased by about 15% by using the GMV controller
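
    As a very loose sketch of the underlying idea (predict the next cycle's peak pressure from logged cycles and choose a spark timing that steers it toward a target), the code below fits a simple least-squares ARX-type model on synthetic data and inverts it for the next spark advance. It is not the minimum variance or GMV controllers designed in the paper, and all signals, units and numbers are invented.

    # Sketch: fit Pmax[k+1] ~ a*Pmax[k] + b*theta[k+1] + c by least squares on logged
    # cycles, then pick the spark advance that drives the predicted Pmax to a target.
    import numpy as np

    rng = np.random.default_rng(4)
    theta = rng.uniform(10, 30, size=500)                    # spark advance, deg bTDC (synthetic)
    pmax = 20 + 0.8 * theta + rng.normal(0, 2, size=500)     # synthetic peak pressures, bar

    # Regressors: previous-cycle Pmax and spark timing of the current cycle.
    A = np.column_stack([pmax[:-1], theta[1:], np.ones(len(pmax) - 1)])
    coef, *_ = np.linalg.lstsq(A, pmax[1:], rcond=None)
    a, b, c = coef

    target = 40.0                                            # desired Pmax, bar (arbitrary)
    next_theta = (target - a * pmax[-1] - c) / b             # invert the model for the input
    print(round(next_theta, 2))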

  8. Effects of tetracaine on voltage-activated calcium sparks in frog intact skeletal muscle fibers.

    Science.gov (United States)

    Hollingworth, Stephen; Chandler, W Knox; Baylor, Stephen M

    2006-03-01

    The properties of Ca(2+) sparks in frog intact skeletal muscle fibers depolarized with 13 mM [K(+)] Ringer's are well described by a computational model with a Ca(2+) source flux of amplitude 2.5 pA (units of current) and duration 4.6 ms (18 degrees C; Model 2 of Baylor et al., 2002). This result, in combination with the values of single-channel Ca(2+) current reported for ryanodine receptors (RyRs) in bilayers under physiological ion conditions, 0.5 pA (Kettlun et al., 2003) to 2 pA (Tinker et al., 1993), suggests that 1-5 RyR Ca(2+) release channels open during a voltage-activated Ca(2+) spark in an intact fiber. To distinguish between one and greater than one channel per spark, sparks were measured in 8 mM [K(+)] Ringer's in the absence and presence of tetracaine, an inhibitor of RyR channel openings in bilayers. The most prominent effect of 75-100 microM tetracaine was an approximately sixfold reduction in spark frequency. The remaining sparks showed significant reductions in the mean values of peak amplitude, decay time constant, full duration at half maximum (FDHM), full width at half maximum (FWHM), and mass, but not in the mean value of rise time. Spark properties in tetracaine were simulated with an updated spark model that differed in minor ways from our previous model. The simulations show that (a) the properties of sparks in tetracaine are those expected if tetracaine reduces the number of active RyR Ca(2+) channels per spark, and (b) the single-channel Ca(2+) current of an RyR channel is normal voltage-activated sparks (i.e., in the absence of tetracaine) are produced by two or more active RyR Ca(2+) channels. The question of how the activation of multiple RyRs is coordinated is discussed.

  9. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
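
    A minimal sketch of a kernel-weighted estimate with the Epanechnikov kernel is given below, assuming a hypothetical setup in which a missing daily value is estimated from neighbouring stations weighted by distance; this distance-weighting scheme is an assumption for illustration and not necessarily the exact formulation used in the study.

    # Sketch: Epanechnikov-kernel weighted estimate of a missing daily value from
    # neighbouring stations; station values and distances are synthetic.
    import numpy as np

    def epanechnikov(u):
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    def kernel_estimate(values, distances, bandwidth):
        w = epanechnikov(distances / bandwidth)
        if w.sum() == 0:
            return np.nan            # no station inside the bandwidth
        return np.sum(w * values) / np.sum(w)

    neighbour_rain = np.array([4.2, 3.8, 6.1, 0.0])      # mm, synthetic
    neighbour_dist = np.array([5.0, 12.0, 20.0, 45.0])   # km, synthetic
    print(kernel_estimate(neighbour_rain, neighbour_dist, bandwidth=30.0))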

  10. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  11. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  12. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of the kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Provided that several featured kernel functions are available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
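
    In the same spirit as the hybrid kernel described above (though not the authors' exact configuration), the sketch below passes a hand-rolled linear combination of an RBF kernel and a polynomial kernel to scikit-learn's SVR; the mixing weight, kernel parameters and the synthetic flow-like data are placeholders.

    # Sketch: SVR with a weighted sum of RBF and polynomial kernels.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
        return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

    rng = np.random.default_rng(5)
    X = rng.uniform(0, 1, size=(120, 3))            # e.g. lagged flows / climate indices (synthetic)
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=120)

    model = SVR(kernel=hybrid_kernel, C=10.0)
    model.fit(X[:100], y[:100])
    print(model.score(X[100:], y[100:]))            # R^2 on held-out points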

  13. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding

  14. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL has a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM offers an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  15. Erosion on spark plug electrodes; Funkenerosion an Zuendkerzenelektroden

    Energy Technology Data Exchange (ETDEWEB)

    Rager, J.

    2006-07-01

    The durability of spark plugs is mainly determined by spark gap widening, which is caused by electrode wear. Knowledge about the erosion mechanisms of spark plug materials is of fundamental interest for the development of materials with a high resistance against electrode erosion. It is therefore crucial to identify those parameters which significantly influence the erosion behaviour of a material. In this work, a reliable and reproducible testing method is presented which produces and characterizes electrode wear under well-defined conditions and which is capable of altering parameters specifically. Endurance tests were carried out to study the dependence of the wear behaviour of pure nickel and platinum on the electrode temperature, gas, electrode gap, electrode diameter, atmospheric pressure, and partial pressure of oxygen. It was shown that erosion under nitrogen is negligible, irrespective of the material. This disproves all of the common mechanisms discussed in the literature to explain material loss of spark plug electrodes. Based on this observation and the variation of the mentioned parameters, a new erosion model was deduced. This relies on an oxidation of the electrode material and describes the erosion of nickel and platinum separately. For nickel, electrode wear is caused by the removal of an oxide layer by the spark. In the case of platinum, material loss occurs due to the plasma-assisted formation and subsequent evaporation of volatile oxides in the cathode spot. On the basis of this mechanism a new composite material was developed whose erosion resistance is superior to pure platinum. Oxidation resistant metal oxide particles were added to a platinum matrix, thus leading to a higher erosion resistance of the composite. However, this can be decreased by a side reaction, the separation of oxygen from the metal oxides, which effectively assists the oxidation of the matrix. This reaction can be suppressed by using highly stable oxides, characterized by a large negative Gibbs

  16. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
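
    A minimal sketch of the kind of analysis referred to above is given below: kernel PCA with a Gaussian kernel is applied to a synthetic two-band (time-1 vs. time-2) pixel sample, and pixels with large magnitude on the second component are flagged as candidate changes. The data, the gamma value and the flagging rule are illustrative assumptions, not the paper's setup.

    # Sketch: Gaussian-kernel PCA on a bitemporal two-band pixel sample.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(6)
    no_change = rng.normal(0, 1, size=(950, 1)) @ np.ones((1, 2)) + 0.05 * rng.normal(size=(950, 2))
    change = rng.normal(0, 1, size=(50, 2))             # decorrelated pixels stand in for change
    pixels = np.vstack([no_change, change])

    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
    scores = kpca.fit_transform(pixels)
    # Rank pixels by their magnitude on the second component as a change indicator.
    print(np.argsort(-np.abs(scores[:, 1]))[:10])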

  17. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  19. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  20. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  1. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

    Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medical research because they enable massively-parallel assays and simultaneous monitoring of thousands of gene expression levels in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information; this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a nonlinear dimensionality reduction method based on kernel locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbours algorithm, which denoises datasets, is introduced as a replacement for the classical LLE KNN algorithm. In addition, a kernel-based support vector machine (SVM) is used to classify genomic microarray data sets. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
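
    As a simplified stand-in for the pipeline described above (standard LLE in place of the kernel-based LLE, and without the fuzzy-KNN denoising step), the sketch below reduces a synthetic "expression matrix" with locally linear embedding and classifies the embedded samples with an RBF-kernel SVM.

    # Sketch: LLE dimensionality reduction followed by an RBF-kernel SVM classifier.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    X = rng.normal(size=(120, 2000))                 # 120 samples x 2000 genes (synthetic)
    y = (X[:, :20].mean(axis=1) > 0).astype(int)     # synthetic class labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    lle = LocallyLinearEmbedding(n_neighbors=10, n_components=5)
    Z_tr = lle.fit_transform(X_tr)
    Z_te = lle.transform(X_te)

    clf = SVC(kernel="rbf", gamma="scale").fit(Z_tr, y_tr)
    print(clf.score(Z_te, y_te))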

  2. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  3. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  4. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  5. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of the sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  6. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  7. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable having its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
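
    The sketch below implements one common formulation of such a clinical kernel, averaging per-variable similarities of the form (range - |difference|)/range for numeric variables and an equality indicator for categorical ones; it is meant as an illustration of the idea, not a verbatim reproduction of the paper's definition, and the toy patient table is invented.

    # Sketch: clinical-style kernel averaging per-variable similarities.
    import numpy as np

    def clinical_kernel(X, Y, is_categorical, ranges):
        K = np.zeros((len(X), len(Y)))
        for j in range(X.shape[1]):
            xj = X[:, j][:, None]
            yj = Y[:, j][None, :]
            if is_categorical[j]:
                K += (xj == yj).astype(float)               # match/mismatch indicator
            else:
                K += (ranges[j] - np.abs(xj - yj)) / ranges[j]
        return K / X.shape[1]

    # Toy patient table: age (years), tumour size (mm), menopausal status (0/1).
    X = np.array([[45.0, 22.0, 0.0], [62.0, 35.0, 1.0], [51.0, 12.0, 1.0]])
    ranges = np.array([60.0, 80.0, 1.0])    # variable ranges taken from the training data
    print(clinical_kernel(X, X, is_categorical=[False, False, True], ranges=ranges))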

  8. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  9. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  10. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  11. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  12. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
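
    A crude stand-in for the metric adaptation described above is sketched below: Nadaraya-Watson regression with per-dimension bandwidths chosen by leave-one-out cross-validation over a small grid, rather than by the gradient-based optimisation of the paper; data and bandwidth grids are synthetic.

    # Sketch: per-dimension bandwidth selection for Nadaraya-Watson regression via
    # leave-one-out cross-validation over a grid.
    import numpy as np
    from itertools import product

    def loo_error(X, y, h):
        d2 = ((X[:, None, :] - X[None, :, :]) / h) ** 2
        w = np.exp(-0.5 * d2.sum(axis=2))
        np.fill_diagonal(w, 0.0)                  # leave each point out of its own estimate
        pred = (w @ y) / w.sum(axis=1)
        return np.mean((pred - y) ** 2)

    rng = np.random.default_rng(8)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # second input is irrelevant

    grid = [0.1, 0.3, 1.0, 3.0]
    best = min(product(grid, grid), key=lambda h: loo_error(X, y, np.array(h)))
    print(best)   # the irrelevant dimension should tend to get the larger bandwidth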

  13. Welding of titanium and nickel alloy by combination of explosive welding and spark plasma sintering technologies

    Energy Technology Data Exchange (ETDEWEB)

    Malyutina, Yu. N., E-mail: iuliiamaliutina@gmail.com; Bataev, A. A., E-mail: bataev@adm.nstu.ru; Shevtsova, L. I., E-mail: edeliya2010@mail.ru [Novosibirsk State Technical University, Novosibirsk, 630073 (Russian Federation); Mali, V. I., E-mail: vmali@mail.ru; Anisimov, A. G., E-mail: anis@hydro.nsc.ru [Lavrentyev Institute of Hydrodynamics SB RAS, Novosibirsk, 630090 (Russian Federation)

    2015-10-27

    The possibility of forming composite materials from titanium and nickel-based alloys using a combination of explosive welding and spark plasma sintering technologies was demonstrated in the current research. The use of an interlayer consisting of thin copper and tantalum plates makes it possible to eliminate contact between the metallurgically incompatible titanium and nickel, which are susceptible to the formation of intermetallic compounds during their interaction. In the subsequent spark plasma sintering process, bonding was obtained between titanium and the titanium alloy VT20 through a thin powder layer of pure titanium, which is distinguished by low defect density and a fine dispersed structure.

  14. Research of combustion in older generation spark-ignition engines in the condition of use leaded and unleaded petrol

    Directory of Open Access Journals (Sweden)

    Bulatović Željko M.

    2014-01-01

    Full Text Available This paper analyzes the potential problems in the operation of older-generation spark-ignition engines on petrol with a higher octane number (unleaded petrol BMB 95) than required (leaded petrol MB 86). Experimental tests on two different engines (STEYR-PUCH model 712 and GAZ 41), using piezoelectric pressure sensors integrated with the engine spark plugs, acceleration sensors (accelerometers) and a special electronic block connected to the distributor, show that the cumulative first and second theoretical phases of combustion last slightly longer when petrol of a higher octane number (BMB 95) is used than when the low-octane petrol MB 86 is used. For the new petrol (BMB 95), higher optimal ignition advance angles have been determined, with which better engine performance is achieved without the danger of combustion with detonation (also called knocking).

  15. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  16. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    Science.gov (United States)

    2013-01-01

    peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve system biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/. PMID:23497081

  17. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  18. The Results of a Randomized Control Trial Evaluation of the SPARK Literacy Program

    Science.gov (United States)

    Jones, Curtis J.; Christian, Michael; Rice, Andrew

    2016-01-01

    The purpose of this report is to present the results of a two-year randomized control trial evaluation of the SPARK literacy program. SPARK is an early grade literacy program developed by Boys & Girls Clubs of Greater Milwaukee. In 2010, SPARK was awarded an Investing in Innovation (i3) Department of Education grant to further develop the…

  19. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  20. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as a shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  1. Nanosecond repetitively pulsed discharges in air at atmospheric pressure-the spark regime

    International Nuclear Information System (INIS)

    Pai, David Z; Lacoste, Deanna A; Laux, Christophe O

    2010-01-01

    Nanosecond repetitively pulsed (NRP) spark discharges have been studied in atmospheric pressure air preheated to 1000 K. Measurements of spark initiation and stability, plasma dynamics, gas temperature and current-voltage characteristics of the spark regime are presented. Using 10 ns pulses applied repetitively at 30 kHz, we find that 2-400 pulses are required to initiate the spark, depending on the applied voltage. Furthermore, about 30-50 pulses are required for the spark discharge to reach steady state, following initiation. Based on space- and time-resolved optical emission spectroscopy, the spark discharge in steady state is found to ignite homogeneously in the discharge gap, without evidence of an initial streamer. Using measured emission from the N2 (C-B) 0-0 band, it is found that the gas temperature rises by several thousand Kelvin in the span of about 30 ns following the application of the high-voltage pulse. Current-voltage measurements show that up to 20-40 A of conduction current is generated, which corresponds to an electron number density of up to 10^15 cm^-3 towards the end of the high-voltage pulse. The discharge dynamics, gas temperature and electron number density are consistent with a streamer-less spark that develops homogeneously through avalanche ionization in volume. This occurs because the pre-ionization electron number density of about 10^11 cm^-3 produced by the high frequency train of pulses is above the critical density for streamer-less discharge development, which is shown to be about 10^8 cm^-3.

  2. Nanosecond repetitively pulsed discharges in air at atmospheric pressure—the spark regime

    Science.gov (United States)

    Pai, David Z.; Lacoste, Deanna A.; Laux, Christophe O.

    2010-12-01

    Nanosecond repetitively pulsed (NRP) spark discharges have been studied in atmospheric pressure air preheated to 1000 K. Measurements of spark initiation and stability, plasma dynamics, gas temperature and current-voltage characteristics of the spark regime are presented. Using 10 ns pulses applied repetitively at 30 kHz, we find that 2-400 pulses are required to initiate the spark, depending on the applied voltage. Furthermore, about 30-50 pulses are required for the spark discharge to reach steady state, following initiation. Based on space- and time-resolved optical emission spectroscopy, the spark discharge in steady state is found to ignite homogeneously in the discharge gap, without evidence of an initial streamer. Using measured emission from the N2 (C-B) 0-0 band, it is found that the gas temperature rises by several thousand Kelvin in the span of about 30 ns following the application of the high-voltage pulse. Current-voltage measurements show that up to 20-40 A of conduction current is generated, which corresponds to an electron number density of up to 10^15 cm^-3 towards the end of the high-voltage pulse. The discharge dynamics, gas temperature and electron number density are consistent with a streamer-less spark that develops homogeneously through avalanche ionization in volume. This occurs because the pre-ionization electron number density of about 10^11 cm^-3 produced by the high frequency train of pulses is above the critical density for streamer-less discharge development, which is shown to be about 10^8 cm^-3.

  3. Nanosecond repetitively pulsed discharges in air at atmospheric pressure-the spark regime

    Energy Technology Data Exchange (ETDEWEB)

    Pai, David Z; Lacoste, Deanna A; Laux, Christophe O [Laboratoire EM2C, CNRS UPR288, Ecole Centrale Paris, 92295 Chatenay-Malabry (France)

    2010-12-15

    Nanosecond repetitively pulsed (NRP) spark discharges have been studied in atmospheric pressure air preheated to 1000 K. Measurements of spark initiation and stability, plasma dynamics, gas temperature and current-voltage characteristics of the spark regime are presented. Using 10 ns pulses applied repetitively at 30 kHz, we find that 2-400 pulses are required to initiate the spark, depending on the applied voltage. Furthermore, about 30-50 pulses are required for the spark discharge to reach steady state, following initiation. Based on space- and time-resolved optical emission spectroscopy, the spark discharge in steady state is found to ignite homogeneously in the discharge gap, without evidence of an initial streamer. Using measured emission from the N2 (C-B) 0-0 band, it is found that the gas temperature rises by several thousand Kelvin in the span of about 30 ns following the application of the high-voltage pulse. Current-voltage measurements show that up to 20-40 A of conduction current is generated, which corresponds to an electron number density of up to 10^15 cm^-3 towards the end of the high-voltage pulse. The discharge dynamics, gas temperature and electron number density are consistent with a streamer-less spark that develops homogeneously through avalanche ionization in volume. This occurs because the pre-ionization electron number density of about 10^11 cm^-3 produced by the high frequency train of pulses is above the critical density for streamer-less discharge development, which is shown to be about 10^8 cm^-3.

  4. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  5. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  6. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří ; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  7. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. However, how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, both for prognosis and diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  8. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein contents were reduced for both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease, while total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  9. A Generalized Pyramid Matching Kernel for Human Action Recognition in Realistic Videos

    Directory of Open Access Journals (Sweden)

    Wenjun Zhang

    2013-10-01

    Full Text Available Human action recognition is an increasingly important research topic in the fields of video sensing, analysis and understanding. Owing to unconstrained sensing conditions, there are large intra-class variations and inter-class ambiguities in realistic videos, which hinder the improvement of recognition performance for recent vision-based action recognition systems. In this paper, we propose a generalized pyramid matching kernel (GPMK) for recognizing human actions in realistic videos, based on a multi-channel “bag of words” representation constructed from local spatial-temporal features of video clips. As an extension of the spatial-temporal pyramid matching (STPM) kernel, the GPMK leverages heterogeneous visual cues across multiple feature descriptor types and spatial-temporal grid granularity levels to build a valid similarity metric between two video clips for kernel-based classification. Instead of the predefined and fixed weights used in STPM, we present a simple yet effective method to compute adaptive channel weights for the GPMK based on kernel target alignment from training data. It incorporates prior knowledge and the data-driven information of different channels in a principled way. The experimental results on three challenging video datasets (i.e., Hollywood2, Youtube and HMDB51) validate the superiority of our GPMK w.r.t. the traditional STPM kernel for realistic human action recognition and show that it outperforms the state-of-the-art results in the literature.

  10. Multi-spark discharge system for preparation of nutritious water

    Science.gov (United States)

    Nakaso, Tetsushi; Harigai, Toru; Kusumawan, Sholihatta Aziz; Shimomura, Tomoya; Tanimoto, Tsuyoshi; Suda, Yoshiyuki; Takikawa, Hirofumi

    2018-01-01

    The nitrogen compound concentration in water is increased by atmospheric-pressure plasma discharge treatment. A rod-to-water electrode discharge treatment system using plasma discharge has been developed by our group to obtain water with a high concentration of nitrogen compounds, and this plasma-treated water improves the growth of chrysanthemum roots. However, it is difficult to apply the system to agriculture because the amount of treated water it produces is too small. In this study, a multi-spark discharge system (MSDS) equipped with multiple spark plugs is presented to obtain a large amount of plasma-treated water. The MSDS consists of inexpensive parts in order to reduce the cost of introducing the system into agriculture. To suppress the temperature increase of the spark plugs, the 9 spark plugs were divided into 3 groups, which were discharged in turn. Plasma-treated water with a NO3- concentration of 50 mg/L was prepared using the MSDS in 90 min, and the treatment efficiency was about 6 times higher than that of our previous system. It was confirmed that the NO2-, O3, and H2O2 concentrations in the water were also increased by treating the water with the MSDS.

  11. Detector for recoil nuclei stopping in the spark chamber gas

    International Nuclear Information System (INIS)

    Aleksanyan, A.S.; Asatiani, T.L.; Ivanov, V.I.; Mkrtchyan, G.G.; Pikhtelev, R.N.

    1974-01-01

    A detector consisting of a combination of a drift chamber and a wide-gap spark chamber, designed to detect recoil nuclei stopping in the spark chamber gas, is described. It is shown that, by using appropriate discrimination, the detector reliably detects recoil nuclei in the presence of intense electron and γ-quanta beams

  12. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  13. The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was studied. The heating was carried out in a bed-type sintering reactor. The sample consisted of UO_2 kernels obtained by reduction at 800 °C for 3 hours, with a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was placed in the sintering reactor, which was then evacuated and flushed with argon gas at 180 mmHg to remove air. After that, cooling water and argon gas were flowed continuously at a pressure of 5 mPa and a flow rate of 1.5 liters/minute. The reactor temperature was varied between 1200 and 1500 °C, with holding times of 1-4 hours. The resulting sintered UO_2 kernels were analyzed for density, porosity, diameter, sphericity, and specific surface area. The density was determined with a pycnometer using CCl_4, the porosity with the Haynes equation, the diameter and sphericity with a Dino-Lite microscope, and the specific surface area with a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered kernels. The best conditions were obtained at 1400 °C for 2 hours, which produced sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7%, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a solidification shrinkage of 22%. (author)

  14. Characterization of Brazilian mango kernel fat before and after gamma irradiation

    International Nuclear Information System (INIS)

    Aquino, Fabiana da Silva; Ramos, Clecio Souza; Aquino, Katia Aparecida da Silva

    2013-01-01

    Mangifera indica Linn (family Anacardiaceae) is a tree indigenous to India, whose unripe and ripe fruits (mangoes) are both widely used by the local population. After consumption or industrial processing of the fruits, considerable amounts of mango seeds are discarded as waste. The kernel inside the seed represents 45% to 75% of the seed and about 20% of the whole fruit, and the lipid composition of mango seed kernels has attracted the attention of researchers because of its unique physical and chemical characteristics. Our study showed that the mango kernel fat obtained by Soxhlet extraction with hexane had a solid consistency at ambient temperature (27 deg C) because it is rich in saturated acids. The fat content of the Mangifera indica seed was calculated to be 10%, comparable to that of commercial vegetable oils such as soybean (11-25%). One problem found in the storage of fats and oils is attack by microorganisms, which makes a sterilization process necessary. Samples of kernel fat were irradiated with gamma radiation (⁶⁰Co) at room temperature in an air atmosphere at 5 and 10 kGy (sterilization doses). The GC-MS data revealed the presence of four major fatty acids in the mango kernel sample examined and showed that the chemical profile of the sample was not altered after irradiation. Moreover, proton nuclear magnetic resonance (¹H NMR) analysis was used to obtain the mango kernel fat parameters before and after gamma irradiation. The interpretation of the ¹H NMR data indicated that there are significant differences in the acidity and saponification indexes of the fat. However, a 14% increase in the iodine index of the fat was found after irradiation. This result means that some double bonds were formed during the irradiation of the fat. (author)

  15. Characterization of Brazilian mango kernel fat before and after gamma irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Aquino, Fabiana da Silva; Ramos, Clecio Souza, E-mail: fasiaquino@yahoo.com.br, E-mail: clecio@dcm.ufrpe.br [Universidade Federal Rural de Pernambuco (UFRPE), Recife, PE (Brazil); Aquino, Katia Aparecida da Silva, E-mail: aquino@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2013-07-01

    Mangifera indica Linn (family Anacardiaceae) is a tree indigenous to India, whose unripe and ripe fruits (mangoes) are both widely used by the local population. After consumption or industrial processing of the fruits, considerable amounts of mango seeds are discarded as waste. The kernel inside the seed represents 45% to 75% of the seed and about 20% of the whole fruit, and the lipid composition of mango seed kernels has attracted the attention of researchers because of its unique physical and chemical characteristics. Our study showed that the mango kernel fat obtained by Soxhlet extraction with hexane had a solid consistency at ambient temperature (27 deg C) because it is rich in saturated acids. The fat content of the Mangifera indica seed was calculated to be 10%, comparable to that of commercial vegetable oils such as soybean (11-25%). One problem found in the storage of fats and oils is attack by microorganisms, which makes a sterilization process necessary. Samples of kernel fat were irradiated with gamma radiation (⁶⁰Co) at room temperature in an air atmosphere at 5 and 10 kGy (sterilization doses). The GC-MS data revealed the presence of four major fatty acids in the mango kernel sample examined and showed that the chemical profile of the sample was not altered after irradiation. Moreover, proton nuclear magnetic resonance (¹H NMR) analysis was used to obtain the mango kernel fat parameters before and after gamma irradiation. The interpretation of the ¹H NMR data indicated that there are significant differences in the acidity and saponification indexes of the fat. However, a 14% increase in the iodine index of the fat was found after irradiation. This result means that some double bonds were formed during the irradiation of the fat. (author)

  16. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents opportunities for improving the quality of RTM images.

  17. 2-Methylfuran: A bio-derived octane booster for spark-ignition engines

    KAUST Repository

    Sarathy, Mani; Shankar, Vijai; Tripathi, Rupali; Pitsch, Heinz; Sarathy, Mani

    2018-01-01

    The efficiency of spark-ignition engines is limited by the phenomenon of knock, which is caused by auto-ignition of the fuel-air mixture ahead of the spark-initiated flame front. The resistance of a fuel to knock is quantified by its octane index

  18. A note on preserving the spark of a matrix

    Directory of Open Access Journals (Sweden)

    Marcin Skrzynski

    2015-05-01

    Full Text Available Let M_{m×n}(F) be the vector space of all m×n matrices over a field F. In the case where m ≥ n, char(F) ≠ 2 and F has at least five elements, we give a complete characterization of linear maps Φ : M_{m×n}(F) → M_{m×n}(F) such that spark(Φ(A)) = spark(A) for any A ∈ M_{m×n}(F).
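
    For readers unfamiliar with the term, the spark of a matrix is the smallest number of its columns that are linearly dependent. The brute-force sketch below (exponential in the number of columns, so only for tiny examples) makes the definition concrete; it is an illustration, not part of the paper.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (infinity if none exist)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return np.inf  # full column rank: no dependent subset of columns

A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 1.0]])
# spark(A) == 3: no column is zero, no two columns are parallel,
# but any three columns of a 2-row matrix are linearly dependent.
print(spark(A))
```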

  19. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
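
    A minimal sketch of the underlying estimator may help: a realized kernel sums weighted realized autocovariances of high-frequency returns, which dampens the bias that market-microstructure noise induces in the plain realized variance. The Parzen weight function, the bandwidth H, and the synthetic price path below are illustrative choices, not the paper's recommended implementation (which also covers data cleaning, bandwidth selection and end-point treatment).

```python
import numpy as np

def parzen(x):
    """Parzen weight function used as the kernel k(.)."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realized_kernel(returns, H):
    """Realized kernel estimate of daily variance:
    K = gamma_0 + 2 * sum_{h=1}^{H} k(h / (H + 1)) * gamma_h,
    with gamma_h the h-th realized autocovariance of the returns."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    acc = np.dot(r, r)                       # gamma_0
    for h in range(1, H + 1):
        gamma_h = np.dot(r[h:], r[:n - h])
        acc += 2.0 * parzen(h / (H + 1)) * gamma_h
    return acc

# Synthetic example: efficient log-price plus i.i.d. microstructure noise.
rng = np.random.default_rng(1)
true_var = 1e-4                              # daily integrated variance
n = 23400                                    # one observation per second
p = np.cumsum(rng.normal(0, np.sqrt(true_var / n), n))
noisy = p + rng.normal(0, 5e-4, n)           # noisy observed log-prices
r = np.diff(noisy)
print("realized variance:", np.dot(r, r))    # badly biased by the noise
print("realized kernel  :", realized_kernel(r, H=100))
```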

  20. Experimental study of a spark-gap

    International Nuclear Information System (INIS)

    Bruzzone, H.; Moreno, C.; Vieytes, R.

    1990-01-01

    Some experimental results concerning the resistance of an atmospheric-pressure spark-gap operating in the self-breakdown regime are presented. The influence of the energy discharged through the gap on this resistance is discussed. (Author)

  1. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  2. On Convergence of Kernel Density Estimates in Particle Filtering

    Czech Academy of Sciences Publication Activity Database

    Coufal, David

    2016-01-01

    Vol. 52, No. 5 (2016), pp. 735-756. ISSN 0023-5954. Grants (others): GA ČR (CZ) GA16-03708S; SVV (CZ) 260334/2016. Institutional support: RVO:67985807. Keywords: Fourier analysis; kernel methods; particle filter. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.379, year: 2016

  3. Measurements of some parameters of thermal sparks with respect to their ability to ignite aviation fuel/air mixtures

    Science.gov (United States)

    Haigh, S. J.; Hardwick, C. J.; Baldwin, R. E.

    1991-01-01

    A method used to generate thermal sparks for experimental purposes, and the methods by which parameters of the sparks such as speed, size and temperature were measured, are described. Values are given for the range of these parameters within the spark showers. Titanium sparks were used almost exclusively, since particles of this metal are found to be ejected during simulation tests on carbon fiber composite (CFC) joints. Tests were then carried out in which titanium sparks and spark showers were injected into JP4 (AVTAG F40)/air mixtures. Single large sparks and dense showers of small sparks were found to be capable of causing ignition. The tests were then repeated using ethylene/air mixtures, which were found to be more easily ignited by thermal sparks than the JP4/air mixtures.

  4. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behaviour. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here defines the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and has a quick response while operating in an application system.

  5. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. The algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that, by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
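
    The following sketch conveys the flavour of kernel temporal-difference learning on a toy problem: the value function is represented as a kernel expansion over visited states, and each TD error adds (or reweights) a kernel unit. It is a simplified kernel TD(0) with a quantised dictionary, not the KTD(λ) algorithm of the paper, and the random-walk task, kernel width and learning rate are arbitrary choices.

```python
import numpy as np

class KernelTD:
    """Simplified kernel TD(0) value estimator with a quantised, growing dictionary."""

    def __init__(self, gamma=1.0, eta=0.1, sigma=0.5, tol=0.5):
        self.gamma, self.eta, self.sigma, self.tol = gamma, eta, sigma, tol
        self.centers, self.alphas = [], []

    def _k(self, s, c):
        d = np.asarray(s, float) - c
        return np.exp(-np.dot(d, d) / (2 * self.sigma**2))

    def value(self, s):
        return sum(a * self._k(s, c) for a, c in zip(self.alphas, self.centers))

    def update(self, s, r, s_next, done):
        target = r + (0.0 if done else self.gamma * self.value(s_next))
        delta = target - self.value(s)                      # TD error
        s = np.asarray(s, float)
        if self.centers:                                    # reuse the nearest centre if close
            dists = [np.linalg.norm(s - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] < self.tol:
                self.alphas[j] += self.eta * delta
                return
        self.centers.append(s)                              # otherwise grow the dictionary
        self.alphas.append(self.eta * delta)

# Toy task: 5-state random walk, reward 1 on reaching the right terminal state.
rng = np.random.default_rng(0)
ktd = KernelTD()
for _ in range(2000):
    s = 3
    while True:
        s_next = s + int(rng.choice([-1, 1]))
        done = s_next in (0, 6)
        ktd.update([s], 1.0 if s_next == 6 else 0.0, [s_next], done)
        if done:
            break
        s = s_next
# Exact values for this walk are 1/6, 2/6, ..., 5/6.
print([round(ktd.value([s]), 2) for s in range(1, 6)])
```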

  6. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^(i) and ε_o^(i) on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^(i). Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  7. Genetic, Genomic, and Breeding Approaches to Further Explore Kernel Composition Traits and Grain Yield in Maize

    Science.gov (United States)

    Da Silva, Helena Sofia Pereira

    2009-01-01

    Maize ("Zea mays L.") is a model species well suited for the dissection of complex traits which are often of commercial value. The purpose of this research was to gain a deeper understanding of the genetic control of maize kernel composition traits starch, protein, and oil concentration, and also kernel weight and grain yield. Germplasm with…

  8. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  9. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations were found between the kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
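
    As a small illustration of the final step, the sketch below fits a multiple linear regression predicting average particle size from a few kernel properties named in the abstract (hardness index, vitreousness, ash content). The data, value ranges and coefficients are entirely synthetic; the published model and its predictors are in the paper itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 19 cultivars: hardness index, vitreousness (%), ash (%).
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(20, 90, 19),     # single-kernel hardness index
    rng.uniform(30, 95, 19),     # vitreousness
    rng.uniform(1.5, 2.2, 19),   # ash content
])
# Invented linear relationship plus noise, purely for illustration.
y = 0.15 + 0.002 * X[:, 0] + 0.001 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.01, 19)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
```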

  10. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.

    Science.gov (United States)

    Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei

    2016-05-09

    Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents, which cost $500 billion. Drunk drivers are involved in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a person's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, there appears to be no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. On this basis, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of the ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of the kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.

  11. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Chung Kit Wu

    2016-05-01

    Full Text Available Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents, which cost $500 billion. Drunk drivers are involved in 40% of traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. The electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a person's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, there appears to be no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. On this basis, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of the ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of the kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
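
    A minimal sketch of the core idea, a feature-weighted RBF kernel passed to a standard SVM, is given below. The synthetic data, the ten stand-in "ECG features", the weight vector and the kernel width are all invented for illustration; they are not the letter's measured features or learned weights.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for 10 ECG-derived features per driver window.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)               # pretend label: drunk / sober

w = np.array([3, 1, 1, 2, 1, 1, 1, 1, 1, 1], dtype=float)   # invented feature weights

def weighted_rbf(A, B, weights=w, gamma=0.1):
    """RBF kernel on feature-weighted inputs: k(x, y) = exp(-gamma * ||sqrt(w)*(x - y)||^2)."""
    Aw = A * np.sqrt(weights)
    Bw = B * np.sqrt(weights)
    sq = ((Aw[:, None, :] - Bw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

clf = SVC(kernel=weighted_rbf).fit(X, y)
print("training accuracy:", clf.score(X, y))
```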

  12. Spark PRM: Using RRTs within PRMs to efficiently explore narrow passages

    KAUST Repository

    Shi, Kensen; Denny, Jory; Amato, Nancy M.

    2014-01-01

    © 2014 IEEE. Probabilistic RoadMaps (PRMs) have been successful for many high-dimensional motion planning problems. However, they encounter difficulties when mapping narrow passages. While many PRM sampling methods have been proposed to increase the proportion of samples within narrow passages, such difficult planning areas still pose many challenges. We introduce a novel algorithm, Spark PRM, that sparks the growth of Rapidly-expanding Random Trees (RRTs) from narrow passage samples generated by a PRM. The RRT rapidly generates further narrow passage samples, ideally until the passage is fully mapped. After reaching a terminating condition, the tree stops growing and is added to the roadmap. Spark PRM is a general method that can be applied to all PRM variants. We study the benefits of Spark PRM with a variety of sampling strategies in a wide array of environments. We show significant speedups in computation time over RRT, Sampling-based Roadmap of Trees (SRT), and various PRM variants.

  13. Spark PRM: Using RRTs within PRMs to efficiently explore narrow passages

    KAUST Repository

    Shi, Kensen

    2014-05-01

    © 2014 IEEE. Probabilistic RoadMaps (PRMs) have been successful for many high-dimensional motion planning problems. However, they encounter difficulties when mapping narrow passages. While many PRM sampling methods have been proposed to increase the proportion of samples within narrow passages, such difficult planning areas still pose many challenges. We introduce a novel algorithm, Spark PRM, that sparks the growth of Rapidly-expanding Random Trees (RRTs) from narrow passage samples generated by a PRM. The RRT rapidly generates further narrow passage samples, ideally until the passage is fully mapped. After reaching a terminating condition, the tree stops growing and is added to the roadmap. Spark PRM is a general method that can be applied to all PRM variants. We study the benefits of Spark PRM with a variety of sampling strategies in a wide array of environments. We show significant speedups in computation time over RRT, Sampling-based Roadmap of Trees (SRT), and various PRM variants.

  14. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  15. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
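
    The paper adapts full scaling and rotation of the input space through an exponential-map parameterization; the sketch below shows only the simplest special case of the idea, gradient ascent on a single log-bandwidth of an RBF kernel, using kernel-target alignment as a stand-in objective and a finite-difference gradient. The data, objective and step size are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = np.where(np.linalg.norm(X, axis=1) > 1.2, 1.0, -1.0)    # toy labels

def alignment(log_gamma):
    """Kernel-target alignment of an RBF kernel with parameter gamma = exp(log_gamma)."""
    gamma = np.exp(log_gamma)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

log_gamma, step, h = 0.0, 0.5, 1e-4
for _ in range(50):
    grad = (alignment(log_gamma + h) - alignment(log_gamma - h)) / (2 * h)
    log_gamma += step * grad                                 # gradient ascent on the alignment
print("adapted gamma:", np.exp(log_gamma), "alignment:", alignment(log_gamma))
```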

  16. Phase characterisation in spark plasma sintered TiPt alloy

    CSIR Research Space (South Africa)

    Chikosha, S

    2011-12-01

    Full Text Available Presentation on phase characterisation in spark plasma sintered TiPt alloy. In spark plasma sintering, particles form “necks”; radiant Joule heat and pressure drive “neck” growth and material transfer. Objective: produce TiPt alloy compacts by spark plasma sintering (SPS) of equiatomic...

  17. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
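
    A toy version of kernel-weighted analog forecasting is sketched below: embed the history in delay coordinates, weight past windows by a Gaussian similarity kernel to the current window, and forecast as the weighted average of what followed each analog. The signal, embedding dimension and kernel width are arbitrary, and none of the paper's dynamics-adapted kernels or out-of-sample extension machinery is included.

```python
import numpy as np

def delay_embed(x, d):
    """Takens-style delay-coordinate embedding of a scalar series (rows are windows)."""
    return np.column_stack([x[i:len(x) - d + i + 1] for i in range(d)])

def analog_forecast(history, query, lead, d=4, eps=0.05):
    """Kernel-weighted analog forecast `lead` steps ahead of the query window."""
    E = delay_embed(history, d)
    # Keep only analogs whose future (lead steps ahead) is still inside the record.
    E_past, futures = E[:-lead], history[d - 1 + lead:]
    w = np.exp(-((E_past - query) ** 2).sum(1) / eps)        # similarity kernel weights
    return (w * futures).sum() / w.sum()

# Toy quasi-periodic signal; forecast 5 steps ahead from the last observed window.
t = np.arange(3000) * 0.05
x = np.sin(t) + 0.5 * np.sin(2.7 * t)
history, query_window = x[:-5], x[-9:-5]                     # d = 4 delay coordinates
print("forecast:", analog_forecast(history, query_window, lead=5))
print("actual  :", x[-1])
```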

  18. Effects of spark plug configuration on combustion and emission characteristics of a LPG fuelled lean burn SI engine

    Science.gov (United States)

    Ravi, K.; Khan, Manazir Ahmed; Pradeep Bhasker, J.; Porpatham, E.

    2017-11-01

    The introduction of technological innovations in automotive engines to reduce pollution and increase efficiency has been under consideration. Gaseous fuels have proved to be a promising way to reduce emissions in Spark Ignition (SI) engines. In particular, LPG has settled as a favourable fuel for SI engines because of its higher hydrogen-to-carbon ratio, octane rating and lower emissions. Wide ignition limits and efficient combustion characteristics make LPG suitable for lean-burn operation. However, lean combustion technology has certain drawbacks, such as poor flame propagation and cyclic variations. Based on copious research, it was found that the location, type and number of spark plugs significantly influence the reduction of cyclic variations. In this work, the influence of single and dual spark plugs of conventional and surface-discharge electrode types was analysed. The dual surface-discharge electrode spark plug enhanced the brake thermal efficiency and greatly reduced the cyclic variations. The experimental results show that the rates of heat release and pressure rise were higher and the combustion duration was shortened in this configuration. On the emissions front, NOx emissions increased whereas HC and CO emissions were reduced under lean conditions.

  19. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  20. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have a high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
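
    The flavour of the numerical evidence can be reproduced in a few lines: sample points on a sphere, build the geodesic Gaussian kernel from arc-length distances, and inspect the smallest eigenvalue of the Gram matrix over a range of bandwidths (non-negative eigenvalues indicate a positive semi-definite matrix for that sample). The sample size and bandwidth grid below are arbitrary choices, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random points on the unit sphere S^2; geodesic distance is the arc length.
P = rng.normal(size=(60, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
D = np.arccos(np.clip(P @ P.T, -1.0, 1.0))      # pairwise geodesic distances

for sigma in (0.1, 0.5, 1.0, 2.0):
    K = np.exp(-D**2 / (2 * sigma**2))           # geodesic exponential (Gaussian) kernel
    lam_min = np.linalg.eigvalsh(K).min()
    print(f"sigma = {sigma:4.1f}   smallest eigenvalue = {lam_min:+.2e}")
```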

  1. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  2. Development of 2024 AA-Yttrium composites by Spark Plasma Sintering

    Science.gov (United States)

    Vidyasagar, CH S.; Karunakar, D. B.

    2018-04-01

    The fabrication of MMNCs is quite a challenge and requires advanced processing techniques such as Spark Plasma Sintering (SPS). The objective of the present work is to fabricate aluminium-based MMNCs with the addition of small amounts of yttrium using Spark Plasma Sintering and to evaluate their mechanical and microstructural properties. Samples of 2024 AA with yttrium ranging from 0.1 to 0.5 wt% were fabricated by Spark Plasma Sintering (SPS). The hardness of the samples was determined using a Vickers hardness testing machine, and the metallurgical characterization of the samples was evaluated by Optical Microscopy (OM) and Field Emission Scanning Electron Microscopy (FE-SEM). An unreinforced 2024 AA sample was also fabricated as a benchmark against which to compare the properties of the developed composites. It was found that the yttrium addition improves the above-mentioned properties by altering the precipitation kinetics and intermetallic formation to some extent, and that the properties then decrease gradually when the yttrium content increases beyond 0.3 wt%. High density (up to 99.75%) was achieved in the spark plasma sintered samples, the highest hardness achieved was 114 HV, and a uniform distribution of yttrium was observed.

  3. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
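
    To illustrate the edge-preservation point on a toy signal, the sketch below compares a plain (unilateral) Gaussian kernel smoother with a bilateral filter, whose weights also depend on closeness in signal value, on a noisy step. The signal, kernel widths and the simple O(n²) implementation are illustrative choices and not the authors' reactivity-estimation pipeline.

```python
import numpy as np

def kernel_smooth(y, sigma_t=5.0):
    """Plain (unilateral) Gaussian kernel smoother over the time axis."""
    t = np.arange(len(y))
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma_t) ** 2)
    return (W * y).sum(1) / W.sum(1)

def bilateral_filter(y, sigma_t=5.0, sigma_r=0.3):
    """Bilateral filter: weights combine closeness in time and closeness in signal value."""
    t = np.arange(len(y))
    Wt = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma_t) ** 2)
    Wr = np.exp(-0.5 * ((y[:, None] - y[None, :]) / sigma_r) ** 2)
    W = Wt * Wr
    return (W * y).sum(1) / W.sum(1)

# Synthetic detector-like signal: a sharp step of height 1.0 plus noise.
rng = np.random.default_rng(0)
y = np.concatenate([np.ones(200), 2.0 * np.ones(200)]) + rng.normal(0, 0.1, 400)
g, b = kernel_smooth(y), bilateral_filter(y)
print("step height after Gaussian smoothing :", abs(g[205] - g[195]))
print("step height after bilateral filtering:", abs(b[205] - b[195]))
```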

  4. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  6. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  7. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  8. Experimental evaluation of a spark-ignited engine using biogas as fuel

    Directory of Open Access Journals (Sweden)

    Juan Miguel Mantilla González

    2008-05-01

    Full Text Available Different CH4 and CO2 mixtures were used as fuel in this work; they were fed into a spark-ignited engine equipped with devices allowing spark advance, gas delivery and gas consumption to be measured. Engine bench tests revealed changes in the main operating parameters and emissions. The results showed that increasing the CO2 percentage in the mixture increased the spark advance angle, reduced maximum power and torque, and reduced exhaust emissions (by 90% in some cases when DAMA resolution 1015/2005 was applied). The main components to be considered when an engine of this type operates with gaseous fuel were also identified.

  9. Generation of Nanoparticles by Spark Discharge

    NARCIS (Netherlands)

    Salman Tabrizi, N.

    2009-01-01

    Spark discharge is a method for producing nanoparticles from conductive materials. Besides the general advantages of nanoparticle synthesis in the gas phase, the method offers additional advantages like simplicity, compactness and versatility. The synthesis process is continuous and is performed at

  10. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  11. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, in which one alternately solves SVMs in the dual and updates the kernel weights. Since dual and primal optimization can achieve the same aim, it is worth exploring how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    Science.gov (United States)

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755

  13. Spark Ignition LPG for Hydrogen Gas Combustion the Reduction Furnace ME-11 Process

    International Nuclear Information System (INIS)

    Achmad Suntoro

    2007-01-01

    A reverse-engineering method for an automatic LPG spark-ignition system to burn hydrogen gas in the reduction process of the ME-11 furnace has been successfully implemented using local materials. A qualitative study of the initial behaviour of the LPG flame system led to the idea of modifying the ME-11 reduction furnace by installing an automatic LPG spark-ignition system. The automatic spark-ignition system has been tested and proved to work well. (author)

  14. Fractional quantum integral operator with general kernels and applications

    Science.gov (United States)

    Babakhani, Azizollah; Neamaty, Abdolali; Yadollahzadeh, Milad; Agahi, Hamzeh

    In this paper, we first introduce the concept of fractional quantum integral with general kernels, which generalizes several types of fractional integrals known from the literature. Then we give more general versions of some integral inequalities for this operator, thus generalizing some previous results obtained by many researchers [2, 8, 25, 29, 30, 36].

  15. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  16. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  17. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  18. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  19. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
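
    As a point of reference for the denoising-by-pre-imaging idea above, the following is a minimal sketch using scikit-learn's standard kernel PCA with its built-in (unsupervised) pre-image estimation; it is not the semi-supervised variant of the record, and the toy signal, kernel width and regularization value are assumptions for illustration only.

```python
# Minimal kernel PCA denoising sketch (standard, unsupervised pre-imaging);
# not the semi-supervised method described in the record above.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 200))[:, None] @ np.ones((1, 20))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# RBF kernel PCA with a few leading components; fit_inverse_transform=True
# learns an approximate pre-image mapping back to input space.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True, alpha=0.1)
scores = kpca.fit_transform(noisy)
denoised = kpca.inverse_transform(scores)  # pre-images in input space
print("residual before:", np.linalg.norm(noisy - clean))
print("residual after :", np.linalg.norm(denoised - clean))
```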

  20. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  1. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% compared with the linear kernel function, and by up to 1% compared with a 3rd-degree polynomial kernel function.
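
    For readers who want to reproduce the kind of kernel comparison described above, the sketch below contrasts linear, polynomial and RBF kernels with an SVM on synthetic features; the generated data merely stands in for the multi-temporal polarimetric features and is not the UAVSAR dataset used in the study.

```python
# Hedged sketch: comparing linear, polynomial and RBF SVM kernels on
# synthetic stand-in features (not the UAVSAR data of the record).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

for name, params in [("linear", dict(kernel="linear")),
                     ("poly-3", dict(kernel="poly", degree=3)),
                     ("rbf", dict(kernel="rbf", gamma="scale"))]:
    clf = make_pipeline(StandardScaler(), SVC(C=10.0, **params))
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold overall accuracy
    print(f"{name:7s} overall accuracy: {acc:.3f}")
```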

  2. A new spark detection system for the electrostatic septa of the SPS North (experimental) Area

    CERN Document Server

    Barlow, R A; Borburgh, J; Carlier, E; Chanavat, C; Fowler, T; Pinget, B

    2014-01-01

    Electrostatic septa (ZS) are used in the extraction of the particle beams from the CERN SPS to the North Area experimental zone. These septa employ high electric fields, generated from a 300 kV power supply, and are particularly prone to internal sparking around the cathode structure. This sparking degrades the electric field quality, consequently affecting the extracted beam, vacuum and equipment performance. To mitigate these effects, a Spark Detection System (SDS) has been realised, which is based on an industrial SIEMENS S7-400 programmable logic controller and deported Boolean processor modules interfaced through a PROFINET fieldbus. The SDS interlock logic uses a moving average spark rate count to determine if the ZS performance is acceptable. Below a certain spark rate it is probable that the ZS septa tank vacuum can recover, thus avoiding transition into a state where rapid degradation would occur. Above this level an interlock is raised and the high voltage is switched off. Additionally, all spark si...
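
    The moving-average spark-rate interlock logic described above runs on a Siemens PLC; purely as an illustration of the decision rule, a sketch in Python follows, with the window length and trip threshold chosen as hypothetical placeholders rather than CERN settings.

```python
# Illustrative sketch only (not the PLC implementation): a moving-average
# spark-rate interlock. Window length and trip threshold are hypothetical.
from collections import deque

class SparkRateInterlock:
    def __init__(self, window_s=10.0, trip_rate_hz=5.0):
        self.window_s = window_s          # averaging window in seconds
        self.trip_rate_hz = trip_rate_hz  # spark rate above which HV trips
        self.events = deque()             # timestamps of detected sparks

    def record_spark(self, t):
        self.events.append(t)

    def interlock_raised(self, now):
        # Drop sparks that have left the averaging window, then compare the
        # moving-average rate against the trip threshold.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        rate = len(self.events) / self.window_s
        return rate > self.trip_rate_hz

ilk = SparkRateInterlock()
for t in (0.1, 0.2, 0.3):
    ilk.record_spark(t)
print(ilk.interlock_raised(1.0))  # False: rate well below the trip threshold
```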

  3. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  4. Reproducing kernel method with Taylor expansion for linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Azizallah Alvandi

    2017-06-01

    Full Text Available This research aims to present a new, single algorithm for linear Volterra integro-differential equations (LIDEs). To apply the reproducing kernel Hilbert space method, an equivalent transformation is made by using Taylor series for solving LIDEs. The analytical solution is shown in series form in the reproducing kernel space, and the approximate solution $u_N$ is constructed by truncating the series to $N$ terms. It is easy to prove the convergence of $u_N$ to the analytical solution. The numerical solutions from the proposed method indicate that this approach can be implemented easily and shows attractive features.
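
    For orientation, a generic first-order linear Volterra integro-differential equation of the kind such methods target can be written as below; this is a schematic form under assumed initial conditions, and the exact equation class treated in the record may differ.

```latex
% Schematic form only; the record's exact equation class and conditions may differ.
\[
  u'(x) + p(x)\,u(x) = f(x) + \int_{0}^{x} k(x,t)\,u(t)\,dt,
  \qquad u(0) = u_0, \quad 0 \le x \le 1,
\]
% with the reproducing kernel solution expanded in series form and truncated
% to N terms to obtain the approximation u_N, as stated in the abstract.
```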

  5. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  6. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge, however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .

  7. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  8. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results compared with single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison with relevant state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy across different data sets.
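
    As background to the idea of combining sub-kernels, the following minimal sketch forms a fixed-weight sum of several Gram matrices and trains an SVM on the precomputed result; it is a plain linear combination for illustration, not the Adaboost-based KLMKB framework or the Kullback–Leibler kernel of the record, and the weights are assumed values.

```python
# Minimal sketch of combining multiple sub-kernels before training an SVM.
# Fixed weights, plain linear combination; not the KLMKB framework above.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
weights = [0.2, 0.3, 0.5]                 # assumed sub-kernel weights
kernels = [linear_kernel(X),
           polynomial_kernel(X, degree=2),
           rbf_kernel(X, gamma=0.5)]
K = sum(w * k for w, k in zip(weights, kernels))  # combined Gram matrix

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```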

  9. Comparison Algorithm Kernels on Support Vector Machine (SVM To Compare The Trend Curves with Curves Online Forex Trading

    Directory of Open Access Journals (Sweden)

    irfan abbas

    2017-01-01

    Full Text Available At present, forex traders generally still rely on exchange-rate figures from different sources, so they only know the rate prevailing at the moment, which makes it difficult to analyze or predict future exchange-rate movements. Traders usually use indicators, which are decision-making tools, to analyze and predict future values. Forex trading is the trading of one country's currency against another country's currency; it takes place globally between the world's financial centers, with the major banks of the world handling the largest transactions. Forex trading offers a potentially profitable investment with small capital and high returns, because the leverage built into forex trading systems multiplies the invested capital when a buy/sell prediction is accurate; however, it carries a high level of risk, and losses can only be avoided by knowing the right time to trade (buy or sell). Traders who invest in the foreign exchange market are therefore expected to be able to analyze market conditions and predict differences in currency exchange rates. Forex price movements form patterns (curves moving up and down) that greatly assist traders in making decisions, and the movement of the curve is used as an indicator in the decision to buy or sell. This study compares kernel types for the Support Vector Machine (SVM) in predicting the movement of the curve in live forex trading using GBPUSD data on the 1H timeframe. The results indicate that the Dot, Multiquadric and Neural kernels are unsuitable for the non-linear forex data when following the pattern of the trend curves, because the curves they generate are linear (straight)…

  10. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and demonstrate the usefulness of our approach.
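
    To make the idea of assembling specialized kernels concrete, a toy sketch follows in which an instance kernel is composed from a class-match kernel and a set-intersection kernel over property values; the attribute names and weights are hypothetical, and the record's actual kernel decomposition is not reproduced here.

```python
# Toy illustration only: composing specialized kernels for ontology instances
# (class-match kernel + set-intersection kernel over a property), assembled
# with assumed weights. Not the exact kernel set of the record.
def class_kernel(a, b):
    # Equality indicator on the instance class (a valid kernel).
    return 1.0 if a["class"] == b["class"] else 0.0

def property_kernel(a, b, prop):
    # Size of the intersection of property value sets (a valid kernel).
    return float(len(set(a[prop]) & set(b[prop])))

def instance_kernel(a, b, w_class=1.0, w_prop=0.5):
    # Non-negative weighted sum of valid kernels is again a valid kernel.
    return w_class * class_kernel(a, b) + w_prop * property_kernel(a, b, "topics")

x1 = {"class": "Person", "topics": {"ml", "semantics"}}
x2 = {"class": "Person", "topics": {"ml", "databases"}}
print(instance_kernel(x1, x2))  # 1.0 + 0.5 * 1 = 1.5
```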

  11. Turbulent spark-jet ignition in SI gas fuelled engine

    Directory of Open Access Journals (Sweden)

    Pielecha Ireneusz

    2017-01-01

    Full Text Available The article contains a thermodynamic analysis of a new combustion system that allows the combustion of stratified gas mixtures with mean air excess coefficient in the range 1.4-1.8. Spark ignition was used in the pre-chamber that has been mounted in the engine cylinder head and contained a rich mixture out of which a turbulent flow of ignited mixture is ejected. It allows spark-jet ignition and the turbulent combustion of the lean mixture in the main combustion chamber. This resulted in a two-stage combustion system for lean mixtures. The experimental study has been conducted using a single-cylinder test engine with a geometric compression ratio ε = 15.5 adapted for natural gas supply. The tests were performed at engine speed n = 2000 rpm under stationary engine load when the engine operating parameters and toxic compounds emissions have been recorded. Analysis of the results allowed to conclude that the evaluated combustion system offers large flexibility in the initiation of charge ignition through an appropriate control of the fuel quantities supplied into the pre-chamber and into the main combustion chamber. The research concluded with determining the charge ignition criterion for a suitably divided total fuel dose fed to the cylinder.

  12. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays

  13. Spark counting technique of alpha tracks on an aluminium oxide film

    International Nuclear Information System (INIS)

    Morishima, Hiroshige; Koga, Taeko; Niwa, Takeo; Kawai, Hiroshi

    1984-01-01

    We have tried to use aluminium oxide film as a neutron detector film with a spark counter for neutron monitoring in the mixed field of neutron and gamma-rays near a reactor. The merits of this method are that (1) aluminium oxide is a good electric insulator, (2) any desired thickness of the film can be prepared, and (3) chemical etching of the thin film can be dispensed with. The relation between spark counts and the number of alpha-particles which entered the 1 μm thick aluminium oxide film was linear in the range of 10^5-10^7 alpha-particles. The sensitivity (ratio of the spark counts to the irradiated number of alpha-particles) was approximately 10^-3. (author)

  14. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and density of urban facility POIs are of great significance for infrastructure planning and urban spatial analysis. Kernel density estimation, which is usually used to express these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis and Voronoi-based methods) because it accounts for regional impact based on the first law of geography. However, traditional kernel density estimation is mainly based on Euclidean space, ignoring the fact that the service functions and interrelations of urban facilities operate over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model for network kernel density estimation, together with an extended model for the case of added constraints. The work also discusses the impact of the distance attenuation threshold and the height extreme on the representation of kernel density. A large-scale experiment on real data, analyzing different POI distribution patterns (random, sparse, regional-intensive and linear-intensive types), examines the spatial distribution characteristics, influencing factors and service functions of the city's POI infrastructure.
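
    As a baseline for the discussion above, the sketch below computes a planar (Euclidean) kernel density estimate over point coordinates; the network-constrained estimator proposed in the record would instead use shortest-path distances along the street network, and the coordinates and bandwidth here are assumed values.

```python
# Planar (Euclidean) kernel density estimate over hypothetical POI coordinates;
# the network-constrained estimator of the record would use path distances.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
poi_xy = rng.uniform(0, 1000, size=(300, 2))     # hypothetical POI coordinates (m)

kde = KernelDensity(kernel="gaussian", bandwidth=75.0).fit(poi_xy)

# Evaluate the density surface on a coarse grid.
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(gx.shape)  # score_samples returns log-density
print("peak density:", density.max())
```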

  15. The integral first collision kernel method for gamma-ray skyshine analysis [Skyshine; Gamma-ray; First collision kernel; Monte Carlo calculation]

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with a number of other commonly used codes. Our results were generally in good agreement with the others but required only a small fraction of the computation time needed by the Monte Carlo calculations. The scheme by which the IFCK method handles a variety of source collimation geometries is also presented in this study.

  16. Spark igniter having precious metal ground electrode inserts

    International Nuclear Information System (INIS)

    Ryan, N.A.

    1988-01-01

    This patent describes an igniter comprising a shell of a shell metal alloy which is resistant to spark erosion and corrosion, the shell having a firing end which terminates at its lower end in an annular ring, an insulator sealed within the metal shell and having a central bore and a surface extending inwardly toward the bore from the annular ring, a center electrode sealed within the bore of the insulator and having a firing end which is in spark gap relation with the annular ring of the shell and so positioned that a spark discharge between the firing end and the annular ring occurs along the inwardly extending surface of the insulator, and a plurality of oxidation and erosion resistant inserts, each of the inserts comprising a body of a metal selected from the group consisting of iridium, osmium, ruthenium, rhodium, platinum, and tungsten or an alloy or a ductile alloy of one of the foregoing metals, each of the bodies being embedded within a matching opening which extends from the exterior of the shell through the annular ring, being bonded to the shell

  17. [Significance of various implantate localizations of Sparks prostheses, experimental studies in rats].

    Science.gov (United States)

    Brieler, H S; Parwaresch, R; Thiede, A

    1976-01-01

    Our investigations show that Sparks prostheses after subcutaneous implantation are suitable for vascular grafting. At the end of the organization period the connective tissue becomes strong, and after the third and fourth weeks collagenous and elastic fibers can be seen. Ten weeks after s.c. implantation, collagenous fibers predominate. After this the Sparks prostheses can be used as a vascular graft. Intraperitoneal implantation, however, shows a histologically different picture with characteristic findings: only fat cells can be observed, a strong granulation tissue with elastic and collagenous fibers is not present. After intraperitoneal implantation Sparks prostheses are therefore unsuitable for vascular grafts.

  18. Analysis of Plant Breeding on Hadoop and Spark

    Directory of Open Access Journals (Sweden)

    Shuangxi Chen

    2016-01-01

    Full Text Available Analysis of crop breeding data is an important part of computer-assisted breeding techniques, which involve huge volumes of high-dimensional and largely unstructured data. We propose a crop breeding data analysis platform on Spark. The platform consists of the Hadoop distributed file system (HDFS) and a cluster built on in-memory iterative components. With this cluster, crop breeding big data analysis tasks are executed in parallel through the APIs provided by Spark. Experiments and tests on Indica and Japonica rice traits show that the plant breeding analysis platform can significantly improve the speed of big data analysis for breeding while reducing the workload of concurrent programming.
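
    To illustrate the kind of parallel trait analysis such a platform might run, the following hedged PySpark sketch aggregates a trait per variety; the HDFS path and column names are hypothetical, as the record does not specify its data schema.

```python
# Hedged sketch of a parallel trait analysis on Spark. The CSV path and the
# column names ("variety", "grain_length") are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("breeding-trait-analysis").getOrCreate()

traits = (spark.read.option("header", True).option("inferSchema", True)
          .csv("hdfs:///breeding/rice_traits.csv"))

# Per-variety summary statistics computed in parallel across the cluster.
summary = (traits.groupBy("variety")
                 .agg(F.mean("grain_length").alias("mean_grain_length"),
                      F.stddev("grain_length").alias("sd_grain_length"),
                      F.count("*").alias("n_plants")))
summary.orderBy(F.desc("mean_grain_length")).show(10)
spark.stop()
```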

  19. Development of laser-induced fluorescence for precombustion diagnostics in spark-ignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Neij, H.

    1998-11-01

    Motivated by a desire to understand and optimize combustion in spark-ignition (SI) engines, laser techniques have been developed for measurement of fuel and residual gas, respectively, in the precombustion mixture of an operating SI engine. The primary objective was to obtain two-dimensional, quantitative data in the vicinity of the spark gap at the time of ignition. A laser-induced fluorescence (LIF) technique was developed for fuel visualization in engine environments. Since the fluorescence signal from any commercial gasoline fuel would be unknown to its origin, with an unpredictable dependence on collisional partners, pressure and temperature, a non-fluorescent base fuel - isooctane - was used. For LIF detection, a fluorescent species was added to the fuel. An additive not commonly used in this context - 3-pentanone - was chosen based on its suitable vaporization characteristics and fluorescent properties. The LIF technique was applied to an optically accessible research engine. By calibration, the fluorescence signal from the additive was converted to fuel-to-air equivalence ratio ({phi}). The accuracy and precision of the acquired data were assessed. A statistical evaluation revealed that the spatially averaged equivalence ratio around the spark plug had a significant impact on the combustion event. The strong correlation between these two quantities suggested that the early combustion was sensitive to large-scale inhomogeneities in the precombustion mixture. A similar LIF technique, using acetone as a fluorescent additive in methane, was applied to a combustion cell for ion current evaluation. The local equivalence ratio around the spark gap at the time of ignition was extracted from LIF data. Useful relations were identified between different ion current parameters and the local equivalence ratio, although the impact of the flow field, the fuel type, and the electrode geometry were identified as areas for future research. A novel fuel - dimethyl ether (DME

  20. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
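
    For intuition, a real-valued kernel LMS sketch with a Gaussian kernel follows; the record's contribution is the quaternion-valued Quat-KLMS, which additionally requires a quaternion RKHS and the modified HR calculus, neither of which is reproduced here, and the step size and kernel width below are assumed values.

```python
# Real-valued kernel LMS sketch for intuition only; not the quaternion-valued
# Quat-KLMS of the record. Step size and kernel width are assumed values.
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def klms(inputs, desired, eta=0.2, sigma=0.5):
    centers, coeffs, errors = [], [], []
    for x, d in zip(inputs, desired):
        # Current prediction is a kernel expansion over stored centers.
        y_hat = sum(a * gaussian_kernel(c, x, sigma) for a, c in zip(coeffs, centers))
        e = d - y_hat
        centers.append(x)          # every sample becomes a new center
        coeffs.append(eta * e)     # LMS-style coefficient update
        errors.append(e)
    return np.array(errors)

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(400, 1))
d = np.sin(3 * x[:, 0]) + 0.05 * rng.standard_normal(400)  # nonlinear target
err = klms(x, d)
print("MSE first 50:", np.mean(err[:50] ** 2), "last 50:", np.mean(err[-50:] ** 2))
```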

  1. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
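
    For reference, Silverman's rule of thumb in its standard density-estimation form for a Gaussian kernel is sketched below; the record's adaptation of the rule to kernel equating may use different constants and inputs.

```python
# Silverman's rule of thumb for a Gaussian kernel, in its standard
# density-estimation form; the kernel-equating adaptation may differ.
import numpy as np

def silverman_bandwidth(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # interquartile range
    return 0.9 * min(s, iqr / 1.34) * n ** (-1 / 5)

scores = np.random.default_rng(3).normal(50, 10, size=2000)  # mock test scores
print("rule-of-thumb bandwidth:", silverman_bandwidth(scores))
```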

  2. Optimization of process parameters for spark plasma sintering of nano structured SAF 2205 composite

    Directory of Open Access Journals (Sweden)

    Samuel Ranti Oke

    2018-04-01

    Full Text Available This research optimized spark plasma sintering (SPS process parameters in terms of sintering temperature, holding time and heating rate for the development of a nano-structured duplex stainless steel (SAF 2205 grade reinforced with titanium nitride (TiN. The mixed powders were sintered using an automated spark plasma sintering machine (model HHPD-25, FCT GmbH, Germany. Characterization was performed using X-ray diffraction and scanning electron microscopy. Density and hardness of the composites were investigated. The XRD result showed the formation of FeN0.068. SEM/EDS revealed the presence of nano ranged particles of TiN segregated at the grain boundaries of the duplex matrix. A decrease in hardness and densification was observed when sintering temperature and heating rate were 1200 °C and 150 °C/min respectively. The optimum properties were obtained in composites sintered at 1150 °C for 15 min and 100 °C/min. The composite grades irrespective of the process parameters exhibited similar shrinkage behavior, which is characterized by three distinctive peaks, which is an indication of good densification phenomena. Keywords: Spark plasma sintering, Duplex stainless steel (SAF 2205, Titanium nitride (TiN, Microstructure, Density, Hardness

  3. RHAGOLETIS COMPLETA (DIPTERA; TEPHRITIDAE DISTRIBUTION, FLIGHT DYNAMICS AND INFLUENCE ON WALNUT KERNEL QUALITY IN THE CONTINENTAL CROATIA

    Directory of Open Access Journals (Sweden)

    Božena Barić

    2015-06-01

    Full Text Available The walnut husk fly (WHF, Rhagoletis completa Cresson 1929) is an invasive species spreading quickly and damaging walnuts in Croatia and neighbouring countries. We researched the distribution of this pest in the continental part of Croatia, its flight dynamics in Međimurje County and its influence on the quality of walnut kernels. CSALOMON®PALz traps were used for monitoring the spread and flight dynamics of R. completa. The weight and protein content of kernels and the presence of mycotoxin contamination were measured. The walnut husk fly was found in six counties (Istria County: pest reconfirmation; Zagreb County; the City of Zagreb; Varaždin County; Međimurje County; and Koprivnica-Križevci County). The presence of the fly was not confirmed at one site in Koprivnica-Križevci County (locality Ferdinandovac) or in the eastern part of Croatia (Vukovar-Srijem County: Vinkovci locality). The flight dynamics showed a rapid increase in the number of adults only a year after introduction into a new area. The weight of infested kernels was 5.81% lower compared with non-infested kernels. Protein content was 14.04% in infested kernels and 17.31% in non-infested kernels. There was no difference in mycotoxin levels. Additional research on mycotoxin levels in stored nuts, the ovipositional preferences of the walnut husk fly and protection measures against this pest is suggested.

  4. Practice and Exploration of New Rural Construction in West Bank of Taiwan Strait Led by Spark Science and Technology

    OpenAIRE

    Li, Chaocan

    2013-01-01

    Based on 26 years of practice and exploration of the Spark Program in Quanzhou, the main models of new rural construction on the west bank of the Taiwan Strait led by Spark science and technology, and their effects, are expounded. Six Spark Program systems were established, consisting of policy support and guidance, science and technology project leadership, experts’ intelligence support, Spark science and technology training, sci-tech information service and Spark Program demonstration. Five spark projects were...

  5. Research in organizational participation and cooperation

    DEFF Research Database (Denmark)

    Jeppesen, Hans Jeppe; Jønsson, Thomas; Rasmussen, Thomas

    2005-01-01

    This article discusses some different perspectives on organizational participation and presents conducted and ongoing research projects by the research unit SPARK at Department of Psychology, University of Aarhus.

  6. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
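
    The biexponential parametrization mentioned above can be illustrated with a simple least-squares fit; the synthetic "kernel" values in the sketch are placeholders and not Monte Carlo generated scatter point kernels.

```python
# Sketch of a biexponential parametrization fitted by least squares; the
# synthetic data below are placeholders, not Monte Carlo point kernels.
import numpy as np
from scipy.optimize import curve_fit

def biexponential(r, a1, b1, a2, b2):
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

r = np.linspace(0.5, 20.0, 60)                     # radial distance (cm)
true = biexponential(r, 0.8, 0.9, 0.2, 0.15)
data = true * (1 + 0.02 * np.random.default_rng(4).standard_normal(r.size))

params, _ = curve_fit(biexponential, r, data, p0=[1.0, 1.0, 0.1, 0.1])
print("fitted (a1, b1, a2, b2):", np.round(params, 3))
```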

  7. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
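
    The approximate linear dependence (ALD) test that underlies the sparsification step can be sketched as follows; the kernel width and novelty threshold are assumed values, and the full kernel ACD critic update of the record is not reproduced.

```python
# Sketch of the approximate linear dependence (ALD) test used to keep a
# kernel dictionary sparse; thresholds and the RBF width are assumed values.
import numpy as np

def rbf(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def build_dictionary(samples, nu=1e-2, sigma=0.5):
    dictionary = [samples[0]]
    for x in samples[1:]:
        k_vec = np.array([rbf(d, x, sigma) for d in dictionary])
        K = np.array([[rbf(di, dj, sigma) for dj in dictionary] for di in dictionary])
        # ALD residual: how poorly x is represented by the current dictionary.
        delta = rbf(x, x, sigma) - k_vec @ np.linalg.solve(K, k_vec)
        if delta > nu:
            dictionary.append(x)   # keep x only if it adds new information
    return dictionary

states = np.random.default_rng(5).uniform(-1, 1, size=(200, 2))
print("dictionary size:", len(build_dictionary(states)))
```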

  8. Apparatus and method for the spectrochemical analysis of liquids using the laser spark

    Science.gov (United States)

    Cremers, David A.; Radziemski, Leon J.; Loree, Thomas R.

    1990-01-01

    A method and apparatus for the qualitative and quantitative spectroscopic investigation of elements present in a liquid sample using the laser spark. A series of temporally closely spaced spark pairs is induced in the liquid sample utilizing pulsed electromagnetic radiation from a pair of lasers. The light pulses are not significantly absorbed by the sample so that the sparks occur inside of the liquid. The emitted light from the breakdown events is spectrally and temporally resolved, and the time period between the two laser pulses in each spark pair is adjusted to maximize the signal-to-noise ratio of the emitted signals. In comparison with the single pulse technique, a substantial reduction in the limits of detectability for many elements has been demonstrated. Narrowing of spectral features results in improved discrimination against interfering species.

  9. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.

  10. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  11. Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier

  12. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks are publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
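
    One geometry-inspired alternative to a convex linear combination is the matrix geometric mean of two kernel (Gram) matrices, sketched below; this is only one of several such means and is not claimed to be the exact operator set used in the record, and the small ridge term is an assumption added to keep both matrices positive definite.

```python
# Matrix geometric mean of two kernel (Gram) matrices: one geometry-inspired
# alternative to a convex linear combination. Illustration only; not the
# record's exact operator set. The ridge term is an assumption for stability.
import numpy as np
from scipy.linalg import sqrtm, inv
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

X = np.random.default_rng(6).standard_normal((80, 10))
ridge = 1e-3 * np.eye(len(X))                 # keep both Gram matrices SPD
K1 = rbf_kernel(X, gamma=0.1) + ridge
K2 = linear_kernel(X) + ridge

K1_half = sqrtm(K1)
K1_half_inv = inv(K1_half)
# Geometric mean: A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
K_geo = K1_half @ sqrtm(K1_half_inv @ K2 @ K1_half_inv) @ K1_half
K_geo = np.real((K_geo + K_geo.T) / 2)        # clean up numerical asymmetry
print("combined kernel shape:", K_geo.shape)
```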

  13. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets and the integrated analysis of multiple datasets obtained on the same samples has allowed to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology since produced datasets are often of heterogeneous types, with the need of developing generic methods to take their different specificities into account. We propose a multiple kernel framework that allows to integrate multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets, collected during the TARA Oceans expedition, was explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA in regards with the original data. Second, the multi-omics breast cancer datasets, provided by The Cancer Genome Atlas, is analysed using a kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method to improve the representation of the studied biological system. Proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.

  14. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  15. Reconstruction of data for an experiment using multi-gap spark chambers with six-camera optics

    International Nuclear Information System (INIS)

    Maybury, R.; Daley, H.M.

    1983-06-01

    A program has been developed to reconstruct spark positions in a pair of multi-gap optical spark chambers viewed by six cameras, which were used by a Rutherford Laboratory experiment. The procedure for correlating camera views to calculate spark positions is described. Calibration of the apparatus, and the application of time- and intensity-dependent corrections are discussed. (author)

  16. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in the nutrition and function in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that the proteins are mainly involved in biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). Almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  17. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  18. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  19. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidths selection. Using simulations studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.

  20. ClimateSpark: An in-memory distributed computing framework for big climate data analytics

    Science.gov (United States)

    Hu, Fei; Yang, Chaowei; Schnase, John L.; Duffy, Daniel Q.; Xu, Mengchao; Bowen, Michael K.; Lee, Tsengdar; Song, Weiwei

    2018-06-01

    The unprecedented growth of climate data creates new opportunities for climate studies, and yet big climate data pose a grand challenge to climatologists to efficiently manage and analyze big data. The complexity of climate data content and analytical algorithms increases the difficulty of implementing algorithms on high performance computing systems. This paper proposes an in-memory, distributed computing framework, ClimateSpark, to facilitate complex big data analytics and time-consuming computational tasks. Chunking data structure improves parallel I/O efficiency, while a spatiotemporal index is built for the chunks to avoid unnecessary data reading and preprocessing. An integrated, multi-dimensional, array-based data model (ClimateRDD) and ETL operations are developed to address big climate data variety by integrating the processing components of the climate data lifecycle. ClimateSpark utilizes Spark SQL and Apache Zeppelin to develop a web portal to facilitate the interaction among climatologists, climate data, analytic operations and computing resources (e.g., using SQL query and Scala/Python notebook). Experimental results show that ClimateSpark conducts different spatiotemporal data queries/analytics with high efficiency and data locality. ClimateSpark is easily adaptable to other big multiple-dimensional, array-based datasets in various geoscience domains.

  1. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with graphite matrix to make HTGR fuel element. The weight of fuel kernels in an element is generally measured by the chemical analysis or a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by the chemical analysis, the samples used in the analysis cannot be put again in the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced from the different geometric shape of test sample from that of reference sample. X-ray computed tomography (CT) is an alternative to measure the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or the number of kernels. Also, the weight of kernels was measured by extracting the kernels from a compact to review the result of the X-ray CT application

  3. CMS Analysis and Data Reduction with Apache Spark

    Energy Technology Data Exchange (ETDEWEB)

    Gutsche, Oliver [Fermilab; Canali, Luca [CERN; Cremer, Illia [Magnetic Corp., Waltham; Cremonesi, Matteo [Fermilab; Elmer, Peter [Princeton U.; Fisk, Ian [Flatiron Inst., New York; Girone, Maria [CERN; Jayatilaka, Bo [Fermilab; Kowalkowski, Jim [Fermilab; Khristenko, Viktor [CERN; Motesnitsalis, Evangelos [CERN; Pivarski, Jim [Princeton U.; Sehrish, Saba [Fermilab; Surdy, Kacper [CERN; Svyatkovskiy, Alexey [Princeton U.

    2017-10-31

    Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover these new tools are typically actively developed by large communities, often profiting of industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping in reducing the cost of ownership for the end-users. In this talk, we are presenting studies of using Apache Spark for end user data analysis. We are studying the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end-analysis up to the publication plot. Studying the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We are presenting the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. Studying the second thrust, we are presenting studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
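
    The "reduce centrally produced datasets to analysis ntuples" pattern discussed above can be sketched in PySpark as follows; the input path, column names and selection cuts are hypothetical and do not correspond to actual CMS data formats or analysis selections.

```python
# Hedged sketch of a Spark-based dataset reduction: filter events and keep a
# small set of columns. Path, column names and cuts are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-reduction").getOrCreate()

events = spark.read.parquet("hdfs:///experiment/events.parquet")

# Apply illustrative selection cuts, then keep only the analysis columns.
ntuple = (events
          .filter((F.col("missing_et") > 200.0) & (F.col("n_jets") >= 2))
          .select("run", "event", "missing_et", "n_jets", "lead_jet_pt"))

ntuple.write.mode("overwrite").parquet("hdfs:///experiment/reduced_ntuple.parquet")
spark.stop()
```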

  4. SparkJet characterizations in quiescent and supersonic flowfields

    Science.gov (United States)

    Emerick, T.; Ali, M. Y.; Foster, C.; Alvi, F. S.; Popkin, S.

    2014-12-01

    The aerodynamic community has studied active flow control actuators for some time, and developments have led to a wide variety of devices with various features and operating mechanisms. The design requirements for a practical actuator used for active flow control include reliable operation, requisite frequency and amplitude modulation capabilities, and a reasonable lifespan while maintaining minimal cost and design complexity. An active flow control device called the SparkJet actuator has been developed for high-speed flight control and incorporates no mechanical/moving parts, zero net mass flux capabilities and the ability to tune the operating frequency and momentum throughput. This actuator utilizes electrical power to deliver high-momentum flow with a very fast response time. The SparkJet actuator was characterized on the benchtop using a laser-based microschlieren visualization technique and maximum blast wave and jet front velocities of ~400 and ~310 m/s were, respectively, measured in the flowfield. An increase in jet front velocity from 240 to 310 m/s during subatmospheric (60 kPa) testing reveals that the actuator may have greater control authority at lower ambient pressures, which correspond to high-altitude flight conditions for air vehicles. A SparkJet array was integrated into a flat plate and tested in a Mach 1.5 crossflow. Phase-conditioned shadowgraph results revealed a maximum flow deflection angle of 5° created by the SparkJet 275 µs after the actuator was triggered in single-shot mode. Burst mode operation of frequencies up to 700 Hz revealed similar results during wind tunnel testing. Following these tests, the actuator trigger mechanism was improved and the ability of the actuator to be discharged in burst mode at a frequency of 1 kHz was achieved.

  5. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to handle efficiently the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray-direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels show maximum sensitivity for diving waves, which makes those parameters a relevant choice for wave-equation tomography. The δ parameter kernel shows zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration-velocity-analysis-based kernels are introduced to resolve the depth ambiguity using reflections and to compute sensitivity maps in the deeper parts of the model.

  6. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the accuracy and efficiency of FKI-FMM.
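
    A schematic way to write the kernel compression described above (the exact series used by FKI-FMM is not given in the abstract) is a truncated Fourier expansion of the translation-invariant kernel, which factorizes the source and target dependence and makes the multipole-to-local translation diagonal:

      K(x - y) \approx \sum_{|k| \le p} c_k \, e^{\, i k \cdot (x - y)}
                     = \sum_{|k| \le p} c_k \, e^{\, i k \cdot x} \, e^{-i k \cdot y},

    so that translating an expansion only rescales each Fourier coefficient, a diagonal operation in the coefficient basis.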

  7. A new spark detection system for the electrostatic septa of the SPS North (experimental) Area

    CERN Multimedia

    Barlow, R A; Borburgh, J; Carlier, E; Chanavat, C; Pinget, B

    2013-01-01

    Electrostatic septa (ZS) are used in the extraction of the particle beams from the CERN SPS to the North Area experimental zone. These septa employ high electric fields, generated from a 300 kV power supply, and are particularly prone to internal sparking around the cathode structure. This sparking degrades the electric field quality, consequently affecting the extracted beam, the vacuum and equipment performance. To mitigate these effects, a Spark Detection System (SDS) has been realised, based on an industrial SIEMENS S7-400 programmable logic controller and deported Boolean processor modules interfaced through a PROFINET fieldbus. The SDS interlock logic uses a moving-average spark-rate count to determine whether the ZS performance is acceptable. Below a certain spark rate it is probable that the ZS septa tank vacuum can recover, thus avoiding transition into a…
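
    A minimal sketch of the moving-average interlock logic described above; the window length, threshold and interlock action are assumptions for illustration, not the actual SDS settings.

      # Toy moving-average spark-rate interlock. Window and threshold are assumed
      # placeholders, not the SDS operational values.
      from collections import deque

      class SparkRateInterlock:
          def __init__(self, window_s: float = 10.0, max_sparks_per_s: float = 5.0):
              self.window_s = window_s
              self.max_rate = max_sparks_per_s
              self.spark_times = deque()

          def record_spark(self, t: float) -> bool:
              """Register a spark at time t (seconds); return True if the interlock trips."""
              self.spark_times.append(t)
              # Drop sparks that fell out of the averaging window.
              while self.spark_times and t - self.spark_times[0] > self.window_s:
                  self.spark_times.popleft()
              rate = len(self.spark_times) / self.window_s
              return rate > self.max_rate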

  8. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been widely studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime, due to a singularity introduced by the nature of the resummation, and we thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
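
    For orientation, a common way to resum a kernel known through fourth order in the diabatic coupling, K ≈ K^(2) + K^(4), is a [1/1] Padé approximant or an exponential (Landau-Zener-type) form. The expressions below are the generic forms of these two resummations, written here as an assumption about the schemes compared in the abstract rather than as quotations from the paper:

      K_{\text{Pad\'e}}(t) \approx \frac{K^{(2)}(t)}{1 - K^{(4)}(t)/K^{(2)}(t)}, \qquad
      K_{\text{exp}}(t) \approx K^{(2)}(t)\,\exp\!\left(\frac{K^{(4)}(t)}{K^{(2)}(t)}\right).

    The Padé form can diverge when K^(4)/K^(2) approaches 1, consistent with the singularity noted above, while the exponential form remains finite.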

  9. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  10. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for predicting complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging it in as the base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine-learning-based approach: we train a support vector machine (SVM) to discriminate interacting vs. non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
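
    As an illustration of the kernel combination described above, the sketch below plugs a normalized Min kernel into the Tensor Product Pairwise Kernel and feeds the resulting pairwise kernel matrix to an SVM with a precomputed kernel. The feature vectors, pair list and labels are synthetic placeholders, not the paper's data.

      # Sketch: plug a normalized Min kernel into the Tensor Product Pairwise Kernel
      # (TPPK) and train an SVM on protein pairs. Data below are synthetic placeholders.
      import numpy as np
      from sklearn.svm import SVC

      def min_kernel(x, y):
          return float(np.minimum(x, y).sum())

      def normalized_min_kernel(x, y):
          return min_kernel(x, y) / np.sqrt(min_kernel(x, x) * min_kernel(y, y))

      def tppk(pair_a, pair_b, base=normalized_min_kernel):
          # K((p1,p2),(q1,q2)) = k(p1,q1)k(p2,q2) + k(p1,q2)k(p2,q1)
          (p1, p2), (q1, q2) = pair_a, pair_b
          return base(p1, q1) * base(p2, q2) + base(p1, q2) * base(p2, q1)

      rng = np.random.default_rng(0)
      proteins = rng.random((10, 20))                      # 10 proteins, 20 features
      pairs = [(proteins[i], proteins[j]) for i in range(10) for j in range(i + 1, 10)]
      labels = rng.integers(0, 2, size=len(pairs))         # placeholder interaction labels

      gram = np.array([[tppk(a, b) for b in pairs] for a in pairs])
      clf = SVC(kernel="precomputed").fit(gram, labels)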

  11. 100 kV reliable accurately-synchronized spark gap

    International Nuclear Information System (INIS)

    Bosamykin, V.S.; Gerasimov, A.I.; Zenkov, D.I.

    1987-01-01

    A 100 kV three-electrode spark gap filled with a 40% SF6 + 60% N2 mixture at a pressure of ∼1 MPa is described. It shows a spread of Δt ≤ ±5 ns in operating time delay over 10⁴ triggerings at a commutation energy of 2.5 kJ, with 100% electric strength; at 10 kJ, Δt is less than ±10 ns for 10³ triggerings. The parallel connection of 16 groups, each consisting of 5 series-connected spark gaps with 100% electric strength, in the pulse-charging unit of an Arkadiev-Marx generator that has been in operation for several years has demonstrated their high efficiency; the mutual group spread is ≤ ±15 ns.

  12. Meter of dynamics of restoring the electrical strength of spark gaps

    International Nuclear Information System (INIS)

    Kuznetsov, E.A.; Kravchenko, S.A.; Yagnov, V.A.; Shipuk, I.Ya.

    1997-01-01

    A method for diagnosing the restoration dynamics of the electric strength of spark gaps, and an electric device for its realization, are described. The error in the electric-strength measurement caused by the breakdown current flowing through the electric probes or the contacts of the spark gap under investigation is reduced to a minimum by rapidly switching off the probe voltage when the breakdown current exceeds an established value (1 mA). 1 ref.

  13. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward-model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward-model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI, a topic for future investigations. The proposed dipole kernel can be incorporated into existing QSM routines in a straightforward manner. Copyright © 2018 Elsevier Inc. All rights reserved.
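
    For context, the continuous (Fourier-domain) unit dipole kernel that the proposed discrete kernel replaces is the standard D(k) = 1/3 − kz²/|k|². The sketch below builds it with NumPy and notes, as an assumption about the paper's approach, that a discrete variant substitutes discrete difference-operator frequency responses for the k² terms. Grid size and voxel spacing are placeholders.

      # Continuous (Fourier-domain) unit dipole kernel used in most QSM forward models:
      # D(k) = 1/3 - kz^2 / |k|^2. Grid size and spacing are placeholders.
      import numpy as np

      def continuous_dipole_kernel(shape=(64, 64, 64), voxel_size=(1.0, 1.0, 1.0)):
          axes = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
          kx, ky, kz = np.meshgrid(*axes, indexing="ij")
          k2 = kx**2 + ky**2 + kz**2
          with np.errstate(divide="ignore", invalid="ignore"):
              D = 1.0 / 3.0 - kz**2 / k2
          D[k2 == 0] = 0.0     # common convention at the k-space origin
          return D

      # A discrete-kernel variant (as described in the abstract) would replace kx^2,
      # ky^2, kz^2 with the frequency responses of discrete difference operators,
      # e.g. (2 - 2*cos(2*pi*f*dx)) / dx**2, reducing high-frequency aliasing.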

  14. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess the use of water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, when deriving a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not taken into account in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides considered in the simulation are Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison was performed for the dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results show that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; among the other soft tissues, the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  15. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to assess the use of water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, when deriving a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not taken into account in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides considered in the simulation are Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison was performed for the dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results show that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; among the other soft tissues, the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  16. Optimization of the acceptance of prebiotic beverage made from cashew nut kernels and passion fruit juice.

    Science.gov (United States)

    Rebouças, Marina Cabral; Rodrigues, Maria do Carmo Passos; Afonso, Marcos Rodrigues Amorim

    2014-07-01

    The aim of this research was to develop a prebiotic beverage from a hydrosoluble extract of broken cashew nut kernels and passion fruit juice, using response surface methodology to optimize the acceptance of its sensory attributes. A 2² central composite rotatable design was used, producing 9 formulations that were evaluated with different concentrations of hydrosoluble cashew nut kernel extract, passion fruit juice and oligofructose, plus 3% sugar. The use of response surface methodology to interpret the sensory data made it possible to obtain a formulation with satisfactory acceptance which met the criteria of bifidogenic action and use of hydrosoluble cashew nut kernels, by using 14% oligofructose and 33% passion fruit juice. As a result of this study, it was possible to obtain a new functional prebiotic product which combines the nutritional and functional properties of cashew nut kernels and oligofructose with the sensory properties of passion fruit juice in a beverage with satisfactory sensory acceptance. This new product emerges as an alternative for the industrial processing of broken cashew nut kernels, which have very low market value, enabling this sector to increase its profits. © 2014 Institute of Food Technologists®

  17. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  18. Electro-spark machining of cadmium antimonide

    International Nuclear Information System (INIS)

    Ivanovskij, V.N.; Stepakhina, K.A.

    1975-01-01

    Experimental data on the electrical erosion of a semiconductor material (cadmium antimonide) alloyed with tellurium are given. The potentialities and expediency of using the electric-spark method for cutting cadmium antimonide ingots with a resistivity of 1 ohm are discussed. Cutting has been carried out in distilled water and in air.

  19. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    Directory of Open Access Journals (Sweden)

    Ji Li

    2016-10-01

    Full Text Available A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function, RBF, kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  20. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose properties are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function, RBF, kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
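
    A minimal sketch of the hybrid kernel described above, mixing a local RBF kernel with a global polynomial kernel. The mixing weight, kernel hyper-parameters and the use of a simple convex combination are assumptions for illustration; in the paper such hyper-parameters are tuned by the chaotic ions motion algorithm.

      # Hybrid kernel: convex combination of a local RBF kernel and a global polynomial
      # kernel, of the kind used inside an LSSVM. Hyper-parameter values below are
      # placeholders that an optimizer would tune.
      import numpy as np

      def rbf_kernel(x, y, gamma=0.5):
          return np.exp(-gamma * np.sum((x - y) ** 2))

      def poly_kernel(x, y, degree=2, c0=1.0):
          return (np.dot(x, y) + c0) ** degree

      def hybrid_kernel(x, y, w=0.7):
          # w in [0, 1] balances local (RBF) and global (polynomial) behaviour.
          return w * rbf_kernel(x, y) + (1.0 - w) * poly_kernel(x, y)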

  1. Gas temperature of capacitance spark discharge in air

    International Nuclear Information System (INIS)

    Ono, Ryo; Nifuku, Masaharu; Fujiwara, Shuzo; Horiguchi, Sadashige; Oda, Tetsuji

    2005-01-01

    Capacitance spark discharge has been widely used for studying the ignition of flammable gas caused by electrostatic discharge. In the present study, the gas temperature of capacitance spark discharge is measured. The gas temperature is an important factor in understanding the electrostatic ignition process because it influences the reaction rate of ignition. Spark discharge is generated in air with a pulse duration shorter than 100 ns. The discharge energy is set to 0.03-1 mJ. The rotational and vibrational temperatures of the N2 molecule are measured using the emission spectrum of the N2 second positive system. The rotational and vibrational temperatures are estimated to be 500 and 5000 K, respectively, which are independent of the discharge energy. This result indicates that most of the electron energy is consumed in the excitation of vibrational levels of molecules rather than the heating of the gas. The gas temperature after discharge is also measured by laser-induced fluorescence of OH radicals. It is shown that the gas temperature increases after discharge and reaches approximately 1000 K at 3 μs after discharge. Then the temperature decreases at a rate in the range of 8-35 K/μs depending on the discharge energy.

  2. The Effects Foliar Application of Methanol at Different Growth Stages on Kernel Related Traits in Chickpea var. ILC 482

    Directory of Open Access Journals (Sweden)

    N. Naeimi,

    2013-12-01

    Full Text Available This research was conducted to evaluate the effects of foliar application of methanol on certain kernel-related traits at different growth stages of chickpea var. ILC 482, at the Research Station of the Faculty of Agriculture, Islamic Azad University, Tabriz Branch, in 2011. The study was conducted as a split-plot experiment based on a randomized complete block design with three replications. The main factor was the growth stage of methanol foliar application (vegetative, reproductive, or both stages), and the sub-factor was the methanol concentration of the foliar application (0 [control], 5, 10, 15, 20, 25 and 30%). Results showed that the interaction of application growth stage and methanol concentration was significant for grain number per plant, 100-kernel weight, grain yield, grain filling rate and harvest index. Foliar application of methanol at the reproductive stage decreased the kernel-related traits, whereas application at both growth stages had a positive effect on grain production and kernel-related traits; the positive effects on kernel number and 100-kernel weight were significant. The highest grain yield (2460 kg/ha) was obtained with the 20% methanol concentration applied at both growth stages, which increased grain yield by more than 13.5% compared with the control.

  3. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  4. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. This work therefore studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments with broken-kernel ratios of 0, 40, 150, 350 or 1000 g kg⁻¹. Rice samples were then cooked, and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the texture of the rice samples became significantly softer, and hardness was negatively correlated with the percentage of broken kernels in the rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture-migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  5. Contextual Weisfeiler-Lehman Graph Kernel For Malware Detection

    OpenAIRE

    Narayanan, Annamalai; Meng, Guozhu; Yang, Liu; Liu, Jinliang; Chen, Lihui

    2016-01-01

    In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reac...
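
    For background on the plain (non-contextual) Weisfeiler-Lehman subtree kernel that the proposed kernel builds on, a small sketch is given below: iteratively relabel each node with a hash of its own label and its neighbours' labels, then compare label histograms between graphs. The graph representation and iteration count are illustrative choices, not the paper's setup.

      # Plain Weisfeiler-Lehman subtree kernel sketch (the paper's "contextual" variant
      # adds context information on top of this). Graphs are dicts: node -> neighbours,
      # with a separate node-label dict; both are illustrative placeholders.
      from collections import Counter

      def wl_label_histogram(adj, labels, iterations=2):
          hist = Counter(labels.values())
          for _ in range(iterations):
              new_labels = {}
              for node, neighbours in adj.items():
                  signature = (labels[node], tuple(sorted(labels[n] for n in neighbours)))
                  new_labels[node] = hash(signature)
              labels = new_labels
              hist.update(labels.values())
          return hist

      def wl_kernel(adj1, labels1, adj2, labels2, iterations=2):
          h1 = wl_label_histogram(adj1, labels1, iterations)
          h2 = wl_label_histogram(adj2, labels2, iterations)
          # Inner product of label-count histograms accumulated over all iterations.
          return sum(h1[k] * h2[k] for k in h1.keys() & h2.keys())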

  6. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  7. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
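
    For reference, the realised-kernel form underlying this estimator is, written here from memory as a generic assumption rather than quoted from the paper, a weighted sum of realised autocovariances of the high-frequency return vectors x_j:

      K(X) = \sum_{h=-H}^{H} k\!\left(\frac{h}{H+1}\right) \Gamma_h, \qquad
      \Gamma_h = \sum_{j} x_j x_{j-h}^{\top} \ \ (h \ge 0), \quad \Gamma_{-h} = \Gamma_h^{\top},

    where k(\cdot) is a weight function whose choice governs the positive semi-definiteness and noise robustness claimed above.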

  8. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. The process consists in adding, to an aqueous solution of at least one metallic salt (particularly actinide nitrates), at least one chemical compound capable of releasing ammonia, and in dispersing the resulting solution drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelling reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably, an amine is used as the anion-extracting product. Additionally, an alcohol that causes partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr

  9. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication, an expression for the temperature-dependent double-differential ideal gas scattering kernel was derived for the case of scattering cross-sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and by the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross-section at 0 K). The new ideal gas kernel for a variable 0 K cross-section σ_s^0(E) leads to the correct Doppler-broadened cross-section σ_s^T(E) at temperature T.

  10. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  11. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  12. Biocompatibility assessment of spark plasma-sintered alumina-titanium cermets.

    Science.gov (United States)

    Guzman, Rodrigo; Fernandez-García, Elisa; Gutierrez-Gonzalez, Carlos F; Fernandez, Adolfo; Lopez-Lacomba, Jose Luis; Lopez-Esteban, Sonia

    2016-01-01

    Alumina-titanium materials (cermets) with enhanced mechanical properties have recently been developed. In this work, physical properties such as the electrical conductivity and the crystalline phases in the bulk material are evaluated. As these new cermets manufactured by spark plasma sintering may have potential applications in hard-tissue replacements, their biocompatibility needs to be evaluated. This research therefore studies the cytocompatibility of a novel alumina-titanium (25 vol.% Ti) cermet compared to its pure counterpart, spark-plasma-sintered alumina. The influence of the particular surface properties (chemical composition, roughness and wettability) on the pre-osteoblastic cell response is also analyzed. The electrical resistance of the material revealed that this cermet can be machined to any shape by electroerosion. The investigated specimens had a slightly undulated topography, with a roughness pattern of similar morphology in all orientations (isotropic roughness) and a sub-micrometric average roughness. Differences in skewness implied valley-like structures in the cermet and a predominance of peaks in the alumina. The cermet presented a higher surface hydrophilicity than alumina. Any cytotoxicity risk associated with the new materials or with the innovative manufacturing methodology was ruled out. The proliferation and early differentiation of osteoblasts were statistically improved on the composite. Thus, our results suggest that this new multifunctional cermet could improve current alumina-based biomedical devices for applications such as hip-joint replacements. © The Author(s) 2015.

  13. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  14. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problem because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended to this setting by considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification on a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
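
    A toy illustration of the low-rank kernel approximation underlying fastKM as described above; the actual algorithm details are in the paper, and the data, kernel choice and retained rank below are arbitrary placeholders.

      # Low-rank approximation of a kernel matrix via truncated eigendecomposition,
      # the kind of approximation fastKM uses to avoid estimating high-dimensional
      # nuisance parameters directly. Data, kernel and rank are placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      G = rng.random((200, 10))                          # 200 subjects, 10 variants
      K = G @ G.T                                        # linear genetic kernel

      eigvals, eigvecs = np.linalg.eigh(K)
      r = 5                                              # retained rank (placeholder)
      top = np.argsort(eigvals)[::-1][:r]
      K_lowrank = eigvecs[:, top] @ np.diag(eigvals[top]) @ eigvecs[:, top].T

      rel_err = np.linalg.norm(K - K_lowrank) / np.linalg.norm(K)
      print(f"relative Frobenius error of rank-{r} approximation: {rel_err:.3f}")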

  16. Experiences implementing the MPI standard on Sandia's lightweight kernels

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Greenberg, D.S.

    1997-10-01

    This technical report describes some lessons learned from implementing the Message Passing Interface (MPI) standard, and some proposed extensions to MPI, at Sandia. The implementations were developed using Sandia-developed lightweight kernels running on the Intel Paragon and Intel TeraFLOPS platforms. The motivations for this research are discussed, and a detailed analysis of several implementation issues is presented.

  17. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  18. Mapping and validation of major quantitative trait loci for kernel length in wild barley (Hordeum vulgare ssp. spontaneum).

    Science.gov (United States)

    Zhou, Hong; Liu, Shihang; Liu, Yujiao; Liu, Yaxi; You, Jing; Deng, Mei; Ma, Jian; Chen, Guangdeng; Wei, Yuming; Liu, Chunji; Zheng, Youliang

    2016-09-13

    Kernel length is an important target trait in barley (Hordeum vulgare L.) breeding programs. However, the number of known quantitative trait loci (QTLs) controlling kernel length is limited. In the present study, we aimed to identify major QTLs for kernel length, as well as putative candidate genes that might influence kernel length in wild barley. A recombinant inbred line (RIL) population derived from the barley cultivar Baudin (H. vulgare ssp. vulgare) and the long-kernel wild barley genotype Awcs276 (H. vulgare ssp. spontaneum) was evaluated at one location over three years. A high-density genetic linkage map was constructed using 1,832 genome-wide diversity array technology (DArT) markers, spanning a total of 927.07 cM with an average interval of approximately 0.49 cM. Two major QTLs for kernel length, LEN-3H and LEN-4H, were detected across environments and further validated in a second RIL population derived from Fleet (H. vulgare ssp. vulgare) and Awcs276. In addition, a systematic search of public databases identified four candidate genes and four categories of proteins related to LEN-3H and LEN-4H. This study establishes a fundamental research platform for genomic studies and marker-assisted selection, since LEN-3H and LEN-4H could be used for accelerating progress in barley breeding programs that aim to improve kernel length.

  19. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
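
    A schematic version of the parameter search described above: evaluate a model SNR over a grid of kernel and regularization parameters, keep the maximizer, and refine the grid around it. The snr_of function is a placeholder standing in for the model SNR of the regularized kernel MNF transform; grid ranges are arbitrary.

      # Coarse-to-fine grid search maximizing a model SNR over a kernel parameter and a
      # regularization parameter. `snr_of` is a placeholder for the model SNR of the
      # regularized kernel MNF transform evaluated on the data.
      import numpy as np

      def snr_of(sigma: float, reg: float) -> float:
          # Placeholder objective; in practice this would run regularized kernel MNF
          # and return the model signal-to-noise ratio.
          return -((np.log10(sigma) - 1.0) ** 2 + (np.log10(reg) + 3.0) ** 2)

      def grid_search(sigmas, regs):
          best = max((snr_of(s, r), s, r) for s in sigmas for r in regs)
          return best[1], best[2]

      sigmas = np.logspace(-2, 3, 11)
      regs = np.logspace(-6, 0, 13)
      for _ in range(3):                                 # three increasingly refined grids
          s_opt, r_opt = grid_search(sigmas, regs)
          sigmas = np.logspace(np.log10(s_opt) - 0.5, np.log10(s_opt) + 0.5, 11)
          regs = np.logspace(np.log10(r_opt) - 0.5, np.log10(r_opt) + 0.5, 11)
      print(s_opt, r_opt)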

  20. Improving the Tribological Properties of Spark-Anodized Titanium by Magnetron Sputtered Diamond-Like Carbon

    OpenAIRE

    Zhaoxiang Chen; Xipeng Ren; Limei Ren; Tengchao Wang; Xiaowen Qi; Yulin Yang

    2018-01-01

    Spark anodization of titanium can produce an adherent and wear-resistant TiO2 film on the surface, but spark-anodized titanium has numerous surface micro-pores, resulting in an unstable and high friction coefficient against many counterparts. In this study, diamond-like carbon (DLC) was introduced into the micro-pores of spark-anodized titanium by the magnetron sputtering technique and a TiO2/DLC composite coating was fabricated. The microstructure and tribological properties of the TiO2/DLC …

  1. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970

  2. Near wall combustion modeling in spark ignition engines. Part A: Flame–wall interaction

    International Nuclear Information System (INIS)

    Demesoukas, Sokratis; Caillol, Christian; Higelin, Pascal; Boiarciuc, Andrei; Floch, Alain

    2015-01-01

    Highlights: • A model for flame–wall interaction in addition to flame wrinkling by turbulence is proposed. • Two sparkplug positions and two lengths are used in a test engine for model validation. • Flame–wall interaction decreases the maximum values of cylinder pressure and heat release rates. • The impact of combustion chamber geometry is taken into account by the flame–wall interaction model. - Abstract: Research and design in the field of spark ignition engines seek to achieve high performance while maintaining fuel economy and low pollutant emissions. For the evaluation of various engine configurations, numerical simulations are favored, since they are quicker and less expensive than experiments. Various zero-dimensional combustion models are currently used. Both flame front reactions and post-flame processes contribute to the heat release rate. The first part of this study focuses on the role of the flame front in the heat release rate, by modeling the interaction of the flame front with the chamber wall. Post-flame reactions are dealt with in Part B of the study. The basic configurations of flame quenching in laminar flames are also applicable to turbulent flames, which is the case in spark ignition engines. A simplified geometric model of the combustion chamber was used to calculate the mean flame surface, the flame volume and the distribution of flame surface as a function of the distance from the wall. The flame–wall interaction took into account the geometry of the combustion chamber and of the flame, the aerodynamic turbulence and the in-cylinder pressure and temperature conditions, through a phenomenological attenuation function of the wrinkling factor. A modified global wrinkling factor was calculated as a function of the distribution of mean surface distance from the wall. The impact of flame–wall interaction was simulated for four configurations of sparkplug position and length: centered and lateral position, and standard and projected…

  3. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus-obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, and washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reaction comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors [fr

  4. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity, using the mathematical framework of τ-CRT (finite-temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and for the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  5. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters was unknown. A Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 unit and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG), and that the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
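
    A simplified sketch of the kernel-averaged continuum stress described above, using a uniform (top-hat) kernel of bandwidth h over per-atom stress contributions. The positions, per-atom stress tensors and bandwidth are synthetic placeholders; the paper's contribution is precisely the data-driven choice of the kernel shape and bandwidth.

      # Kernel-weighted average of per-atom stress tensors around a continuum point,
      # using a uniform kernel of bandwidth h. All inputs are synthetic placeholders.
      import numpy as np

      def continuum_stress(point, atom_positions, atom_stresses, h=2.0):
          """Average 3x3 per-atom stress tensors of atoms within distance h of `point`."""
          d = np.linalg.norm(atom_positions - point, axis=1)
          weights = (d <= h).astype(float)                # uniform (top-hat) kernel
          if weights.sum() == 0.0:
              return np.zeros((3, 3))
          weights /= weights.sum()
          return np.einsum("i,ijk->jk", weights, atom_stresses)

      rng = np.random.default_rng(2)
      positions = rng.uniform(0, 10, size=(500, 3))
      stresses = rng.normal(size=(500, 3, 3))
      print(continuum_stress(np.array([5.0, 5.0, 5.0]), positions, stresses))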

  7. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.

  8. Programmable spark counter of tracks

    International Nuclear Information System (INIS)

    Denisov, A.E.; Nikolaev, V.A.; Vorobjev, I.B.

    2005-01-01

    For the purpose, a new set-the programmable all-automatic spark counter AIST-4-has been developed and manufactured. Compared to our previous automated spark counter ISTRA, which was operated by the integrated fixed program, the new set is operated completely by a personal computer. The mechanism for pressing and pulling the aluminized foil is put into action by a step motor operated by a microcontroller. The step motor turns an axle. The axle has two eccentrics. One of them moves a pressing plate up and down. The second eccentric moves the aluminized foil by steps of ∼15mm after the end of each pulse counting. One turnover of the axle corresponds to one pulse count cycle. The step motor, the high-voltage block and the pulse count block are operated by the microcontroller PIC 16C84 (Microstar). The set can be operated either manually by keys on the front panel or by a PC using dialogue windows for radon or neutron measurements (for counting of alpha or fission fragment tracks). A number of algorithms are developed: the general procedures, the automatic stopping of the pulse counting, the calibration curve, determination of the count characteristics and elimination of the short circuit in a track

  9. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
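
    The underlying object can be made concrete with a small spectral computation (a generic sketch, not the paper's multi-resolution algorithm; the toy graph Laplacian below stands in for a mesh Laplacian):

    ```python
    import numpy as np

    # The heat kernel of a discrete surface/graph follows from the eigendecomposition
    # of its Laplacian L:  k_t(x, y) = sum_i exp(-lambda_i * t) * phi_i(x) * phi_i(y).

    # Toy combinatorial Laplacian of a 4-cycle graph (stand-in for a mesh Laplacian).
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A

    eigvals, eigvecs = np.linalg.eigh(L)        # lambda_i, phi_i (columns)

    def heat_kernel(t):
        """Dense heat kernel matrix K_t with entries k_t(x, y)."""
        return eigvecs @ np.diag(np.exp(-eigvals * t)) @ eigvecs.T

    print(heat_kernel(0.5))                     # rows/cols indexed by vertices
    ```

    The cost of the dense eigendecomposition is exactly what limits this naive approach to modest resolutions, which is the bottleneck the multi-resolution approximation above is designed to avoid.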

  10. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
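
    A hedged sketch of the construction the abstract describes (symbols are illustrative): reorder the features, treat a feature vector x as coefficients of a signal f_x expanded in compactly supported basis functions phi_i, and define the kernel as the L2 inner product of two such approximations,

    \[
    f_x(t) = \sum_i x_i\,\varphi_i(t), \qquad
    K(x,y) = \langle f_x, f_y\rangle_{L^2} = \sum_{i,j} x_i\, y_j \int \varphi_i(t)\,\varphi_j(t)\,dt .
    \]

    Since K(x,y) = x^T G y with G the positive semidefinite Gram matrix of basis overlaps, K is a valid support vector kernel, and the compact support of the basis functions makes G sparse or banded.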

  11. Effect of the SPARK Program on Physical Activity, Cardiorespiratory Endurance, and Motivation in Middle-School Students.

    Science.gov (United States)

    Fu, You; Gao, Zan; Hannon, James C; Burns, Ryan D; Brusseau, Timothy A

    2016-05-01

    This study aimed to examine the effect of a 9-week SPARK program on physical activity (PA), cardiorespiratory endurance (Progressive Aerobic Cardiovascular Endurance Run; PACER), and motivation in middle-school students. 174 students attended baseline and posttests and change scores computed for each outcome. A MANOVA was employed to examine change score differences using follow-up ANOVA and Bonferroni post hoc tests. MANOVA yielded a significant interaction for Grade × Gender × Group (Wilks's Λ = 0.89, P interactions with perceived competence differences between SPARK grades 6 and 8 (Mean Δ = 0.38, P < .05), Enjoyment differences between SPARK grades 6 and 7 (Mean Δ = 0.67, P < .001), and SPARK grades 6 and 8 (Mean Δ = 0.81, P < .001). Following the intervention, SPARK displayed greater increases on PA and motivation measures in younger students compared with the Traditional program.

  12. Osteoarthritis Severity Determination using Self Organizing Map Based Gabor Kernel

    Science.gov (United States)

    Anifah, L.; Purnomo, M. H.; Mengko, T. L. R.; Purnama, I. K. E.

    2018-02-01

    The number of osteoarthritis patients in Indonesia is enormous, so early intervention is needed to manage the disease. The aim of this paper is to determine osteoarthritis severity from X-ray image templates using a Gabor kernel. This research is divided into three stages: the first is image processing using the Gabor kernel, the second is the learning stage, and the third is the testing phase. The image processing stage normalizes the image dimensions to a 50 × 200 template. The learning stage uses an initial learning rate of 0.5 and a total of 1000 iterations. The testing stage is performed using the weights generated in the learning stage. The results show an accuracy of 36.21% for KL-Grade 0 and 40.52% for KL-Grade 1, while the accuracies for KL-Grade 2 and KL-Grade 3 are 15.52% and 25.86%, respectively. This research is expected to serve as a decision support system for medical practitioners in determining the KL-Grade of knee osteoarthritis X-ray images.
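
    As a concrete illustration of the Gabor-kernel preprocessing step (parameter values are assumptions for the sketch, not those used by the authors):

    ```python
    import numpy as np

    def gabor_kernel(size=31, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
        """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine carrier."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        x_t = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
        carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
        return envelope * carrier

    # Filtering a normalized 50 x 200 knee X-ray template would then be a 2-D
    # convolution of the image with kernels at several orientations theta.
    bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
    ```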

  13. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  14. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  15. THE EFFECT OF VARIABLE COMPRESSION RATIO ON FUEL CONSUMPTION IN SPARK IGNITION ENGINES

    Directory of Open Access Journals (Sweden)

    Yakup SEKMEN

    2002-02-01

    Full Text Available Due to the lack of energy sources in the world, we are obliged to use our current energy sources in the most efficient way. Therefore, in the automotive industry, research efforts to manufacture cars that are more economical in terms of fuel consumption and more environmentally friendly, while still satisfying the required performance, have been increasing intensively. Some positive results have been obtained by studies aimed at changing the compression ratio according to the operating conditions of the engine. In spark ignition engines, in order to improve the combustion efficiency, fuel economy and exhaust emissions at partial loads, the compression ratio must be increased; but, under high load and low speed conditions, to prevent probable knock and hard running, the compression ratio must be decreased slightly. In this paper, various research works on variable compression ratio in spark ignition engines and its effects on fuel economy, power output and thermal efficiency have been reviewed. According to the results of experiments performed with engines having variable compression ratio under partial and mid-load conditions, an increase in engine power, a decrease in fuel consumption (with fuel economy improvements of up to 30 percent, particularly at partial loads), and also severe reductions of some exhaust emission values were determined.

  16. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints. ... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designated to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel, and compare...

  17. Synthesis and characterization of aluminium–alumina micro- and nano-composites by spark plasma sintering

    International Nuclear Information System (INIS)

    Dash, K.; Chaira, D.; Ray, B.C.

    2013-01-01

    Graphical abstract: The evolution of microstructure obtained by varying the particle size of the reinforcement in the matrix, employing spark plasma sintering, has been demonstrated here in the Al–Al2O3 system. An emphasis has been laid on varying the reinforcement particle size and evaluating the microstructural morphologies and their implications on the mechanical performance of the composites. Nanocomposites of 0.5, 1, 3, 5 and 7 volume % alumina (average size ...) were prepared. Highlights: • Al–Al2O3 micro- and nano-composites fabricated by spark plasma sintering. • Better matrix–reinforcement integrity in nanocomposites than in microcomposites. • Spark plasma sintering results in higher density and hardness values. • Higher density and hardness values of nanocomposites than of microcomposites. • High dislocation density in spark plasma sintered Al–Al2O3 composites. - Abstract: In the present study, an emphasis has been laid on evaluation of the microstructural morphologies and their implications on the mechanical performance of the composites by varying the reinforcement particle size. Nanocomposites of 0.5, 1, 3, 5 and 7 volume % alumina (average size ...) ... nanocomposites, respectively. Spark plasma sintering imparts enhanced densification and matrix–reinforcement proximity, which have been corroborated with the experimental results

  18. Basic study of Eu2+-doped garnet ceramic scintillator produced by spark plasma sintering

    Czech Academy of Sciences Publication Activity Database

    Sugiyama, K.; Yanagida, T.; Fujimoto, Y.; Yokota, Y.; Ito, A.; Nikl, Martin; Goto, T.; Yoshikawa, A.

    2012-01-01

    Roč. 35, č. 2 (2012), s. 222-226 ISSN 0925-3467 R&D Projects: GA MŠk LH12150 Institutional research plan: CEZ:AV0Z10100521 Keywords : Eu 2+ 5d–4f transition * scintillator * spark plasma sintering Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.918, year: 2012

  19. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_{2n-1}, n = 1, 2, ...) or even (G_n = f_{2n}, n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n >= 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (>= 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a <= 0.05 at least. For calculations in the transient lethargy region, a mesh width of order epsilon/10 is small enough to evaluate the approximate collision density psi_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating by not more than 1% in collision density. (auth.)

  20. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005

  1. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of a more general interest and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  2. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  3. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    Science.gov (United States)

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  4. Possibility of surface carburization of refractory metals of electric spark alloying

    International Nuclear Information System (INIS)

    Verkhoturov, A.D.; Isaeva, L.P.; Timofeeva, I.I.; Tsyban', V.A.

    1981-01-01

    The paper is concerned with a study of alloying-layer formation under electric spark alloying of refractory (Ti, Zr, Nb, Mo, W, Co, Fe) metals with graphite in argon and in air using the EhFI-46A installation. It is shown that in electric spark alloying with graphite, certain specific conditions for alloying-layer formation arise, manifested in a decrease of the cathode mass during treatment. In this case an alloying layer consisting of carbides, oxides of the corresponding metals and material of the base is formed on the metal surface. The best carburization conditions in the process of electric spark alloying are realized for group 4 metals when treating them in the 'soft' regime with a specific alloying time of 1-3 min/cm 2, and for group 5 and 6 metals in the 'rigid' regime of treatment with a specific alloying time of 3-5 min/cm 2 [ru

  5. 2-Methylfuran: A bio-derived octane booster for spark-ignition engines

    KAUST Repository

    Sarathy, Mani

    2018-04-02

    The efficiency of spark-ignition engines is limited by the phenomenon of knock, which is caused by auto-ignition of the fuel-air mixture ahead of the spark-initiated flame front. The resistance of a fuel to knock is quantified by its octane index; therefore, increasing the octane index of a spark-ignition engine fuel increases the efficiency of the respective engine. However, raising the octane index of gasoline increases the refining costs, as well as the energy consumption during production. The use of alternative fuels with synergistic blending effects presents an attractive option for improving octane index. In this work, the octane enhancing potential of 2-methylfuran (2-MF), a next-generation biofuel, has been examined and compared to other high-octane components (i.e., ethanol and toluene). A primary reference fuel with an octane index of 60 (PRF60) was chosen as the base fuel since it closely represents refinery naphtha streams, which are used as gasoline blend stocks. Initial screening of the fuels was done in an ignition quality tester (IQT). The PRF60/2-MF (80/20 v/v%) blend exhibited longer ignition delay times compared to PRF60/ethanol (80/20 v/v%) blend and PRF60/toluene (80/20 v/v%) blend, even though pure 2-MF is more reactive than both ethanol and toluene. The mixtures were also tested in a cooperative fuels research (CFR) engine under research octane number and motor octane number like conditions. The PRF60/2-MF blend again possesses a higher octane index than other blending components. A detailed chemical kinetic analysis was performed to understand the synergetic blending effect of 2-MF, using a well-validated PRF/2-MF kinetic model. Kinetic analysis revealed superior suppression of low-temperature chemistry with the addition of 2-MF. The results from simulations were further confirmed by homogeneous charge compression ignition engine experiments, which established its superior low-temperature heat release (LTHR) suppression compared to ethanol

  6. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  7. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. High-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  8. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of West African ... and TD (consisted of spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...

  9. Effects of solution volume on hydrogen production by pulsed spark discharge in ethanol solution

    Energy Technology Data Exchange (ETDEWEB)

    Xin, Y. B.; Sun, B., E-mail: sunb88@dlmu.edu.cn; Zhu, X. M.; Yan, Z. Y.; Liu, H.; Liu, Y. J. [College of Environmental Science and Engineering, Dalian Maritime University, Dalian 116026 (China)

    2016-07-15

    Hydrogen production from ethanol solution (ethanol/water) by pulsed spark discharge was optimized by varying the volume of ethanol solution (liquid volume). The hydrogen yield initially increased and then decreased with increasing solution volume, reaching 1.5 l/min at a solution volume of 500 ml. The characteristics of the pulsed spark discharge were studied in this work; the results showed that the intensity of the peak current, the rate of current rise, and the energy efficiency of hydrogen production can be changed by varying the volume of ethanol solution. Meanwhile, a mechanistic analysis of hydrogen production was carried out by monitoring the process of hydrogen production and the state of free radicals. The analysis showed that decreasing the retention time of the product gas and properly increasing the volume of ethanol solution can enhance the hydrogen yield. Through this research, a high-yield and large-scale method of hydrogen production can be achieved, which is more suitable for industrial application.

  10. River water remediation using pulsed corona, pulsed spark or ozonation

    Energy Technology Data Exchange (ETDEWEB)

    Izdebski, T.; Dors, M. [Polish Academy of Sciences, Szewalski Inst. of Fluid Flow Machinery, Fiszera (Poland). Centre for Plasma and Laser Engineering; Mizeraczyk, J. [Polish Academy of Sciences, Szewalski Inst. of Fluid Flow Machinery, Fiszera (Poland). Centre for Plasma and Laser Engineering; Gdynia Maritime Univ., Morska (Poland). Dept. of Marine Electronics

    2010-07-01

    The most common reason for epidemic formation is the pollution of surface and drinking water by wastewater bacteria. The pathogenic microorganisms that form the largest part of this are fecal bacteria, such as Escherichia coli (E. coli). Wastewater treatment plants reduce the amount of fecal bacteria by 1-3 orders of magnitude, depending on the initial number of bacteria. There is a lack of data on waste and drinking water purification by the electrohydraulic discharge method, which causes the destruction and inactivation of viruses, yeast, and bacteria. This paper investigated the cleaning of river water from microorganisms using pulsed corona, spark discharge and ozonization. The paper discussed the experimental setup and results. It was concluded that ozonization is the most efficient method of water disinfection as compared with pulsed spark and pulsed corona discharges. The pulsed spark discharge in water was capable of killing all microorganisms similarly to ozonization, but with much lower energy efficiency. The pulsed corona discharge was found to be the least effective method of water disinfection. 21 refs., 4 figs.

  11. Material machining with pseudo-spark electron beams

    International Nuclear Information System (INIS)

    Benker, W.; Christiansen, J.; Frank, K.; Gundel, H.; Redel, T.; Stetter, M.

    1989-01-01

    The authors give a brief description of the production of pseudo-spark (low pressure gas discharge) electron beams. They illustrate the use of these electron beams for machining not only conducting, semiconducting and insulating materials, but also thin layers of such materials as high temperature superconducting ceramics

  12. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali

    2017-06-06

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
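
    The recursive formulation can be illustrated with a short NumPy sketch (KBLAS itself is written in CUDA/C; this is only a didactic stand-in, with an illustrative base-case threshold):

    ```python
    import numpy as np

    def trmm_lower(A, B, threshold=64):
        """In-place B := A @ B for lower-triangular A, via recursive blocking."""
        n = A.shape[0]
        if n <= threshold:
            B[:] = np.tril(A) @ B           # base case: plain triangular multiply
            return
        m = n // 2
        A11, A21, A22 = A[:m, :m], A[m:, :m], A[m:, m:]
        B1, B2 = B[:m], B[m:]               # views, so updates happen in place
        trmm_lower(A22, B2, threshold)      # B2 := A22 @ B2
        B2 += A21 @ B1                      # B2 += A21 @ B1 (large GEMM, runs on optimized BLAS)
        trmm_lower(A11, B1, threshold)      # B1 := A11 @ B1

    # Quick check against a reference product.
    n = 300
    A = np.tril(np.random.rand(n, n))
    B = np.random.rand(n, 8)
    ref = A @ B
    trmm_lower(A, B)
    assert np.allclose(B, ref)
    ```

    The point of the recursion is that most of the work ends up in large GEMM calls, which is where highly optimized BLAS (or custom CUDA kernels at the bottom of the recursion) deliver their peak performance.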

  13. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.

  14. PERI - auto-tuning memory-intensive kernels for multicore

    International Nuclear Information System (INIS)

    Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications
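
    The essence of the approach, empirically searching a parameter space and keeping the fastest variant, can be sketched as follows (a toy cache-blocked transpose stands in for the SpMV/Stencil/LBMHD kernels; names and sizes are illustrative):

    ```python
    import time
    import numpy as np

    def blocked_transpose(a, out, bs):
        """Cache-blocked transpose; the block size bs is the tunable parameter."""
        n = a.shape[0]
        for i in range(0, n, bs):
            for j in range(0, n, bs):
                out[j:j + bs, i:i + bs] = a[i:i + bs, j:j + bs].T

    def autotune(n=2048, candidates=(16, 32, 64, 128, 256)):
        """Empirical search: time each candidate and keep the fastest, as an
        auto-tuner's code generator would do across its parameter space."""
        a, out = np.random.rand(n, n), np.empty((n, n))
        timings = {}
        for bs in candidates:
            t0 = time.perf_counter()
            blocked_transpose(a, out, bs)
            timings[bs] = time.perf_counter() - t0
        return min(timings, key=timings.get), timings

    best_bs, timings = autotune()
    print("best block size:", best_bs)
    ```

    A real auto-tuner generates and benchmarks many code variants (unrolling, prefetching, SIMDization, thread blocking) per platform, but the search-and-select loop is the same idea.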

  15. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  16. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model called ARTMAP is a powerful tool for classification. Among several improvements, such as Fuzzy or Gaussian based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach for high-dimensional data and large numbers of samples requires high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA exhibits a better self-organizing capability than BA, and KBAM provides superior classification ability compared with BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Low-Resolution Tactile Image Recognition for Automated Robotic Assembly Using Kernel PCA-Based Feature Fusion and Multiple Kernel Learning-Based Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-01-01

    Full Text Available In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability, which is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. In this paper, we design a classifier by combining the multiple kernel learning (MKL) algorithm and the support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, carried out on real tactile images acquired by the designed tactile sensing array, show that the kernel PCA-based feature fusion can significantly improve the discriminating performance of the geometric features and Fourier descriptors. Also, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme is able to achieve a high recognition rate of over 85% for the classification of 12 commonly used metal parts in industrial applications.
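
    A simplified scikit-learn sketch of the pipeline described above (random placeholder data; a fixed-weight combination of two RBF kernels stands in for true MKL, which would learn the weights jointly):

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    # Placeholder features: rows = tactile images, columns = geometric features
    # plus Fourier descriptors already concatenated (illustrative random data).
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(120, 24)), rng.integers(0, 12, size=120)
    X_test = rng.normal(size=(30, 24))

    # Kernel-PCA-based feature fusion: project the raw features onto nonlinear
    # principal components before classification.
    kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1)
    Z_train = kpca.fit_transform(X_train)
    Z_test = kpca.transform(X_test)

    # Stand-in for MKL-SVM: combine two RBF kernels with fixed weights and feed the
    # precomputed Gram matrix to an SVM (true MKL would optimize the weights).
    w = 0.6
    K_train = w * rbf_kernel(Z_train, gamma=0.05) + (1 - w) * rbf_kernel(Z_train, gamma=0.5)
    K_test = (w * rbf_kernel(Z_test, Z_train, gamma=0.05)
              + (1 - w) * rbf_kernel(Z_test, Z_train, gamma=0.5))

    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    pred = clf.predict(K_test)
    ```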

  18. Analysis of cyclic variations during mode switching between spark ignition and controlled auto-ignition combustion operations

    OpenAIRE

    Chen, T; Zhao, H; Xie, H; He, B

    2014-01-01

    © IMechE 2014. Controlled auto-ignition, also known as homogeneous charge compression ignition, has been the subject of extensive research because of its ability to provide simultaneous reductions in fuel consumption and NOx emissions from a gasoline engine. However, due to its limited operation range, switching between controlled auto-ignition and spark ignition combustion is needed to cover the complete operating range of a gasoline engine for passenger car applications. Previous research...

  19. Design and construction of palm kernel cracking and separation ...

    African Journals Online (AJOL)

    Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...

  20. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
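
    A minimal sketch of one common flavour of variable-bandwidth KDE, the balloon estimator, in which each query point's Gaussian bandwidth is set from its k-th nearest-neighbour distance (illustrative only; not necessarily the estimator developed in this work):

    ```python
    import numpy as np

    def variable_kde(query, data, k=10):
        """Balloon-type variable-bandwidth KDE: the Gaussian bandwidth at each
        query point is its distance to the k-th nearest data point."""
        d = data.shape[1]
        densities = []
        for x in query:
            dist = np.linalg.norm(data - x, axis=1)
            h = np.sort(dist)[k]                   # adaptive, data-driven bandwidth
            # Mean over Gaussian kernels centered at the data points, evaluated at x.
            kernel_vals = np.exp(-0.5 * (dist / h) ** 2) / ((2 * np.pi) ** (d / 2) * h ** d)
            densities.append(kernel_vals.mean())
        return np.array(densities)

    data = np.random.randn(500, 3)                 # toy 3-D sample
    print(variable_kde(data[:5], data))
    ```

    In genuinely high-dimensional feature spaces the hard part is choosing the per-point bandwidths well, which is exactly the estimation problem the abstract addresses.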

  1. The optimization of some of the conditions for analysis by spark-source mass spectrometry

    International Nuclear Information System (INIS)

    Pearton, D.C.P.; Sobiecki, A.

    1980-01-01

    The need for improved precision in spark-source mass spectrometry is highlighted. Several parameters, such as photoplate-development technique, instrumental stability and focus, and sparking conditions, were optimized. Measurements made under these optimum conditions attained precisions of more than 12 per cent

  2. Heat Kernel Asymptotics of Zaremba Boundary Value Problem

    Energy Technology Data Exchange (ETDEWEB)

    Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu

    2004-03-15

    The Zaremba boundary-value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.
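
    For orientation, heat kernel asymptotics of this type take the generic short-time form (standard background, not the paper's specific coefficients)

    \[
    \operatorname{Tr}\, e^{-t\Delta} \;\sim\; (4\pi t)^{-d/2} \sum_{k \ge 0} A_{k/2}\, t^{k/2},
    \qquad t \to 0^{+},
    \]

    where d is the dimension of the manifold, the integer-order coefficients collect interior contributions and the half-integer ones come from the boundary; for Zaremba-type mixed conditions, additional contributions arise along the interface where the Dirichlet and Neumann regions meet, and it is the first nontrivial coefficients of this kind that the paper computes explicitly.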

  3. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  4. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  5. Exploration of Shorea robusta (Sal seeds, kernels and its oil

    Directory of Open Access Journals (Sweden)

    Shashi Kumar C.

    2016-12-01

    Full Text Available Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively, were studied. The results show that the moisture content of the kernel was the highest as compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere as compared to seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition such as moisture, protein, carbohydrates, oil, crude fiber, and ash content was also determined. The kernel (30.20%, w/w) contains a higher oil percentage as compared to seed with wing and seed without wing. The scientific data from this work are important for designing equipment and processes for post-harvest value addition of sal seeds.

  6. Ca2+ sparks act as potent regulators of excitation-contraction coupling in airway smooth muscle.

    Science.gov (United States)

    Zhuge, Ronghua; Bao, Rongfeng; Fogarty, Kevin E; Lifshitz, Lawrence M

    2010-01-15

    Ca2+ sparks are short lived and localized Ca2+ transients resulting from the opening of ryanodine receptors in sarcoplasmic reticulum. These events relax certain types of smooth muscle by activating big conductance Ca2+-activated K+ channels to produce spontaneous transient outward currents (STOCs) and the resultant closure of voltage-dependent Ca2+ channels. But in many smooth muscles from a variety of organs, Ca2+ sparks can additionally activate Ca2+-activated Cl(-) channels to generate spontaneous transient inward current (STICs). To date, the physiological roles of Ca2+ sparks in this latter group of smooth muscle remain elusive. Here, we show that in airway smooth muscle, Ca2+ sparks under physiological conditions, activating STOCs and STICs, induce biphasic membrane potential transients (BiMPTs), leading to membrane potential oscillations. Paradoxically, BiMPTs stabilize the membrane potential by clamping it within a negative range and prevent the generation of action potentials. Moreover, blocking either Ca2+ sparks or hyperpolarization components of BiMPTs activates voltage-dependent Ca2+ channels, resulting in an increase in global [Ca2+](i) and cell contraction. Therefore, Ca2+ sparks in smooth muscle presenting both STICs and STOCs act as a stabilizer of membrane potential, and altering the balance can profoundly alter the status of excitability and contractility. These results reveal a novel mechanism underlying the control of excitability and contractility in smooth muscle.

  7. Irradiation performance of coated fuel particles with fission product retaining kernel additives

    International Nuclear Information System (INIS)

    Foerthmann, R.

    1979-10-01

    The four irradiation experiments FRJ2-P17, FRJ2-P18, FRJ2-P19, and FRJ2-P20 for testing the efficiency of fission product-retaining kernel additives in coated fuel particles are described. The evaluation of the obtained experimental data led to the following results: - zirconia and alumina kernel additives are not suitable for effective fission product retention in oxide fuel kernels, - alumina-silica kernel additives reduce the in-pile release of Sr-90 and Ba-140 from BISO-coated particles at temperatures of about 1200 °C by two orders of magnitude, and the Cs release from kernels by one order of magnitude, - effective transport coefficients including all parameters which contribute to kernel release are given for (Th,U)O2 mixed oxide kernels and low enriched UO2 kernels containing 5 wt.% alumina-silica additives: log(D_K/cm^2 s^-1) = -36 028/T + 6.261 (Sr-90), log(D_K/cm^2 s^-1) = -29 646/T + 5.826 (Cs-134/137), - alumina-silica kernel additives are ineffective for retaining Ag-110m in coated particles; however, even an intact SiC interlayer was found not to be effective at temperatures above 1200 °C, - the penetration of the buffer layer by fission product-containing eutectic additive melt during irradiation can be avoided by using additives which consist of alumina and mullite without an excess of silica, - annealing of LASER-failed irradiated particles and the irradiation test FRJ2-P20 indicate that the efficiency of alumina-silica kernel additives is not altered if the coating becomes defective. (orig.) [de

  8. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Science.gov (United States)

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  9. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Science.gov (United States)

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.

  10. Large-Eddy Simulations of Motored Flow and Combustion in a Homogeneous-Charge Spark-Ignition Engine

    Science.gov (United States)

    Shekhawat, Yajuvendra Singh

    Cycle-to-cycle variations (CCV) of flow and combustion in internal combustion engines (ICE) limit their fuel efficiency and emissions potential. Large-eddy simulation (LES) is the most practical simulation tool to understand the nature of these CCV. In this research, multi-cycle LES of a two-valve, four-stroke, spark-ignition optical engine has been performed for motored and fired operations. The LES mesh quality is assessed using a length-scale resolution parameter and an energy resolution parameter. For the motored operation, two 50-consecutive-cycle LES with different turbulence models (Smagorinsky model and dynamic structure model) are compared with the experiment. The pressure comparison shows that the LES is able to capture the wave dynamics in the intake and exhaust ports. The LES velocity fields are compared with particle-image velocimetry (PIV) measurements at three cutting planes. Based on the structure and magnitude indices, the dynamic structure model is somewhat better than the Smagorinsky model as far as the ensemble-averaged velocity fields are concerned. The CCV in the velocity fields is assessed by proper-orthogonal decomposition (POD). The POD analysis shows that LES is able to capture the level of CCV seen in the experiment. For the fired operation, two 60-cycle LES with different combustion models (thickened flame model and coherent flame model) are compared with the experiment. The in-cylinder pressure and apparent heat release rate comparison shows higher CCV for LES compared to the experiment, with the thickened flame model showing higher CCV than the coherent flame model. The correlation analysis for the LES using the thickened flame model shows that the CCV in combustion/pressure is correlated with: the tumble at intake valve closing, the resolved and subfilter-scale kinetic energy just before spark time, and the second POD mode (shear flow near the spark gap) of the velocity fields just before spark time.

  11. Dose calculation methods in photon beam therapy using energy deposition kernels

    International Nuclear Information System (INIS)

    Ahnesjoe, A.

    1991-01-01

    The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad-beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel-based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions with a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods: one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel-based dose calculations. (au)
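
    The superposition idea can be sketched in a few lines (a 2-D toy with an artificial kernel; real implementations work in 3-D and scale the kernel with electron density, as described above):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Minimal 2-D sketch of kernel-superposition dose calculation (illustrative only).
    nx, nz = 64, 64
    terma = np.zeros((nz, nx))
    terma[:, 28:36] = np.exp(-0.05 * np.arange(nz))[:, None]   # crude exponential beam attenuation

    # Toy energy deposition kernel: isotropic, rapidly decaying with distance.
    z, x = np.mgrid[-15:16, -15:16].astype(float)
    r = np.hypot(z, x) + 0.5
    kernel = np.exp(-r) / r**2
    kernel /= kernel.sum()                                      # deposit all released energy

    dose = fftconvolve(terma, kernel, mode="same")              # superposition/convolution
    ```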

  12. Boundary singularity of Poisson and harmonic Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2015-01-01

    Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170

  13. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  14. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    Science.gov (United States)

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419
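
    In generic notation (not the paper's exact model), the multiple kernel learning step combines one base kernel per data source,

    \[
    K(x_i, x_j) \;=\; \sum_{m=1}^{M} \beta_m\, K_m(x_i, x_j), \qquad \beta_m \ge 0,
    \]

    with the nonnegative weights beta_m optimized for predictive performance, while the subject-specific short- and long-term latent effects enter through an additional designed kernel over the longitudinal measurement times.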

  15. Kernel structures for Clouds

    Science.gov (United States)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  16. Commutators of Integral Operators with Variable Kernels on Hardy ...

    Indian Academy of Sciences (India)

    Home; Journals; Proceedings – Mathematical Sciences; Volume 115; Issue 4. Commutators of Integral Operators with Variable Kernels on Hardy Spaces. Pu Zhang Kai Zhao. Volume 115 Issue 4 November 2005 pp 399-410 ... Keywords. Singular and fractional integrals; variable kernel; commutator; Hardy space.

  17. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  18. RHAGOLETIS COMPLETA (DIPTERA; TEPHRITIDAE) DISTRIBUTION, FLIGHT DYNAMICS AND INFLUENCE ON WALNUT KERNEL QUALITY IN THE CONTINENTAL CROATIA

    OpenAIRE

    Božena Barić; Ivana Pajač Živković; Dinka Matošević; Milorad Šubić; Erzsébet Voigt; Miklós Tóth

    2015-01-01

    Walnut husk fly (WHF), Rhagoletis completa Cresson 1929 is an invasive species spreading quickly and damaging walnuts in Croatia and neighbouring countries. We researched distribution of this pest in the continental part of Croatia, flight dynamics in Međimurje County and its influence on quality of walnut kernels. CSALOMON®PALz traps were used for monitoring the spread and flight dynamics of R. completa. Weight and the protein content of kernels and the presence of mycotoxin contamination we...

  19. Perspectives for practical application of the combined fuel kernels in VVER-type reactors

    International Nuclear Information System (INIS)

    Baranov, V.; Ternovykh, M.; Tikhomirov, G.; Khlunov, A.; Tenishev, A.; Kurina, I.

    2011-01-01

    The paper considers the main physical processes that take place in fuel kernels under real operation conditions of VVER-type reactors. Main attention is given to the effects induced by combinations of layers with different physical properties inside of fuel kernels on these physical processes. Basic neutron-physical characteristics were calculated for some combined fuel kernels in fuel rods of VVER-type reactors. There are many goals in development of the combined fuel kernels, and these goals define selecting the combinations and compositions of radial layers inside of the kernels. For example, the slower formation of the rim-layer on outer surface of the kernels made of enriched uranium dioxide can be achieved by introduction of inner layer made of natural or depleted uranium dioxide. Other potential goals (lower temperature in the kernel center, better conditions for burn-up of neutron poisons, better retention of toxic materials) could be reached by other combinations of fuel compositions in central and peripheral zones of the fuel kernels. Also, the paper presents the results obtained in experimental manufacturing of the combined fuel pellets. (authors)

  20. Replacement value of palm kernel meal for maize on growth, egg ...

    African Journals Online (AJOL)

    A research was conducted to evaluate the effect of replacing maize with palm kernel meal (PKM) in the diet on the performance of duck hens. Five treatment diets were formulated in which PKM replaced maize at 0, 25, 50, 75 and 100% using a completely randomized design in three replications. The study lasted 8 weeks ...

  1. Silicon nanoparticles produced by spark discharge

    International Nuclear Information System (INIS)

    Vons, Vincent A.; Smet, Louis C. P. M. de; Munao, David; Evirgen, Alper; Kelder, Erik M.; Schmidt-Ott, Andreas

    2011-01-01

    On the example of silicon, the production of nanoparticles using spark discharge is shown to be feasible for semiconductors. The discharge circuit is modelled as a damped oscillator circuit. This analysis reveals that the electrode resistance should be kept low enough to limit energy loss by Joule heating and to enable effective nanoparticle production. The use of doped electrodes results in a thousand-fold increase in the mass production rate as compared to intrinsic silicon. Pure and oxidised uniformly sized silicon nanoparticles with a primary particle diameter of 3–5 nm are produced. It is shown that the colour of the particles can be used as a good indicator of the oxidation state. If oxygen and water are banned from the spark generation system by (a) gas purification, (b) outgassing and (c) by initially using the particles produced as getters, unoxidised Si particles are obtained. They exhibit pyrophoric behaviour. This continuous nanoparticle preparation method can be combined with other processing techniques, including surface functionalization or the immediate impaction of freshly prepared nanoparticles onto a substrate for applications in the field of batteries, hydrogen storage or sensors.
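
    The record notes that the discharge circuit is modelled as a damped oscillator and that electrode resistance must stay low to limit Joule-heating losses. The sketch below is a generic series-RLC view of a capacitive spark discharge with made-up component values; it is not the authors' model or data, but it shows how the damping rate and the resistive energy split follow from the circuit parameters.

```python
import numpy as np

# Series RLC model of a capacitive spark discharge (underdamped case).
# All component values are illustrative, not taken from the paper.
C = 20e-9      # capacitance [F]
L = 1e-6       # loop inductance [H]
R = 0.5        # total series resistance [ohm] (electrodes + wiring + spark channel)
V0 = 2000.0    # charging voltage [V]

alpha = R / (2 * L)                 # damping rate
w0 = 1 / np.sqrt(L * C)             # undamped resonance frequency
wd = np.sqrt(w0**2 - alpha**2)      # damped oscillation frequency (valid while w0 > alpha)

t = np.linspace(0, 10e-6, 2000)
i = (V0 / (L * wd)) * np.exp(-alpha * t) * np.sin(wd * t)   # discharge current

# In a series loop every element carries the same current, so a resistance R_x
# dissipates the fraction R_x / R_total of the resistively lost energy.
def energy_fraction(R_x, R_total):
    return R_x / R_total

print(f"period ~ {2 * np.pi / wd * 1e6:.2f} us, peak current ~ {i.max():.0f} A")
print("fraction lost in a 0.2 ohm electrode resistance:", energy_fraction(0.2, R))
```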

  2. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    Science.gov (United States)

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  3. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  4. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
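
    For readers unfamiliar with the method named in this record, the snippet below fits a plain RBF kernel ridge regression on synthetic data with scikit-learn. The data, kernel choice and hyperparameters are arbitrary placeholders and do not reflect the forecasting setup of the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Illustrative data: many predictors related nonlinearly to the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))                                   # 30 predictors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# RBF kernel ridge regression; alpha (ridge penalty) and gamma are guesses,
# not the tuning strategy of the paper.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05).fit(X[:150], y[:150])
print("out-of-sample R^2:", model.score(X[150:], y[150:]))
```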

  5. Mode of inheritance and combining abilities for kernel row number, kernel number per row and grain yield in maize (Zea mays L.)

    NARCIS (Netherlands)

    Bocanski, J.; Sreckov, Z.; Nastasic, A.; Ivanovic, M.; Djalovic, I.; Vukosavljev, M.

    2010-01-01

    Bocanski J., Z. Sreckov, A. Nastasic, M. Ivanovic, I.Djalovic and M. Vukosavljev (2010): Mode of inheritance and combining abilities for kernel row number, kernel number per row and grain yield in maize (Zea mays L.) - Genetika, Vol 42, No. 1, 169- 176. Utilization of heterosis requires the study of

  6. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  7. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems

  8. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    Science.gov (United States)

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

    Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
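
    This record extends k-mer sequence kernels. As background only, here is a minimal k-spectrum kernel (the inner product of exact k-mer count vectors); the di-mismatch handling and the DNA-shape features that the paper adds are not included.

```python
from collections import Counter
from itertools import product

import numpy as np

def spectrum_features(seq, k=3, alphabet="ACGT"):
    """Count vector over all k-mers (the classical k-spectrum representation)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[m] for m in kmers], dtype=float)

def spectrum_kernel(s1, s2, k=3):
    """k-spectrum kernel value between two DNA sequences."""
    return float(spectrum_features(s1, k) @ spectrum_features(s2, k))

print(spectrum_kernel("ACGTACGTGG", "ACGTTTGGAC", k=3))
```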

  9. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  10. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
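
    A simplified sketch of the mixed-kernel idea: a weighted sum of a polynomial kernel (capturing global trends) and a Gaussian RBF kernel (capturing local behaviour) used in support vector regression through a precomputed kernel. The paper uses orthogonal-polynomial kernels and post-processes the SVR coefficients into Sobol indices; neither step is reproduced here, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVR

def mixed_kernel(X, Y, w=0.5, degree=3, gamma=0.2):
    """Weighted sum of a polynomial kernel (global) and an RBF kernel (local)."""
    return w * polynomial_kernel(X, Y, degree=degree) + (1 - w) * rbf_kernel(X, Y, gamma=gamma)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 4))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1]) + 0.05 * rng.normal(size=300)   # toy response

K_train = mixed_kernel(X[:200], X[:200])
K_test = mixed_kernel(X[200:], X[:200])
svr = SVR(kernel="precomputed", C=10.0, epsilon=0.01).fit(K_train, y[:200])
print("test R^2:", svr.score(K_test, y[200:]))
```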

  11. Feature Selection and Kernel Learning for Local Learning-Based Clustering.

    Science.gov (United States)

    Zeng, Hong; Cheung, Yiu-ming

    2011-08-01

    The performance of most clustering algorithms relies heavily on the representation of data in the input space or the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) (Wu and Schölkopf 2006) method, which can outperform the global learning-based ones when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight with each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively in the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparsity-promoting penalty. Hence, the weights of those irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on the benchmark data sets.

  12. Semisupervised kernel marginal Fisher analysis for face recognition.

    Science.gov (United States)

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  13. Aida-CMK multi-algorithm optimization kernel applied to analog IC sizing

    CERN Document Server

    Lourenço, Ricardo; Horta, Nuno

    2015-01-01

    This work addresses the research and development of an innovative optimization kernel applied to analog integrated circuit (IC) design. Particularly, this work describes the modifications inside the AIDA Framework, an electronic design automation framework fully developed at the Integrated Circuits Group-LX of the Instituto de Telecomunicações, Lisbon. It focuses on AIDA-CMK, which enhances AIDA-C, the circuit optimizer component of AIDA, with a new multi-objective multi-constraint optimization module that constructs a base for multiple algorithm implementations. The proposed solution implements three approaches to multi-objective multi-constraint optimization, namely an evolutionary approach with NSGAII, a swarm intelligence approach with MOPSO and a stochastic hill-climbing approach with MOSA. Moreover, the implemented structure allows easy hybridization between kernels, transforming the previous simple NSGAII optimization module into a more evolved and versatile module supporting multiple s...

  14. Evaluation of Biosynthesis, Accumulation and Antioxidant Activityof Vitamin E in Sweet Corn (Zea mays L. during Kernel Development

    Directory of Open Access Journals (Sweden)

    Lihua Xie

    2017-12-01

    Sweet corn kernels were used in this research to study the dynamics of vitamin E by evaluating the expression levels of genes involved in vitamin E synthesis, the accumulation of vitamin E, and the antioxidant activity during the different stages of kernel development. Results showed that expression levels of the ZmHPT and ZmTC genes increased, whereas that of the ZmTMT gene dramatically decreased during kernel development. The contents of all the types of vitamin E in sweet corn increased significantly during kernel development, and reached the highest level at 30 days after pollination (DAP). Amongst the eight isomers of vitamin E, the content of γ-tocotrienol was the highest and increased 14.9-fold, followed by α-tocopherol with a 22-fold increase, and the contents of the isomers γ-tocopherol, α-tocotrienol, δ-tocopherol, δ-tocotrienol, and β-tocopherol were also followed during kernel development. The antioxidant activity of sweet corn increased during kernel development, reaching 101.8 ± 22.3 μmol of α-tocopherol equivalent/100 g fresh weight (FW) at 30 DAP. There was a positive correlation between vitamin E contents and antioxidant activity in sweet corn during kernel development, and a negative correlation between the expression of the ZmTMT gene and vitamin E contents. These results revealed the relations amongst the content of vitamin E isomers and the gene expression, vitamin E accumulation, and antioxidant activity. The study can provide a harvesting strategy for vitamin E bio-fortification in sweet corn.

  15. Evaluation of Biosynthesis, Accumulation and Antioxidant Activityof Vitamin E in Sweet Corn (Zea mays L.) during Kernel Development.

    Science.gov (United States)

    Xie, Lihua; Yu, Yongtao; Mao, Jihua; Liu, Haiying; Hu, Jian Guang; Li, Tong; Guo, Xinbo; Liu, Rui Hai

    2017-12-20

    Sweet corn kernels were used in this research to study the dynamics of vitamin E by evaluating the expression levels of genes involved in vitamin E synthesis, the accumulation of vitamin E, and the antioxidant activity during the different stages of kernel development. Results showed that expression levels of the ZmHPT and ZmTC genes increased, whereas that of the ZmTMT gene dramatically decreased during kernel development. The contents of all the types of vitamin E in sweet corn increased significantly during kernel development, and reached the highest level at 30 days after pollination (DAP). Amongst the eight isomers of vitamin E, the content of γ-tocotrienol was the highest and increased 14.9-fold, followed by α-tocopherol with a 22-fold increase, and the contents of the isomers γ-tocopherol, α-tocotrienol, δ-tocopherol, δ-tocotrienol, and β-tocopherol were also followed during kernel development. The antioxidant activity of sweet corn increased during kernel development, reaching 101.8 ± 22.3 μmol of α-tocopherol equivalent/100 g fresh weight (FW) at 30 DAP. There was a positive correlation between vitamin E contents and antioxidant activity in sweet corn during kernel development, and a negative correlation between the expression of the ZmTMT gene and vitamin E contents. These results revealed the relations amongst the content of vitamin E isomers and the gene expression, vitamin E accumulation, and antioxidant activity. The study can provide a harvesting strategy for vitamin E bio-fortification in sweet corn.

  16. Fuzzy-based multi-kernel spherical support vector machine for ...

    Indian Academy of Sciences (India)

    In the proposed classifier, we design a new multi-kernel function based on the fuzzy triangular membership function. Finally, a newly developed multi-kernel function is incorporated into the spherical support vector machine to enhance the performance significantly. The experimental results are evaluated and performance is ...

  17. Apparatus for atmospheric pressure pin-to-hole spark discharge and uses thereof

    Science.gov (United States)

    Dobrynin, Danil V.; Fridman, Alexander; Cho, Young I.; Fridman, Gregory; Friedman, Gennady

    2016-12-06

    Disclosed herein are atmospheric pressure pin-to-hole pulsed spark discharge devices and methods for creating plasma. The devices include a conduit for fluidically communicating a gas, a plasma, or both, therethrough, a portion of the conduit capable of being connected to a gas supply, and a second portion of the conduit capable of emitting a plasma; a positive electrode comprising a sharp tip; and a ground plate electrode. The disclosed methods for treating a skin ulcer using non-thermal plasma include flowing a gas through a cold spark discharge zone simultaneously with the creation of a pulsed spark discharge to give rise to a non-thermal plasma emitted from a conduit, the non-thermal plasma comprising NO; and contacting a skin ulcer with said non-thermal plasma for sufficient time and intensity to give rise to treatment of the skin ulcer.

  18. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    Science.gov (United States)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of the kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages, such as the high time consumption of the traditional cross-validation method, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
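
    To show the mechanics of selecting kernel parameters with differential evolution, the sketch below tunes an RBF SVM (a stand-in for the KFKT classifier of the record) by minimising the negative cross-validated accuracy with SciPy's differential_evolution. The data, bounds and surrogate objective are assumptions, not the paper's discrimination-based setup.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def objective(params):
    """Negative cross-validated accuracy of an RBF SVM; DE minimises this."""
    log_gamma, log_C = params
    clf = SVC(kernel="rbf", gamma=10.0 ** log_gamma, C=10.0 ** log_C)
    return -cross_val_score(clf, X, y, cv=3).mean()

result = differential_evolution(objective, bounds=[(-4, 1), (-1, 3)], maxiter=10, seed=0)
print("best log10(gamma), log10(C):", result.x, "CV accuracy:", -result.fun)
```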

  19. THE EFFECT OF COMPRESSION RATIO VARIATIONS ON THE ENGINE PERFORMANCE PARAMETRES IN SPARK IGNITION ENGINES

    Directory of Open Access Journals (Sweden)

    Yakup SEKMEN

    2005-01-01

    The performance of spark ignition engines may be increased by varying the geometrical compression ratio according to the amount of charge in the cylinders. The designed geometrical compression ratio is realized as an effective compression ratio only under full-load, wide-open-throttle conditions, since the effective compression ratio changes with the amount of charge admitted to the cylinder in spark ignition engines. This forces designers to vary the geometrical compression ratio with the amount of charge in the cylinder in order to improve performance and fuel economy. To improve combustion efficiency, fuel economy, power output and exhaust emissions at partial loads, the compression ratio must be increased; under high-load, low-speed conditions, however, the compression ratio must be gradually decreased to prevent probable knock and harsh running. In this paper, the relation of performance parameters such as power, torque, specific fuel consumption, cylinder pressure, exhaust gas temperature, combustion chamber surface area/volume ratio, thermal efficiency and spark timing to the compression ratio in spark ignition engines is investigated, and the use of variable compression ratio engines is suggested for better fuel economy and a cleaner environment.

  20. Modelling microwave heating of discrete samples of oil palm kernels

    International Nuclear Information System (INIS)

    Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.

    2016-01-01

    Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effect, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPKs sample. • Loosely-packed arrangement improves temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be environmentally friendly compared to the existing oil palm milling process, as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPK) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not the same over the entire surface due to constructive and destructive interference of MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that point measurement of a temperature reading is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was identified as being affected by factors such as local electric field intensity due to MW absorption, refraction, interference, the contact effect between kernels and also heat transfer mechanisms. The thermal gradient patterns of OPKs change as the heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process compared to their MW heating. The model is useful

  1. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent characteristic of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  2. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent characteristic of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  3. Knock characteristics analysis of a supercharged spark ignition

    African Journals Online (AJOL)

    The power output of a spark ignition engine could be improved by boosting the ... that the presence of aromatics was responsible for the better anti-knock ..... System, a Master's Thesis in the Institutionen för ... Maintenance and Reliability, Vol.

  4. Bioconversions of Palm Kernel Cake and Rice Bran Mixtures by Trichoderma viride Toward Nutritional Contents

    OpenAIRE

    Yana Sukaryana; Umi Atmomarsono; Vitus D. Yunianto; Ejeng Supriyatna

    2010-01-01

    The objective of the research is to examine mixtures of palm kernel cake and rice bran fermented by Trichoderma viride. A completely randomized design in a 4 x 4 factorial pattern was used in this experiment. Factor I was the inoculum dose: D1 = 0%, D2 = 0.1%, D3 = 0.2%, D4 = 0.3%; factor II was the mixture of palm kernel cake and rice bran: T1 = 20:80%; T2 = 40:60%; T3 = 60:40%; T4 = 80:20%. Each treatment had three replicates. Fermentation was conduc...

  5. The influence of maize kernel moisture on the sterilizing effect of gamma rays

    International Nuclear Information System (INIS)

    Khanymova, T.; Poloni, E.

    1980-01-01

    The influence of 4 levels of maize kernel moisture (16, 20, 25 and 30%) on the gamma-ray sterilizing effect was studied, and the after-effect of radiation on the microorganisms during short-term storage was followed up. Maize kernels of the hybrid Knezha-36 produced in 1975 were used. Gamma-ray treatment of the kernels was carried out with a GUBEh-4000 irradiator at doses of 0.2 and 0.3 Mrad, after which they were stored for a month at 12 deg and 25 deg C under controlled moisture conditions. Surface and subepidermal infection of the kernels was determined immediately post irradiation and at the end of the experiment. Non-irradiated kernels were used as controls. Results indicated that the initial kernel moisture has a considerable influence on the sterilizing effect of gamma-rays at the rates used in the experiment and affects to a considerable extent the post-irradiation recovery of organisms. The speed of recovery was highest in the treatment with 30% moisture and lowest in the treatment with 16% kernel moisture. Irradiation of the kernels causes pronounced changes in the surface and subepidermal infection. This was due to the unequal radioresistance of the microbial components and to the modifying effect of the moisture holding capacity. The useful effect of maize kernel irradiation was more prolonged at 12 deg C than at 25 deg C.

  6. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  7. Resolvent kernel for the Kohn Laplacian on Heisenberg groups

    Directory of Open Access Journals (Sweden)

    Neur Eddine Askour

    2002-07-01

    We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.

  8. Ambient fields generated by a laser spark

    Czech Academy of Sciences Publication Activity Database

    Rohlena, Karel; Mašek, Martin

    2016-01-01

    Roč. 61, č. 2 (2016), s. 119-124 ISSN 0029-5922 R&D Projects: GA MŠk(CZ) LD14089; GA MŠk(CZ) LG13031 Institutional support: RVO:68378271 Keywords : laser spark * radiation chemistry * field generation Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 0.760, year: 2016

  9. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    Science.gov (United States)

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increase of KW and KT and reduction of KL/KT and KW/KT ratios always resulted in significant higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs definitely help realize the breeding goals.

  10. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. It is becoming more difficult for a software kernel implementation on a single processor to satisfy those constraints. This paper presents a multi-core architecture incorporating a hardware kernel on FPGAs, intended for high-performance applications in the control engineering domain. First, the hardware kernel is investigated on the basis of a component-based real-time kernel HARTEX (Hard Real-Time Executive for Control Systems...

  11. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics for Scientific Data and Analysis

    Data.gov (United States)

    National Aeronautics and Space Administration — We will construct SciSpark, a scalable system for interactive model evaluation and for the rapid development of climate metrics and analyses. SciSpark directly...

  12. Impulse tests on distribution transformers protected by means of spark gaps

    Energy Technology Data Exchange (ETDEWEB)

    Pykaelae, M L; Palva, V [Helsinki Univ. of Technology, Otaniemi (Finland). High Voltage Institute; Niskanen, K [ABB Corporate Research, Vaasa (Finland)

    1998-12-31

    Distribution transformers in rural networks have to cope with transient overvoltages, even with those caused by the direct lightning strokes to the lines. In Finland the 24 kV network conditions, such as wooden pole lines, high soil resistivity and isolated neutral network, lead into fast transient overvoltages. Impulse testing of pole-mounted distribution transformers ({<=} 200 kVA) protected by means of spark gaps were studied. Different failure detection methods were used. Results can be used as background information for standardization work dealing with distribution transformers protected by means of spark gaps. (orig.) 9 refs.

  13. Impulse tests on distribution transformers protected by means of spark gaps

    Energy Technology Data Exchange (ETDEWEB)

    Pykaelae, M.L.; Palva, V. [Helsinki Univ. of Technology, Otaniemi (Finland). High Voltage Institute; Niskanen, K. [ABB Corporate Research, Vaasa (Finland)

    1997-12-31

    Distribution transformers in rural networks have to cope with transient overvoltages, even with those caused by the direct lightning strokes to the lines. In Finland the 24 kV network conditions, such as wooden pole lines, high soil resistivity and isolated neutral network, lead into fast transient overvoltages. Impulse testing of pole-mounted distribution transformers ({<=} 200 kVA) protected by means of spark gaps were studied. Different failure detection methods were used. Results can be used as background information for standardization work dealing with distribution transformers protected by means of spark gaps. (orig.) 9 refs.

  14. Graphene-induced strengthening in spark plasma sintered tantalum carbide–nanotube composite

    International Nuclear Information System (INIS)

    Lahiri, Debrupa; Khaleghi, Evan; Bakshi, Srinivasa Rao; Li, Wei; Olevsky, Eugene A.; Agarwal, Arvind

    2013-01-01

    Transverse rupture strength of spark plasma sintered tantalum carbide (TaC) composites reinforced with long and short carbon nanotubes (CNTs) is reported. The rupture strength depends on the transformation behavior of the CNTs during spark plasma sintering, which is dependent on their length. The TaC composite with short nanotubes shows the highest specific rupture strength. Shorter CNTs transform into multi-layered graphene sheets between TaC grains, whereas long ones retain the tubular structure. Two-dimensional graphene platelets offer higher resistance to pull-out, resulting in delayed fracture and higher strength.

  15. Vacuum spark breakdown model based on exploding metal wire phenomena

    International Nuclear Information System (INIS)

    Haaland, J.

    1984-06-01

    Spark source mass spectra (SSMS) indicate that ions are extracted from an expanding and decaying plasma. The intensity distribution shows no dependence on the vaporization properties of individual elements, which indicates explosive vapour formation. This further seems to be a requirement for bridging a vacuum gap. A model including plasma ejection from a superheated anode spot by a process similar to that of an exploding metal wire is proposed. The appearance of hot plasma points in low-inductance vacuum sparks can then be explained as exploding microparticles ejected from a final central anode spot. The phenomenological model is compared with available experimental results from the literature, but no extensive quantification is attempted.

  16. Glucose sensing on graphite screen-printed electrode modified by sparking of copper nickel alloys.

    Science.gov (United States)

    Riman, Daniel; Spyrou, Konstantinos; Karantzalis, Alexandros E; Hrbac, Jan; Prodromidis, Mamas I

    2017-04-01

    Electric spark discharge was employed as a green, fast and extremely facile method to modify disposable graphite screen-printed electrodes (SPEs) with copper, nickel and mixed copper/nickel nanoparticles (NPs) in order to be used as nonenzymatic glucose sensors. Direct SPEs-to-metal (copper, nickel or copper/nickel alloys with 25/75, 50/50 and 75/25 wt% compositions) sparking at 1.2 kV was conducted in the absence of any solutions under ambient conditions. Morphological characterization of the sparked surfaces was performed by scanning electron microscopy, while the chemical composition of the sparked NPs was evaluated with energy dispersive X-ray spectroscopy and X-ray photoelectron spectroscopy. The performance of the various sparked SPEs towards the electro-oxidation of glucose in alkaline media and the critical role of hydroxyl ions were evaluated with cyclic voltammetry and kinetic studies. Results indicated a mixed charge transfer- and hydroxyl ion transport-limited process. Best performing sensors fabricated by the Cu/Ni 50/50 wt% alloy showed a linear response over the concentration range 2-400 μM glucose and they were successfully applied to the amperometric determination of glucose in blood. The detection limit (S/N 3) and the relative standard deviation of the method were 0.6 µM and green methods in sensor's development. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    Science.gov (United States)

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentration were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also affected (P embryo chemical compounds. Negative correlations among embryo oil concentration and those of starch (r = 0.98, P embryos with low oil concentration had an increased (P embryo/kernel ratio and allocation of embryo chemicals seems to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  18. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    Directory of Open Access Journals (Sweden)

    Ujjwal Maulik

    Remote homology detection among proteins utilizing only unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks caused by inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with the corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. Similar performance improvement is also found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request: sarkar@labri.fr.

  19. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  20. The global kernel k-means algorithm for clustering in feature space.

    Science.gov (United States)

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
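
    For context, the following is a basic single-run kernel k-means pass that works directly on a precomputed Gram matrix, using the standard feature-space distance K_ii - (2/|C|) sum_j K_ij + (1/|C|^2) sum_jl K_jl. The global kernel k-means algorithm of this record wraps many such runs, adding one cluster at a time from systematic initializations; that outer search is not implemented here.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(K, n_clusters, n_iter=50, rng=None):
    """Plain kernel k-means on a precomputed Gram matrix K (single random init)."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    labels = rng.integers(n_clusters, size=n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.empty((n, n_clusters))
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:                      # re-seed an empty cluster
                idx = rng.integers(n, size=1)
            # squared distance in feature space to the centroid of cluster c
            dist[:, c] = diag - 2 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# toy usage: two Gaussian blobs, RBF Gram matrix
X = np.vstack([np.random.randn(30, 2) + [3, 3], np.random.randn(30, 2) - [3, 3]])
print(kernel_kmeans(rbf_kernel(X, gamma=0.5), n_clusters=2, rng=0)[:10])
```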

  1. Bioconversions of Palm Kernel Cake and Rice Bran Mixtures by Trichoderma viride Toward Nutritional Contents

    Directory of Open Access Journals (Sweden)

    Yana Sukaryana

    2010-12-01

    The objective of the research is to examine mixtures of palm kernel cake and rice bran fermented by Trichoderma viride. A completely randomized design in a 4 x 4 factorial pattern was used in this experiment. Factor I was the inoculum dose: D1 = 0%, D2 = 0.1%, D3 = 0.2%, D4 = 0.3%; factor II was the mixture of palm kernel cake and rice bran: T1 = 20:80%; T2 = 40:60%; T3 = 60:40%; T4 = 80:20%. Each treatment had three replicates. Fermentation was conducted at a temperature of 28 °C for 9 days. The best mixture was determined on the basis of the increase in crude protein and the decrease in crude fibre. The results showed that the best fermentation product was obtained with an inoculum dose of 0.3% on the 80%:20% mixture of palm kernel cake and rice bran, which produced dry matter of 88.12%, crude protein 17.34%, ether extract 5.35%, crude fibre 23.67%, and ash 6.43%. Compared with the unfermented 80%:20% mixture of palm kernel cake and rice bran, crude protein increased by 29.58% and crude fibre decreased by 22.53%.

  2. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru

  3. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    Science.gov (United States)

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel-fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate particle size distribution and digestibility of kernels cut in varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut in 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and pan, as well as ruminal in situ dry matter (DM) digestibilities were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points with maximum DM disappearance of 6.9% at 24 h and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in 2 and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the
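
    As an illustration of how a geometric mean particle size can be computed from dry-sieving data of the kind described here, the sketch below applies a log-based geometric-mean formula over the retained masses. The nominal pan opening, the handling of the top sieve and the example masses are assumptions, not values from the study, and the exact standard the authors followed may differ.

```python
import numpy as np

def geometric_mean_particle_size(openings_mm, mass_g):
    """Geometric mean particle size (mm) from dry-sieving data.

    openings_mm: nominal sieve openings, largest to smallest, with the pan
                 represented by an assumed small nominal opening.
    mass_g:      mass retained on each sieve / the pan.
    Uses d_gw = 10**(sum(W_i * log10(d_i)) / sum(W_i)), where d_i is the
    geometric mean of each opening and the next larger one (the top sieve
    simply uses its own opening - a simplification).
    """
    openings = np.asarray(openings_mm, dtype=float)
    mass = np.asarray(mass_g, dtype=float)
    upper = np.r_[openings[0], openings[:-1]]      # next larger opening
    d_mid = np.sqrt(openings * upper)
    return 10 ** (np.sum(mass * np.log10(d_mid)) / mass.sum())

# sieve stack from the record (9.50 ... 0.59 mm) plus an assumed 0.30 mm pan value
openings = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59, 0.30]
mass = [0.0, 1.2, 5.4, 9.8, 12.3, 8.1, 4.0, 2.2, 0.8]   # example masses, not study data
print(f"GMPS ~ {geometric_mean_particle_size(openings, mass):.2f} mm")
```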

  4. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated inner-class and nearly independent between-classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature is needless to be known, while the inner product of the discriminative feature with kernel matrix embedded is available, and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  5. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  6. Migration of ThO2 kernels under the influence of a temperature gradient

    International Nuclear Information System (INIS)

    Smith, C.L.

    1976-11-01

    BISO coated ThO 2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during HTGR operation. Thorium dioxide kernel migration has been studied as a function of temperature (1300 to 1700 0 C) and ThO 2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile, postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO 2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO 2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid state diffusion within irradiated ThO 2 kernels. The migration is characterized by a period of no migration (incubation period) followed by migration at the equilibrium rate for ThO 2 . The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO 2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions

  7. Nutrition quality of extraction mannan residue from palm kernel cake on brolier chicken

    Science.gov (United States)

    Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.

    2018-02-01

    This study aims to determine the nutritional quality of the palm kernel cake residue from mannan extraction for broiler chickens by evaluating physical quality (specific gravity, bulk density and compacted bulk density), chemical quality (proximate analysis and Van Soest test) and a biological test (metabolizable energy). The treatments were T0: palm kernel cake extracted with aquadest (control), T1: palm kernel cake extracted with 1% acetic acid (CH3COOH), T2: palm kernel cake extracted with aquadest + mannanase enzyme 100 u/l, and T3: palm kernel cake extracted with 1% acetic acid (CH3COOH) + mannanase enzyme 100 u/l. The results showed that mannan extraction had a significant effect (P<0.05) in improving the physical quality and numerically increased the crude protein value and decreased the NDF (Neutral Detergent Fiber) value. Treatments had a highly significant influence (P<0.01) on the metabolizable energy value of the palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + mannanase enzyme 100 u/l yields the best nutrient quality of palm kernel cake residue for broiler chickens.

  8. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. ...
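
    As a minimal sketch of the general idea (not the authors' implementation), the example below trains an RBF-kernel SVM on toy data and approximates the sensitivity map as the average squared numerical gradient of the decision function with respect to each input feature; the dataset and all parameters are stand-ins.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Toy stand-in for "voxels x trials" data; real fMRI inputs are assumed elsewhere.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

def sensitivity_map(model, X, eps=1e-4):
    """Mean squared finite-difference gradient of the decision function per feature."""
    grads = np.zeros_like(X)
    for j in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grads[:, j] = (model.decision_function(Xp) - model.decision_function(Xm)) / (2 * eps)
    return (grads ** 2).mean(axis=0)

smap = sensitivity_map(clf, X)
print("most influential features:", np.argsort(smap)[::-1][:5])
```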

  9. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted ...
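
    For contrast with the kernel version proposed in the record, a compact sketch of the classical linear MNF transform is given below, with the noise covariance estimated by first differences and the projection obtained from a generalized eigenproblem; the data matrix is a random stand-in.

```python
import numpy as np
from scipy.linalg import eigh

def mnf(X):
    """Linear MNF: X is (n_samples, n_bands); noise estimated by first differences.

    Solves the generalized eigenproblem  Sigma_noise w = lam * Sigma_signal w  and
    returns components ordered from least to most noisy. (The record's KMNF instead
    estimates the noise in a reproducing kernel Hilbert space; this is only the
    classical linear baseline.)
    """
    Xc = X - X.mean(axis=0)
    N = np.diff(Xc, axis=0)                # crude noise estimate via differencing
    Sigma = np.cov(Xc, rowvar=False)
    Sigma_n = np.cov(N, rowvar=False)
    lam, W = eigh(Sigma_n, Sigma)          # generalized symmetric eigenproblem
    return Xc @ W, lam                     # projected components and noise fractions

X = np.random.rand(500, 8)                 # toy hyperspectral-like matrix
components, noise_fraction = mnf(X)
```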

  10. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting

    Directory of Open Access Journals (Sweden)

    Rosanna Zivoli

    2016-01-01

    The efficacy of color sorting on reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels, since the removal of discolored kernels (2.6%–19.9% of total peeled kernels) eliminated 97.3%–99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is therefore a feasible strategy to remove 97%–99% of the aflatoxins accumulated in naturally-contaminated samples. The electronic optical sorter gave highly variable results, since the amount of AFB1 + AFB2 measured in the rejected fractions (15%–18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01–0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples and ranged from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins.
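
    The mass-balance bookkeeping behind such sorting trials is straightforward; the sketch below, using made-up fraction masses and aflatoxin concentrations, shows how the share of total aflatoxins removed with a rejected fraction is computed.

```python
# Hypothetical sorting outcome: (mass in kg, AFB1+AFB2 in ug/kg) per collected fraction.
fractions = {
    "accepted kernels": (9.0, 2.5),
    "rejected (discolored) kernels": (1.0, 1800.0),
}

total_mass = sum(m for m, _ in fractions.values())
total_toxin = sum(m * c for m, c in fractions.values())      # ug of aflatoxin overall

rejected_mass, rejected_conc = fractions["rejected (discolored) kernels"]
removed_share = rejected_mass * rejected_conc / total_toxin

print(f"kernels removed:    {rejected_mass / total_mass:.1%}")
print(f"aflatoxins removed: {removed_share:.1%}")
```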

  11. Effects of de-oiled palm kernel cake based fertilizers on sole maize ...

    African Journals Online (AJOL)

    A study was conducted to determine the effect of de-oiled palm kernel cake based fertilizer formulations on the yield of sole maize and cassava crops. Two de-oiled palm kernel cake based fertilizer formulations A and B were compounded from different proportions of de-oiled palm kernel cake, urea, muriate of potash and ...

  12. Evaluating the Impact of Data Placement to Spark and SciDB with an Earth Science Use Case

    Science.gov (United States)

    Doan, Khoa; Oloso, Amidu; Kuo, Kwo-Sen; Clune, Thomas; Yu, Hongfeng; Nelson, Brian; Zhang, Jian

    2016-01-01

    We investigate the impact of data placement for two Big Data technologies, Spark and SciDB, with a use case from Earth Science where data arrays are multidimensional. Simultaneously, this investigation provides an opportunity to evaluate the performance of the technologies involved. Two datastores, HDFS and Cassandra, are used with Spark for our comparison. It is found that Spark with Cassandra performs better than with HDFS, but SciDB performs better still than Spark with either datastore. The investigation also underscores the value of having data aligned in advance for the most common analysis scenarios on a shared-nothing architecture. Otherwise, repartitioning needs to be carried out on the fly, degrading overall performance.
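
    For readers unfamiliar with the two Spark data placements being compared, the sketch below shows how the same chunked array data might be read from HDFS (as Parquet) and from Cassandra, assuming the DataStax spark-cassandra-connector is on the classpath; the host, path, keyspace, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Assumes the spark-cassandra-connector package is available, e.g. started with
#   spark-submit --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.0 ...
spark = (SparkSession.builder
         .appName("data-placement-demo")
         .config("spark.cassandra.connection.host", "cassandra-host")  # hypothetical host
         .getOrCreate())

# Placement 1: array chunks stored as Parquet files on HDFS (hypothetical path).
hdfs_df = spark.read.parquet("hdfs:///data/precip_chunks/")

# Placement 2: the same chunks stored in a Cassandra table (hypothetical keyspace/table).
cass_df = (spark.read
           .format("org.apache.spark.sql.cassandra")
           .options(keyspace="earthscience", table="precip_chunks")
           .load())

# A common analysis step, e.g. a per-cell mean, can then be timed on both placements.
hdfs_df.groupBy("cell_id").avg("value").count()
cass_df.groupBy("cell_id").avg("value").count()
```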

  13. Towards constrained optimal control of spark-ignition engines

    NARCIS (Netherlands)

    Feru, E.; Luo, X.

    2015-01-01

    In this paper, the torque control problem for spark-ignition engines is considered. The objective is to provide good output torque tracking with minimum fuel consumption, while avoiding engine knock and misfire. To this end, three control strategies are proposed: a feed-forward controller with ...

  14. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Science.gov (United States)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow a robust modelling even in case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and use it to predict the time to clinical onset for subjects carrying the genetic mutation.
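
    The sketch below is not the paper's CKL variant (whose energy function also folds in an explained-variance score); it only illustrates the basic step of fitting a GP for each candidate kernel composition and scoring it by BIC with scikit-learn, on toy progression data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, WhiteKernel, DotProduct

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 60))[:, None]            # toy "time since baseline"
y = np.sin(t).ravel() + 0.1 * rng.standard_normal(60)   # toy progression score

candidates = {
    "RBF": RBF() + WhiteKernel(),
    "RBF + linear": RBF() + DotProduct() + WhiteKernel(),
    "RationalQuadratic": RationalQuadratic() + WhiteKernel(),
}

def bic(gpr, n):
    k = gpr.kernel_.theta.size                           # number of fitted hyperparameters
    return k * np.log(n) - 2.0 * gpr.log_marginal_likelihood_value_

for name, kernel in candidates.items():
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    print(f"{name:>18s}: BIC = {bic(gpr, len(t)):.1f}")
```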

  15. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    Science.gov (United States)

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
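
    As a concrete anchor for the closed-form machinery the review builds on, the sketch below solves Kronecker kernel ridge regression for a fully observed label matrix via the eigendecompositions of the two kernel matrices; this is the standard derivation, not code from the paper, and the data are random stand-ins.

```python
import numpy as np

def kronecker_krr(K_rows, K_cols, Y, lam=1.0):
    """Closed-form Kronecker kernel ridge regression.

    Solves  K_rows C K_cols + lam * C = Y  for the coefficient matrix C, i.e. the
    vectorized system  (K_cols kron K_rows + lam I) vec(C) = vec(Y), using the
    eigendecompositions of the two symmetric PSD kernel matrices.
    """
    s, U = np.linalg.eigh(K_rows)                # K_rows = U diag(s) U^T
    t, V = np.linalg.eigh(K_cols)                # K_cols = V diag(t) V^T
    Y_tilde = U.T @ Y @ V
    C_tilde = Y_tilde / (np.outer(s, t) + lam)   # elementwise shrinkage
    return U @ C_tilde @ V.T

# Toy pairwise-learning setup: kernels over 30 "drugs" and 20 "targets".
rng = np.random.default_rng(0)
A, B = rng.standard_normal((30, 5)), rng.standard_normal((20, 5))
K_rows, K_cols = A @ A.T, B @ B.T
Y = rng.standard_normal((30, 20))

C = kronecker_krr(K_rows, K_cols, Y, lam=0.5)
Y_hat = K_rows @ C @ K_cols                      # predictions for all pairs
```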

  16. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
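
    A stripped-down version of a kernel ELM with a weighted composite kernel is sketched below; the kernel weights and parameters are fixed by hand here, whereas the record optimizes them (together with the regularization parameter) by QPSO, so this only illustrates the closed-form KELM step on toy data.

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Z, degree=2, c=1.0):
    return (X @ Z.T + c) ** degree

def wmk_elm_fit(X, T, weights=(0.7, 0.3), C=10.0):
    """Kernel ELM with a composite kernel K = w1*RBF + w2*poly.

    Output weights follow the standard KELM closed form  beta = (I/C + K)^-1 T.
    """
    K = weights[0] * rbf(X, X) + weights[1] * poly(X, X)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, T)
    return beta, weights

def wmk_elm_predict(Xtest, Xtrain, beta, weights):
    Ktest = weights[0] * rbf(Xtest, Xtrain) + weights[1] * poly(Xtest, Xtrain)
    return Ktest @ beta                          # class = argmax over one-hot outputs

# Toy e-nose-like data: 100 samples, 8 sensor features, 3 gas classes (one-hot T).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
labels = rng.integers(0, 3, 100)
T = np.eye(3)[labels]
beta, w = wmk_elm_fit(X, T)
pred = wmk_elm_predict(X, X, beta, w).argmax(axis=1)
```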

  17. Minimum Information Loss Based Multi-kernel Learning for Flagellar Protein Recognition in Trypanosoma Brucei

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-12-01

    Trypanosoma brucei (T. brucei) is an important pathogenic agent of African trypanosomiasis. The flagellum is an essential and multifunctional organelle of T. brucei, thus it is very important to recognize the flagellar proteins among T. brucei proteins for the purposes of both biological research and drug design. In this paper, we investigate computationally recognizing flagellar proteins in T. brucei by pattern recognition methods. It is argued that an optimal decision function for flagellar protein recognition can be obtained as the difference of the probability functions of flagellar and non-flagellar proteins. We propose to learn a multi-kernel classification function to approximate this optimal decision function, by minimizing the information loss of such an approximation, measured by the Kullback-Leibler (KL) divergence. An iterative multi-kernel classifier learning algorithm is developed to minimize the KL divergence for the problem of T. brucei flagellar protein recognition; experiments show its advantage over other T. brucei flagellar protein recognition and multi-kernel learning methods. © 2014 IEEE.

  18. Fluidization calculation on nuclear fuel kernel coating

    International Nuclear Information System (INIS)

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

    The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone, with a cylinder on top of it; the diameter of the fluidization cylinder was 2 cm and that of the upper part was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities were calculated for various kernel diameters, and the porosity and bed height were calculated for various gas stream velocities. The calculations were performed with a BASIC program
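
    Not the record's own calculation, but as an illustration of the standard starting point for such coater design estimates, the sketch below computes a minimum fluidization velocity from the Wen and Yu correlation; the particle and gas property values are placeholders.

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu, g=9.81):
    """Minimum fluidization velocity from the Wen & Yu (1966) correlation.

    Ar    = rho_g * (rho_p - rho_g) * g * d_p**3 / mu**2
    Re_mf = sqrt(33.7**2 + 0.0408 * Ar) - 33.7
    u_mf  = Re_mf * mu / (rho_g * d_p)
    """
    Ar = rho_g * (rho_p - rho_g) * g * d_p ** 3 / mu ** 2
    Re_mf = math.sqrt(33.7 ** 2 + 0.0408 * Ar) - 33.7
    return Re_mf * mu / (rho_g * d_p)

# Placeholder values: ~500-um oxide kernels fluidized by a hot argon stream.
u = u_mf_wen_yu(d_p=500e-6, rho_p=10_000.0, rho_g=0.31, mu=7.0e-5)
print(f"minimum fluidization velocity ~ {u:.2f} m/s")
```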

  19. Metabolite identification through multiple kernel learning on fragmentation trees.

    Science.gov (United States)

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-06-15

    Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.
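
    The record combines fragmentation-tree kernels with recently proposed multiple kernel learning schemes; the sketch below shows only the simple centered-kernel-alignment (ALIGN) weighting over precomputed kernels, followed by kernel ridge prediction of a single fingerprint bit, with random matrices standing in for the tree kernels.

```python
import numpy as np

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def align_weights(kernels, y):
    """ALIGN heuristic: weight each centered kernel by its alignment with y*y^T."""
    Ky = np.outer(y, y)
    w = np.array([np.sum(center(K) * Ky) / np.linalg.norm(center(K)) for K in kernels])
    w = np.clip(w, 0, None)
    return w / w.sum()

rng = np.random.default_rng(0)
n = 80
# Random stand-ins for kernels computed from fragmentation trees of n spectra.
kernels = [(lambda B: B @ B.T)(rng.standard_normal((n, 10))) for _ in range(4)]
y = np.sign(rng.standard_normal(n))               # one fingerprint bit, in {-1, +1}

w = align_weights(kernels, y)
K = sum(wi * Ki for wi, Ki in zip(w, kernels))
alpha = np.linalg.solve(K + 1.0 * np.eye(n), y)   # kernel ridge on the combined kernel
scores = K @ alpha                                 # in-sample decision values
```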

  20. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on the forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. A logarithmic transformation is used to reduce the skewness of the wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that there are still some serious problems of density leakage after the transformation. In order to solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on the data from an actual wind farm. Then, a detailed comparison is carried out between the proposed method and some existing probabilistic forecasting methods
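
    The paper's boundary kernel is not reproduced here; the sketch below only illustrates the logarithmic-transformation step with the change-of-variables back-transform, using simple reflection at the upper bound as a cruder substitute for the boundary kernel, on synthetic wind power data.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy wind power values on (0, 1] (fraction of rated capacity); real forecasts assumed.
rng = np.random.default_rng(0)
p = np.clip(rng.beta(0.8, 4.0, size=2000), 1e-4, 1.0)

# 1) Logarithmic transformation reduces the skewness of the wind power density and
#    maps the lower bound 0 to -infinity, so only the upper bound remains.
z = np.log(p)

# 2) KDE in the transformed scale, with reflection about the upper bound log(1) = 0
#    as a simple (non-boundary-kernel) way to reduce leakage past that bound.
z_reflected = np.concatenate([z, -z])
kde = gaussian_kde(z_reflected)

def density_original_scale(x):
    """Back-transform: f_P(x) = f_Z(log x) / x  (change-of-variables Jacobian)."""
    x = np.asarray(x, dtype=float)
    return 2.0 * kde(np.log(x)) / x        # factor 2 compensates for the reflection

grid = np.linspace(0.01, 1.0, 100)
pdf = density_original_scale(grid)
```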