A high-order SPH method by introducing inverse kernels
Le Fang
2017-02-01
The smoothed particle hydrodynamics (SPH) method is usually expected to be an efficient numerical tool for calculating fluid-structure interactions in compressors; however, an inherent restriction is its low-order consistency. A high-order SPH method based on inverse kernels, which is easy to implement yet efficient, is proposed to overcome this restriction. The basic inverse method and the special treatment near boundaries are introduced, together with a discussion of the combination of the Least-Squares (LS) and Moving-Least-Squares (MLS) methods. A detailed analysis in spectral space is then presented to aid understanding of the method. Finally, three test examples are shown to verify the method's behavior.
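As a minimal illustration of the consistency problem described in the abstract above, the sketch below contrasts a plain SPH interpolation with a zeroth-order (Shepard) kernel renormalisation near a boundary. This is a generic correction, not the paper's inverse-kernel construction; the kernel, particle spacing, and test field are illustrative assumptions.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel, normalisation 2/(3h)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_interpolate(x_eval, x_p, f_p, dx, h, correct=False):
    """Plain SPH sum; optionally renormalised (Shepard correction)."""
    w = cubic_spline_kernel(x_eval[:, None] - x_p[None, :], h)
    num = (w * f_p[None, :] * dx).sum(axis=1)
    if not correct:
        return num
    den = (w * dx).sum(axis=1)   # partition-of-unity denominator
    return num / den

# particles on [0, 1]; evaluate a linear field near the left boundary
dx = 0.02
x_p = np.arange(dx / 2, 1.0, dx)
f_p = 2.0 * x_p + 1.0
x_eval = np.array([0.01, 0.5])          # boundary point, interior point
raw = sph_interpolate(x_eval, x_p, f_p, dx, h=2 * dx)
fixed = sph_interpolate(x_eval, x_p, f_p, dx, h=2 * dx, correct=True)
```

Near the boundary the truncated kernel support makes the raw sum badly inconsistent even for a linear field, while the renormalised sum recovers most of the accuracy; in the interior both behave well.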
Duru, Kenneth; Virta, Kristoffer
2014-01-01
to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions
Duru, Kenneth
2014-12-01
© 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.
Energy stable and high-order-accurate finite difference methods on staggered grids
O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan
2017-10-01
For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
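The summation-by-parts (SBP) property that underpins the energy estimates described above can be verified directly for the classical second-order collocated operator; the staggered operators of the paper are more involved, so this is a standard textbook sketch, not the authors' construction.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Classical 2nd-order SBP first-derivative operator D and norm H."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h        # one-sided boundary rows
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])  # diagonal norm
    return D, H

n, h = 20, 0.1
D, H = sbp_first_derivative(n, h)
Q = H @ D
B = Q + Q.T   # SBP rule: H*D + (H*D)^T = diag(-1, 0, ..., 0, 1)
```

The matrix B mimics the boundary terms of integration by parts, which is exactly what makes the weak (penalty) enforcement of interface conditions provably stable.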
Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang
2018-04-01
The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet the exact memory kernel is hard to obtain, and calculations based on perturbative expansions are often employed. Using the spin-boson model as an example, we assess the convergence of high-order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High-order expansions of the memory kernels are obtained by extending our previous work on calculating perturbative expansions of open-system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high-order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of a slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher-order rate constants beyond Fermi's golden rule is investigated.
A high-order time-accurate interrogation method for time-resolved PIV
Lynch, Kyle; Scarano, Fulvio
2013-01-01
A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
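The core idea of the trajectory-based interrogation, fitting a least-squares polynomial to tracked positions and taking the velocity as its analytic derivative at the central time, can be sketched as follows. The frame count, polynomial order, and synthetic trajectory are illustrative; the pattern-tracking itself is not reproduced.

```python
import numpy as np

def trajectory_velocity(t, x, order=2):
    """LS polynomial fit to positions; velocity = derivative at mid-time."""
    c = np.polyfit(t, x, order)
    t0 = t[len(t) // 2]
    return np.polyval(np.polyder(c), t0)

# synthetic particle with nonlinear motion: x(t) = t^3 + v0*t
v0 = 1.2
t = np.linspace(0.0, 0.4, 9)             # 9 "frames"
x = t**3 + v0 * t
v_fit = trajectory_velocity(t, x, order=3)
v_pair = (x[5] - x[3]) / (t[5] - t[3])   # two-frame central estimate
v_true = 3.0 * t[4]**2 + v0              # exact velocity at mid-time
```

With a cubic trajectory the two-frame central difference carries an O(dt^2) truncation error, while the higher-order fit over the longer observation interval recovers the velocity essentially exactly, mirroring the error-reduction argument in the abstract.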
Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi
2018-06-01
Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
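One building block named above, the Rusanov (local Lax-Friedrichs) flux, is easy to sketch in one dimension for Burgers' equation. The full blended fifth-order scheme is not reproduced here; the grid, time step, and initial data are illustrative.

```python
import numpy as np

def rusanov_flux(ul, ur):
    """Rusanov flux for Burgers' equation, f(u) = u^2/2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    a = np.maximum(np.abs(ul), np.abs(ur))   # local max wave speed
    return 0.5 * (fl + fr) - 0.5 * a * (ur - ul)

def step(u, dx, dt):
    """First-order conservative update with periodic boundaries."""
    f = rusanov_flux(u, np.roll(u, -1))      # flux at interface i+1/2
    return u - dt / dx * (f - np.roll(f, 1))

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.5
dx, dt = 1.0 / n, 0.002                      # CFL ~ 0.5
mass0 = u.sum() * dx
for _ in range(50):
    u = step(u, dx, dt)
mass1 = u.sum() * dx
```

The conservative flux-difference form keeps the total "mass" exact to roundoff even as the dissipative flux damps extrema; blending such a robust flux with less dissipative ones is the balance the abstract describes.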
Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R
2005-01-01
Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom, both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible difference (0.1%) for the dose calculation in the fine resolution (0.5 mm voxels). But differences become significant when the voxel size increases. As for the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. As for the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm
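The voxel size effect distinguishing the differential and cumulative kernels can be sketched in one dimension with a toy exponential kernel (an assumption, not a clinical kernel): differences of the cumulative kernel give exact voxel averages by telescoping, while midpoint sampling of the differential kernel incurs a coarse-voxel quadrature error.

```python
import numpy as np

def kernel(x):
    """Toy dose-deposition kernel (illustrative assumption)."""
    return np.exp(-x)

def cumulative(x):
    """Analytic cumulative kernel K(x) = integral of k from 0 to x."""
    return 1.0 - np.exp(-x)

h = 1.0                                  # coarse 1 cm voxels
edges = np.arange(0.0, 8.0 + h, h)
centers = 0.5 * (edges[:-1] + edges[1:])

dk = kernel(centers) * h                                   # differential kernel, midpoint sample
ck = cumulative(edges[1:]) - cumulative(edges[:-1])        # cumulative kernel differences

exact = cumulative(edges[-1]) - cumulative(edges[0])       # exact deposited fraction
err_dk = abs(dk.sum() - exact)
err_ck = abs(ck.sum() - exact)
```

The cumulative form is exact regardless of voxel size, which is the mechanism behind the CK/CCK algorithms' robustness to coarse grids reported above.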
Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad
2015-01-01
Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an unsolved problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results based on two public databases, the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database, show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.
A high order solver for the unbounded Poisson equation
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
In mesh-free particle methods a high order solution to the unbounded Poisson equation is usually achieved by constructing regularised integration kernels for the Biot-Savart law. Here the singular point particles are regularised using smoothed particles to obtain an accurate solution with an order of convergence consistent with the moments conserved by the applied smoothing function. In the hybrid particle-mesh method of Hockney and Eastwood (HE) the particles are interpolated onto a regular mesh where the unbounded Poisson equation is solved by a discrete non-cyclic convolution of the mesh values and the integration kernel. In this work we show an implementation of high order regularised integration kernels in the HE algorithm for the unbounded Poisson equation to formally achieve an arbitrary high order convergence. We further present a quantitative study of the convergence rate to give further insight …
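The HE mesh step, a non-cyclic (zero-padded) convolution of mesh values with a tabulated kernel, can be sketched as below using an unregularised 2D point kernel. The paper's high-order regularised kernels are not reproduced, and the grid size is illustrative; the sketch verifies that the padded FFT convolution reproduces direct summation over sources.

```python
import numpy as np

def he_convolve(q, G):
    """Zero-padded FFT convolution (Hockney-Eastwood) on a 2D mesh."""
    n = q.shape[0]
    qf = np.fft.fft2(q, (2 * n, 2 * n))
    Gf = np.fft.fft2(G, (2 * n, 2 * n))   # kernel tabulated on the doubled grid
    return np.fft.ifft2(qf * Gf).real[:n, :n]

n, h = 16, 1.0 / 16
# free-space 2D kernel -log(r)/(2*pi) on the doubled grid, distances wrapped
ix = np.arange(2 * n)
ix = np.minimum(ix, 2 * n - ix)
X, Y = np.meshgrid(ix * h, ix * h, indexing="ij")
r = np.hypot(X, Y)
G = np.where(r > 0, -np.log(np.where(r > 0, r, 1.0)) / (2 * np.pi), 0.0)

q = np.zeros((n, n))
q[5, 7] = 1.0
q[10, 3] = -0.5
u_fft = he_convolve(q, G)

# direct summation over the two sources for comparison
u_dir = np.zeros((n, n))
I, J = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
for (i, j), qq in np.ndenumerate(q):
    if qq != 0.0:
        rr = np.hypot((I - i) * h, (J - j) * h)
        u_dir += qq * np.where(rr > 0,
                               -np.log(np.where(rr > 0, rr, 1.0)) / (2 * np.pi),
                               0.0)
```

Doubling the domain before the FFT is what turns the cyclic convolution into the free-space (non-cyclic) one the abstract refers to.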
Silva, Goncalo; Talon, Laurent; Ginzburg, Irina
2017-01-01
and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.
High order Poisson Solver for unbounded flows
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2015-01-01
This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green's function solution. The high order convergence was achieved by formulating mollified integration kernels that were derived from a filter regularisation of the solution field. The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution. This is achieved either by including the differential operator in the integration kernel … the equations of fluid mechanics as an example, but can be used in many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence. Using Gaussian smoothing …
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM
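As a sketch of the graph-kernel idea, a simple geometric random-walk kernel on the direct-product graph is shown below; this is not the APG or ASM kernel used in the paper, and the toy adjacency matrices stand in for dependency-parse graphs.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random-walk graph kernel via the direct-product graph.

    K(G1, G2) = sum over all walk lengths k of lam^k * (# common walks),
    computed in closed form as 1^T (I - lam * A_x)^{-1} 1.
    """
    Ax = np.kron(A1, A2)                  # adjacency of the product graph
    n = Ax.shape[0]
    x = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
    return x.sum()

# toy "parse" graphs (hypothetical input): a 3-node path and a triangle
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)

k_pp = random_walk_kernel(path, path)
k_tt = random_walk_kernel(tri, tri)
k_pt = random_walk_kernel(path, tri)
```

Because the kernel is an inner product of walk-count features, it is symmetric and positive semidefinite, so a Cauchy-Schwarz bound relates the cross and self similarities.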
Generation of high order modes
Ngcobo, S
2012-07-01
…with the location of the Laguerre polynomial zeros. The diffractive optical element is used to shape the TEM00 Gaussian beam and force the laser to operate on higher-order TEMp0 Laguerre-Gaussian modes or a high-order superposition of Laguerre-Gaussian modes…
Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs
Xiao, Lei; Wang, Jue; Heidrich, Wolfgang; Hirsch, Michael
2016-01-01
by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments
High order depletion sensitivity analysis
Naguib, K.; Adib, M.; Morcos, H.N.
2002-01-01
A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which perturbations of all orders were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for sensitivity analysis. The results of the calculation show that, in the case of EK-10 fuel (low burn-up), the first-order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations.
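The role of the expansion order can be sketched with a single-nuclide depletion model, where all sensitivity derivatives are known in closed form. This toy model and its numbers are illustrative, not the EK-10/MTR-20 calculation; it shows how a first-order sensitivity fails for a large perturbation while a fifth-order expansion recovers it.

```python
import numpy as np
from math import factorial

def depleted(N0, sigma, phi, t):
    """Single-nuclide depletion N(t) = N0 * exp(-sigma * phi * t)."""
    return N0 * np.exp(-sigma * phi * t)

def taylor_prediction(N0, sigma, phi, t, dsigma, order):
    """Predict N(t; sigma + dsigma) from sensitivities at sigma.

    For this model d^k N / d sigma^k = N * (-phi * t)^k, so the k-th
    order term is N * (-phi*t)^k * dsigma^k / k!.
    """
    N = depleted(N0, sigma, phi, t)
    return sum(N * (-phi * t) ** k * dsigma ** k / factorial(k)
               for k in range(order + 1))

N0, sigma, phi, t = 1.0, 10.0, 1e-2, 30.0   # illustrative units
dsigma = 2.0                                 # 20% cross-section perturbation
exact = depleted(N0, sigma + dsigma, phi, t)
err1 = abs(taylor_prediction(N0, sigma, phi, t, dsigma, 1) - exact) / exact
err5 = abs(taylor_prediction(N0, sigma, phi, t, dsigma, 5) - exact) / exact
```

For this perturbation the first-order estimate is off by tens of percent while the fifth-order one is accurate to better than 0.1%, qualitatively matching the low- versus high-burn-up behaviour reported above.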
High-order nonlinear susceptibilities of He
Liu, W.C.; Clark, C.W.
1996-01-01
High-order nonlinear optical response of noble gases to intense laser radiation is of considerable experimental interest, but it is difficult to measure or calculate accurately. The authors have begun a set of calculations of frequency-dependent nonlinear susceptibilities of He 1s, within the framework of Rayleigh-Schrödinger perturbation theory at lowest applicable order, with the goal of providing critically evaluated atomic data for modelling high harmonic generation processes. The atomic Hamiltonian is decomposed in terms of Hylleraas coordinates and spherical harmonics using the formalism of Ponte and Shakeshaft, and the hierarchy of inhomogeneous equations of perturbation theory is solved iteratively. A combination of Hylleraas and Frankowski basis functions is used; the compact Hylleraas basis provides a highly accurate representation of the ground state wavefunction, whereas the diffuse Frankowski basis functions efficiently reproduce the correct asymptotic structure of the perturbed orbitals.
A high order solver for the unbounded Poisson equation
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2013-01-01
A high order converging Poisson solver is presented, based on the Green's function solution to Poisson's equation subject to free-space boundary conditions. The high order convergence is achieved by formulating regularised integration kernels, analogous to a smoothing of the solution field. The method is extended to directly solve the derivatives of the solution to Poisson's equation. In this way differential operators such as the divergence or curl of the solution field can be solved to the same high order convergence without additional computational effort. The method is applied and validated, however not restricted, to the equations of fluid mechanics, and can be used in many applications to solve Poisson's equation on a rectangular unbounded domain.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
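The computational virtue exploited above, that a (multilevel) circulant matrix acts on a vector in O(n log n) time via the FFT, can be sketched in the one-level case; the kernel and grid below are illustrative assumptions.

```python
import numpy as np

def circulant_matvec(first_col):
    """Return a fast matvec for the circulant matrix with given first column."""
    col_hat = np.fft.fft(first_col)
    def matvec(v):
        # circulant multiply = circular convolution = pointwise product in Fourier space
        return np.fft.ifft(col_hat * np.fft.fft(v)).real
    return matvec

n = 64
idx = np.arange(n)
d = np.minimum(idx, n - idx)              # wrapped distances on a periodic grid
col = np.exp(-0.5 * (d / 4.0) ** 2)       # circulant approximation of a Gaussian kernel
fast = circulant_matvec(col)

# dense reference: C[i, j] = col[(i - j) mod n]
C = np.array([[col[(i - j) % n] for j in range(n)] for i in range(n)])
v = np.random.default_rng(0).standard_normal(n)
y_fast = fast(v)
y_dense = C @ v
```

The fast matvec agrees with the dense product to roundoff, which is what makes kernel-matrix approximation by (multilevel) circulant matrices quasi-linear in the number of data points.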
High-order passive photonic temporal integrators.
Asghari, Mohammad H; Wang, Chao; Yao, Jianping; Azaña, José
2010-04-15
We experimentally demonstrate, for the first time to our knowledge, an ultrafast photonic high-order (second-order) complex-field temporal integrator. The demonstrated device uses a single apodized uniform-period fiber Bragg grating (FBG), and it is based on a general FBG design approach for implementing optimized arbitrary-order photonic passive temporal integrators. Using this same design approach, we also fabricate and test a first-order passive temporal integrator offering an energetic-efficiency improvement of more than 1 order of magnitude as compared with previously reported passive first-order temporal integrators. Accurate and efficient first- and second-order temporal integrations of ultrafast complex-field optical signals (with temporal features as fast as approximately 2.5 ps) are successfully demonstrated using the fabricated FBG devices.
High-Order Hamilton's Principle and the Hamilton's Principle of High-Order Lagrangian Function
Zhao Hongxia; Ma Shanjun
2008-01-01
In this paper, based on the theorem of high-order velocity energy and the integration and variation principle, the high-order Hamilton's principle of general holonomic systems is given. Then third-order and fourth-order Lagrangian equations are obtained from the high-order Hamilton's principle. Finally, the Hamilton's principle of the high-order Lagrangian function is given.
High-order finite volume advection
Shaw, James
2018-01-01
The cubicFit advection scheme is limited to second-order convergence because it uses a polynomial reconstruction fitted to point values at cell centres. The highOrderFit advection scheme achieves higher than second order by calculating high-order moments over the mesh geometry.
High-Order Frequency-Locked Loops
Golestan, Saeed; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez
2017-01-01
In very recent years, some attempts at designing high-order frequency-locked loops (FLLs) have been made. Nevertheless, the advantages and disadvantages of these structures, particularly in comparison with a standard FLL and high-order phase-locked loops (PLLs), are rather unclear. This lack … study, and its small-signal modeling, stability analysis, and parameter tuning are presented. Finally, to gain insight into the advantages and disadvantages of high-order FLLs, a theoretical and experimental performance comparison between the designed second-order FLL and a standard FLL (first-order FLL …
Efficiency of High Order Spectral Element Methods on Petascale Architectures
Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.
2016-01-01
High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek's order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek's most flop-intense methods.
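The order-dependent kernels mentioned above are, at their core, small tensor contractions applied element by element. A sketch of a 2D derivative evaluated as a contraction follows, costing O(p^3) per element rather than O(p^4) for an assembled matrix; the node set, order, and test field are chosen for illustration only.

```python
import numpy as np

def lagrange_diff_matrix(x):
    """Dense differentiation matrix for Lagrange polynomials on nodes x."""
    n = len(x)
    w = np.array([1.0 / np.prod(x[i] - np.delete(x, i)) for i in range(n)])
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = w[j] / (w[i] * (x[i] - x[j]))
        D[i, i] = -D[i].sum()            # rows sum to zero (exact on constants)
    return D

p = 7                                     # polynomial order of the element
x = np.cos(np.pi * np.arange(p + 1) / p)  # Chebyshev-Lobatto points
D = lagrange_diff_matrix(x)

X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)
ux = np.einsum("ik,kj->ij", D, u)         # apply (D tensor I) as a contraction
```

The contraction reproduces d/dx of the smooth field to spectral accuracy at order 7, illustrating why raising the order buys accuracy at modest extra arithmetic per point.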
High-order beam optics - an overview
Heighway, E.A.
1989-01-01
Beam-transport codes have been around for as long as thirty years and high order codes, second-order at least, for close to twenty years. Before this period of design-code development, there was considerable high-order treatment, but it was almost entirely analytical. History has a way of repeating itself, and the current excitement in the field of high-order optics is based on the application of Lie algebra and the so-called differential algebra to beam-transport codes, both of which are highly analytical in foundation. The author will describe some of the main design tools available today, giving a little of their history, and will conclude by trying to convey some of the excitement in the field through a brief description of Lie and differential algebra. 30 refs., 7 figs., 1 tab
Bioinspired Nanocomposite Hydrogels with Highly Ordered Structures.
Zhao, Ziguang; Fang, Ruochen; Rong, Qinfeng; Liu, Mingjie
2017-12-01
In the human body, many soft tissues with hierarchically ordered composite structures, such as cartilage, skeletal muscle, the corneas, and blood vessels, exhibit highly anisotropic mechanical strength and functionality to adapt to complex environments. In artificial soft materials, hydrogels are analogous to these biological soft tissues due to their "soft and wet" properties, their biocompatibility, and their elastic performance. However, conventional hydrogel materials with unordered homogeneous structures inevitably lack high mechanical properties and anisotropic functional performances; thus, their further application is limited. Inspired by biological soft tissues with well-ordered structures, researchers have increasingly investigated highly ordered nanocomposite hydrogels as functional biological engineering soft materials with unique mechanical, optical, and biological properties. These hydrogels incorporate long-range ordered nanocomposite structures within hydrogel network matrixes. Here, the critical design criteria and the state-of-the-art fabrication strategies of nanocomposite hydrogels with highly ordered structures are systemically reviewed. Then, recent progress in applications in the fields of soft actuators, tissue engineering, and sensors is highlighted. The future development and prospective application of highly ordered nanocomposite hydrogels are also discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Eliminating high-order scattering effects in optical microbubble sizing.
Qiu, Huihe
2003-04-01
Measurements of bubble size and velocity in multiphase flows are important in much research and many industrial applications. It has been found that high-order refractions have great impact on microbubble sizing by use of phase-Doppler anemometry (PDA). The problem has been investigated, and a model of phase-size correlation, which also takes high-order refractions into consideration, is introduced to improve the accuracy of bubble sizing. Hence the model relaxes the assumption of a single-scattering mechanism in a conventional PDA system. The results of simulation based on this new model are compared with those based on a single-scattering-mechanism approach or a first-order approach. An optimization method for accurately sizing air bubbles in water has been suggested.
Practical aspects of spherical near-field antenna measurements using a high-order probe
Laitinen, Tommi; Pivnenko, Sergey; Nielsen, Jeppe Majlund
2006-01-01
Two practical aspects related to accurate antenna pattern characterization by probe-corrected spherical near-field antenna measurements with a high-order probe are examined. First, the requirements set by an arbitrary high-order probe on the scanning technique are pointed out. Secondly, a channel balance calibration procedure for a high-order dual-port probe with non-identical ports is presented, and the requirements set by this procedure for the probe are discussed.
High order harmonic generation from plasma mirror
Thaury, C.
2008-09-01
When an intense laser beam is focused on a solid target, its surface is rapidly ionized and forms a dense plasma that reflects the incident field. For laser intensities above a few 10^15 W/cm^2, high order harmonics of the laser frequency, associated in the time domain with a train of attosecond pulses (1 as = 10^-18 s), can be generated upon this reflection. Because such a plasma mirror can be used with arbitrarily high laser intensities, this process should eventually lead to the production of very intense pulses in the X-ray domain. In this thesis, we demonstrate that for laser intensities around 10^19 W/cm^2, two mechanisms can contribute to the generation of high order harmonics: coherent wake emission and relativistic emission. These two mechanisms are studied both theoretically and experimentally. In particular, we show that, thanks to very different properties, the harmonics generated by these two processes can be unambiguously distinguished experimentally. We then investigate the phase properties of the harmonics, in the spectral and in the spatial domain. Finally, we illustrate how to exploit the coherence of the generation mechanisms to get information on the dynamics of the plasma electrons. (author)
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). These are sensitive to contaminated data, even when bounded positive definite kernels are used. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...
HOKF: High Order Kalman Filter for Epilepsy Forecasting Modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2017-08-01
Epilepsy forecasting has been extensively studied using high-order time series obtained from scalp-recorded electroencephalography (EEG). An accurate seizure prediction system would not only help significantly improve patients' quality of life, but would also facilitate new therapeutic strategies to manage epilepsy. This paper thus proposes an improved Kalman Filter (KF) algorithm to mine seizure forecasts from neural activity by modeling three properties of the high-order EEG time series: noise, temporal smoothness, and tensor structure. The proposed High-Order Kalman Filter (HOKF) is an extension of the standard Kalman filter, for which higher-order modeling is limited. The efficient dynamics of the HOKF system preserve the tensor structure of the observations and latent states. As such, the proposed method offers two main advantages: (i) effectiveness: HOKF yields hidden variables that capture major evolving trends suitable for predicting neural activity, even in the presence of missing values; and (ii) scalability: the wall-clock time of HOKF is linear with respect to the number of time slices of the sequence. The HOKF algorithm is examined in terms of its effectiveness and scalability by conducting forecasting and scalability experiments with a real epilepsy EEG dataset. The results of the simulation demonstrate the superiority of the proposed method over the original Kalman Filter and other existing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
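As a point of reference, the standard Kalman filter that HOKF generalizes can be sketched in a few lines. The sketch below is a minimal scalar random-walk filter with illustrative noise parameters, not the tensor-structured HOKF itself:

```python
import numpy as np

# Minimal scalar Kalman filter -- the standard KF that HOKF generalizes to
# tensor-valued states. Model: x_t = x_{t-1} + w_t, y_t = x_t + v_t.
# All noise parameters (q, r) are illustrative.
def kalman_filter(y, q=0.01, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    out = []
    for obs in y:
        p = p + q                      # predict (random-walk state)
        k = p / (p + r)                # Kalman gain
        x = x + k * (obs - x)          # update with the new observation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 200))       # slowly drifting latent state
noisy = truth + rng.normal(0, 0.5, 200)          # noisy observations
smoothed = kalman_filter(noisy)
print(np.mean((smoothed - truth)**2) < np.mean((noisy - truth)**2))
```

HOKF replaces the scalar state and gain above with tensor-valued counterparts so that the multiway structure of EEG recordings is preserved rather than flattened.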
High order corrections to the renormalon
Faleev, S.V.
1997-01-01
High order corrections to the renormalon are considered. Each new type of insertion into the renormalon chain of graphs generates a correction to the asymptotics of perturbation theory of order 1. However, this series of corrections to the asymptotics is not itself asymptotic (i.e., the m-th correction does not grow like m!). The summation of these corrections for the UV renormalon may change the asymptotics by a factor N^δ. For the traditional IR renormalon the m-th correction diverges like (-2)^m. However, this divergence has no infrared origin and may be removed by a proper redefinition of the IR renormalon. On the other hand, for IR renormalons in hadronic event shapes one should naturally expect these multiloop contributions to decrease like (-2)^-m. Some problems expected upon reaching the best accuracy of perturbative QCD are also discussed. (orig.)
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation would work for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
High-order space charge effects using automatic differentiation
Reusch, Michael F.; Bruhwiler, David L.
1997-01-01
The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach.
High-order space charge effects using automatic differentiation
Reusch, M.F.; Bruhwiler, D.L. (Computer Accelerator Physics Conference, Williamsburg, Virginia, 1996)
1997-01-01
The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series times a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free-space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.
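The operator-overloading idea behind the code upgrade can be illustrated with first-order dual numbers; the paper builds full high-order Taylor maps in Fortran 90, and this Python sketch is only a minimal analogue of the same trick:

```python
# Forward-mode automatic differentiation via operator overloading -- the same
# mechanism the paper uses in Fortran 90, here truncated to first order.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def track(x):
    """A toy 'lattice map': the same code tracks a number or a Dual."""
    return 3.0 * x * x + 2.0 * x + 1.0

print(track(2.0))             # plain tracking: 17.0
d = track(Dual(2.0, 1.0))     # tracking with derivative information carried along
print(d.val, d.der)           # 17.0 14.0  (value and d/dx of 3x^2 + 2x + 1 at x = 2)
```

Extending `Dual` to carry truncated power series in six phase-space variables yields exactly the high-order Taylor maps described in the abstract.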
High-order nonuniformly correlated beams
Wu, Dan; Wang, Fei; Cai, Yangjian
2018-02-01
We have introduced a class of partially coherent beams with spatially varying correlations named high-order nonuniformly correlated (HNUC) beams, as an extension of conventional nonuniformly correlated (NUC) beams. Such beams carry a new parameter (mode order) which is used to tailor the spatial coherence properties. The behavior of the spectral density of HNUC beams on propagation has been investigated through numerical examples with the help of discrete mode decomposition and the fast Fourier transform (FFT) algorithm. Our results reveal that, by selecting the mode order appropriately, sharper intensity maxima can be achieved at a certain propagation distance than with NUC beams, and that the lateral shift of the intensity maxima on propagation is closely related to the mode order. Furthermore, analytical expressions for the r.m.s. width and the propagation factor of HNUC beams on free-space propagation are derived by means of the Wigner distribution function. The influence of the initial beam parameters on the evolution of the r.m.s. width and the propagation factor, and the relation between the r.m.s. width and the occurrence of the sharpened intensity maxima on propagation, have been studied and discussed in detail.
High Order Semi-Lagrangian Advection Scheme
Malaga, Carlos; Mandujano, Francisco; Becerra, Julian
2014-11-01
In most fluid phenomena, advection plays an important role. A numerical scheme capable of making quantitative predictions and simulations must correctly compute the advection terms appearing in the equations governing fluid flow. Here we present a high order forward semi-Lagrangian numerical scheme specifically tailored to compute material derivatives. The scheme relies on the geometrical interpretation of material derivatives to compute the time evolution of fields on grids that deform with the material fluid domain, an interpolating procedure of arbitrary order that preserves the moments of the interpolated distributions, and a nonlinear mapping strategy to perform interpolations between undeformed and deformed grids. Additionally, a discontinuity criterion was implemented to deal with discontinuous fields and shocks. Tests of pure advection, shock formation and nonlinear phenomena are presented to show the performance and convergence of the scheme. The high computational cost is considerably reduced when the scheme is implemented on the massively parallel architectures found in graphics cards. The authors acknowledge funding from Fondo Sectorial CONACYT-SENER Grant Number 42536 (DGAJ-SPI-34-170412-217).
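The semi-Lagrangian idea can be sketched in one dimension. The sketch below uses the simpler backward, linearly interpolated variant (the paper's scheme is forward and high order, on deforming grids); it illustrates the key property that the scheme stays stable at CFL numbers above one:

```python
import numpy as np

# 1-D semi-Lagrangian advection sketch: backward form with linear interpolation
# on a periodic domain (the abstract's scheme is a forward, high-order variant).
def advect(u, c, dx, dt, steps):
    n = len(u)
    x = np.arange(n) * dx
    for _ in range(steps):
        xd = (x - c * dt) % (n * dx)          # departure points, periodic wrap
        u = np.interp(xd, x, u, period=n * dx)
    return u

n = 200
dx = 1.0 / n
xx = np.arange(n) * dx
u0 = np.exp(-50 * (xx - 0.5) ** 2)            # smooth pulse
# CFL number c*dt/dx = 2.5 > 1: semi-Lagrangian schemes remain stable.
u1 = advect(u0, 1.0, dx, dt=2.5 * dx, steps=80)   # exactly one full period
print(np.max(np.abs(u1 - u0)) < 0.05)         # pulse returns close to its start
```

With linear interpolation the scheme is slightly diffusive; the high-order, moment-preserving interpolation described in the abstract is what removes this artificial smoothing.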
High order harmonic generation from plasma mirrors
George, H.
2010-01-01
When an intense laser beam is focused on a solid target, the target's surface is rapidly ionized and forms a dense plasma that reflects the incident field. For laser intensities above a few 10^15 W/cm^2, high order harmonics of the laser frequency, associated in the time domain with a train of attosecond pulses (1 as = 10^-18 s), can be generated upon this reflection. In this thesis, we developed numerical tools to reveal original aspects of the harmonic generation mechanisms in three different interaction regimes: coherent wake emission, relativistic emission and resonant absorption. In particular, we established the role of these mechanisms when the target is a very thin foil (thickness of the order of 100 nm). Then we study experimentally the spectral, spatial and coherence properties of the emitted light. We illustrate how to exploit these measurements to get information on the plasma mirror dynamics on the femtosecond and attosecond time scales. Last, we propose a technique for the single-shot complete characterization of the temporal structure of the harmonic light emission from the laser-plasma mirror interaction. (author)
Global Monte Carlo Simulation with High Order Polynomial Expansions
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-01-01
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
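The core FET idea, tallying expansion coefficients as sample means over a basis, can be sketched with global Legendre polynomials. The sampled "fission sites" below are an illustrative stand-in for the random walk of a real Monte Carlo code:

```python
import numpy as np

# Functional expansion tally (FET) sketch: estimate a source density on [-1, 1]
# by tallying Legendre moments of sampled points (illustrative, not a production MC code).
rng = np.random.default_rng(1)
sites = np.clip(rng.normal(0.2, 0.3, 100000), -1, 1)      # stand-in for fission sites

P = np.polynomial.legendre.Legendre.basis                  # P(n) is the n-th Legendre polynomial
orders = range(6)
# Density coefficients: c_n = (2n + 1)/2 * E[P_n(x)], estimated as sample means.
coeffs = [(2*n + 1) / 2 * P(n)(sites).mean() for n in orders]

x = np.linspace(-1, 1, 201)
density = sum(c * P(n)(x) for n, c in zip(orders, coeffs))
print(coeffs[0])    # 0.5: the zeroth ("flat") coefficient is fixed by normalization
```

The zeroth coefficient reduces to the conventional histogram normalization (the "flat mode" of the abstract), while the higher modes carry shape information that can be sampled for the next fission generation.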
High order harmonic generation in rare gases
Budil, Kimberly Susan [Univ. of California, Davis, CA (United States)]
1994-05-01
The process of high order harmonic generation in atomic gases has shown great promise as a method of generating extremely short wavelength radiation, extending far into the extreme ultraviolet (XUV). The process is conceptually simple. A very intense laser pulse (I ~ 10^13-10^14 W/cm^2) is focused into a dense (~10^17 particles/cm^3) atomic medium, causing the atoms to become polarized. These atomic dipoles are then coherently driven by the laser field and begin to radiate at odd harmonics of the laser field. This dissertation is a study of both the physical mechanism of harmonic generation and its development as a source of coherent XUV radiation. Recently, a semiclassical theory has been proposed which provides a simple, intuitive description of harmonic generation. In this picture the process is treated in two steps. The atom ionizes via tunneling, after which its classical motion in the laser field is studied. Electron trajectories which return to the vicinity of the nucleus may recombine and emit a harmonic photon, while those which do not return will ionize. An experiment was performed to test the validity of this model wherein the trajectory of the electron as it orbits the nucleus or ion core is perturbed by driving the process with elliptically, rather than linearly, polarized laser radiation. The semiclassical theory predicts a rapid turn-off of harmonic production as the ellipticity of the driving field is increased. This decrease in harmonic production is observed experimentally, and a simple quantum mechanical theory is used to model the data. The second major focus of this work was the development of the harmonic "source". A series of experiments were performed examining the spatial profiles of the harmonics. The quality of the spatial profile is crucial if the harmonics are to be used as the source for experiments, particularly if they must be refocused.
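The semiclassical two-step picture can be reproduced numerically. The sketch below uses illustrative field parameters in atomic units and recovers the classical result that the maximum return energy of the electron is about 3.17 times the ponderomotive energy Up:

```python
import numpy as np

# Semiclassical-model sketch: an electron is born at rest at time t0 in the
# field E(t) = E0*cos(w*t) and followed classically until it returns to the ion.
# Field strength E0 and frequency w are illustrative (atomic units).
E0, w = 0.05, 0.057
Up = E0**2 / (4 * w**2)                     # ponderomotive energy

def return_energy(t0, n=4000):
    t = np.linspace(t0 + 1e-6, t0 + 2 * (2*np.pi/w), n)
    # Closed-form trajectory for birth at rest at the origin.
    x = (E0/w**2) * (np.cos(w*t) - np.cos(w*t0)) + (E0/w) * np.sin(w*t0) * (t - t0)
    cross = np.nonzero(np.sign(x[1:]) != np.sign(x[:-1]))[0]
    if len(cross) == 0:
        return 0.0                          # this trajectory never returns
    tr = t[cross[0]]                        # first return to the ion core
    v = -(E0/w) * (np.sin(w*tr) - np.sin(w*t0))
    return 0.5 * v**2

births = np.linspace(0.0, np.pi/w, 400)     # scan birth times over half a cycle
cutoff = max(return_energy(t0) for t0 in births) / Up
print(cutoff)                               # ~3.17: the classical cutoff law
```

This maximum return energy is what sets the well-known harmonic cutoff at approximately Ip + 3.17 Up.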
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by entropy instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
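The entropy ranking at the heart of KECA can be sketched directly from a Gaussian Gram matrix; the data and kernel width below are illustrative:

```python
import numpy as np

# KECA sketch: rank kernel eigenpairs by their contribution to the Renyi
# entropy estimate, lambda_i * (1' e_i)^2, instead of by eigenvalue alone
# as in kernel PCA. Data and kernel width are illustrative.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

sigma = 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma**2))                 # Gaussian Gram matrix

lam, E = np.linalg.eigh(K)                       # eigenvalues in ascending order
entropy = lam * E.sum(axis=0) ** 2               # entropy contribution per eigenpair
keca_order = np.argsort(entropy)[::-1]           # KECA ranking (by entropy)
kpca_order = np.argsort(lam)[::-1]               # kernel-PCA ranking (by variance)
print(keca_order[:2], kpca_order[:2])            # the two rankings can differ
```

OKECA goes one step further than this sorting: it rotates the eigenbasis to concentrate the entropy contributions into as few components as possible.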
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our analysis, showing that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled realised kernels…
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Validation of Born Traveltime Kernels
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.
Identification of Fusarium damaged wheat kernels using image analysis
Ondřej Jirsa
2011-01-01
Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
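A minimal stand-in for the discrimination step is a two-class Fisher discriminant on colour descriptors. The feature means below are invented for illustration; the study itself used linear discriminant analysis on measured R, G, B and hue descriptors:

```python
import numpy as np

# Fisher-discriminant sketch on synthetic (R, G, B, hue) descriptors.
# The class means and spread below are illustrative stand-ins, not measured data.
rng = np.random.default_rng(2)
healthy = rng.normal([0.55, 0.45, 0.30, 0.09], 0.05, (200, 4))
damaged = rng.normal([0.75, 0.60, 0.45, 0.13], 0.05, (200, 4))   # paler, pinkish kernels

mu_h, mu_d = healthy.mean(0), damaged.mean(0)
Sw = np.cov(healthy.T) + np.cov(damaged.T)       # pooled within-class scatter
w = np.linalg.solve(Sw, mu_d - mu_h)             # Fisher discriminant direction
threshold = 0.5 * ((healthy @ w).mean() + (damaged @ w).mean())

correct = np.sum(healthy @ w <= threshold) + np.sum(damaged @ w > threshold)
accuracy = correct / 400
print(accuracy > 0.9)                            # well-separated synthetic classes
```

On real kernel images the classes overlap far more, which is why the study's RGBH model reached 85 % rather than near-perfect accuracy.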
High order QED corrections in Z physics
Marck, S.C. van der.
1991-01-01
In this thesis a number of calculations of higher order QED corrections are presented, all applying to the standard LEP/SLC processes e⁺e⁻ → f f̄, where f stands for any fermion. In cases where f ≠ e⁻, ν_e, the above process is only possible via annihilation of the incoming electron-positron pair. At LEP/SLC this mainly occurs via the production and subsequent decay of a Z boson, i.e. the cross section is heavily dominated by the Z resonance. These processes and the corrections to them, treated in a semi-analytical way, are discussed (ch. 2). In the case f = e⁻ (Bhabha scattering) the process can also occur via the exchange of a virtual photon in the t-channel. Since the latter contribution is dominant at small scattering angles, one has to exclude these angles if one is interested in Z physics. Having excluded that region, one has to recalculate all QED corrections (ch. 3). The techniques introduced there enable the calculation of the difference between forward and backward scattering, the forward-backward asymmetry, for the cases f ≠ e⁻, ν_e (ch. 4). At small scattering angles, where Bhabha scattering is dominated by photon exchange in the t-channel, this process is used in experiments to determine the luminosity of the e⁺e⁻ accelerator. Hence an accurate theoretical description of this process at small angles is of vital interest for the overall normalization of all measurements at LEP/SLC. Ch. 5 gives such a description in a semi-analytical way. The last two chapters discuss Monte Carlo techniques that are used for the cases f ≠ e⁻, ν_e. Ch. 6 describes the simulation of two-photon bremsstrahlung, which is a second order QED correction effect. The results are compared with those of the semi-analytical treatment in ch. 2. Finally, ch. 7 reviews several techniques that have been used to simulate higher order QED corrections for the cases f ≠ e⁻, ν_e. (author). 132 refs.; 10 figs.; 16 tabs
Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs
Xiao, Lei
2016-09-16
Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
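The kernel itself is one line: K(x, y) = max(ρ − ||x − y||₁, 0), with ρ the truncation radius. A sketch with an illustrative choice of ρ:

```python
import numpy as np

# Truncated-distance (TL1) kernel sketch: K(x, y) = max(rho - ||x - y||_1, 0).
# The truncation radius rho = 2.0 is an illustrative choice.
def tl1_kernel(X, Y, rho=2.0):
    d = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)    # pairwise L1 distances
    return np.maximum(rho - d, 0.0)

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
K = tl1_kernel(X, X)
print(K[0, 1], K[0, 2])    # 1.0 0.0 -- distant points fall outside the truncation ball
```

Because the kernel vanishes beyond an L1 ball of radius ρ, the Gram matrix is sparse for spread-out data, and a matrix built this way can be handed to solvers that accept precomputed kernels, which is how the abstract suggests using TL1 in existing toolboxes.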
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe
2013-05-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div-and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
High-Order Calderón Preconditioned Time Domain Integral Equation Solvers
Valdes, Felipe; Ghaffari-Miab, Mohsen; Andriulli, Francesco P.; Cools, Kristof; Michielssen,
2013-01-01
Two high-order accurate Calderón preconditioned time domain electric field integral equation (TDEFIE) solvers are presented. In contrast to existing Calderón preconditioned time domain solvers, the proposed preconditioner allows for high-order surface representations and current expansions by using a novel set of fully-localized high-order div-and quasi curl-conforming (DQCC) basis functions. Numerical results demonstrate that the linear systems of equations obtained using the proposed basis functions converge rapidly, regardless of the mesh density and of the order of the current expansion. © 1963-2012 IEEE.
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by atoms and bonds.
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capability is improved.
High-order accurate numerical algorithm for three-dimensional transport prediction
Pepper, D W [Savannah River Lab., Aiken, SC]; Baker, A J
1980-01-01
The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First-estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. The predicted ground-level air concentration 56 km from the release point is within 38% of the experimentally measured value.
Efficiency of High-Order Accurate Difference Schemes for the Korteweg-de Vries Equation
Kanyuta Poochinapan
2014-01-01
Two numerical models for solving the KdV equation are proposed. Numerical tools, compact fourth-order and standard fourth-order finite difference techniques, are applied to the KdV equation. The fundamental conservative properties of the equation are preserved by the finite difference methods. Linear stability of the two methods is analyzed by von Neumann analysis. The new methods give second- and fourth-order accuracy in time and space, respectively. The numerical experiments show that the proposed methods improve the accuracy of the solution significantly.
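The claimed fourth-order spatial accuracy can be checked with a standard grid-refinement test. The sketch below verifies the classical (non-compact) fourth-order central difference, the building block of the standard scheme, on a smooth periodic function:

```python
import numpy as np

# Grid-refinement check of the standard fourth-order central difference
# u'(x) ~ (-u_{i+2} + 8u_{i+1} - 8u_{i-1} + u_{i-2}) / (12h) on a periodic grid.
def d1_fourth(u, h):
    return (-np.roll(u, -2) + 8*np.roll(u, -1) - 8*np.roll(u, 1) + np.roll(u, 2)) / (12*h)

errs = []
for n in (32, 64, 128):
    x = np.linspace(0, 2*np.pi, n, endpoint=False)
    h = x[1] - x[0]
    errs.append(np.max(np.abs(d1_fourth(np.sin(x), h) - np.cos(x))))

rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
print([round(r, 2) for r in rates])    # both rates ~4: fourth-order convergence
```

A compact scheme reaches the same order with a narrower stencil at the cost of solving a tridiagonal system per derivative evaluation, which is the trade-off between the two models in the abstract.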
High-Order Accurate Solutions to the Helmholtz Equation in the Presence of Boundary Singularities
2015-03-31
...(2.52) by finite differences, see Section 3.1.1. Building the matrix QH of (3.22) that enters into the BEP (3.4) (or, in the inhomogeneous case, (3.5)) ... necessary for this study, but it may in some cases be convenient to allow the power series to have a more general form in which the exponent of r is ... from this error estimate the tolerance σ can be set. The numerical case study of Section 5.2.2 provides evidence that taking σ equal to the overall...
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
An analysis of 1-D smoothed particle hydrodynamics kernels
Fulk, D.A.; Quinn, D.W.
1996-01-01
In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit, demonstrating the general usefulness of the measure of merit and of the individual kernels. In general, it was found that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
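As a concrete instance of a bell-shaped kernel, here is a sketch of Monaghan's 1-D cubic spline kernel with a numerical check of its unit normalization (the zeroth-order consistency condition). The normalization constant 2/(3h) is the standard 1-D value; the 20 specific kernels of the paper are not reproduced here.

```python
import numpy as np

def cubic_spline_kernel_1d(r, h):
    """Bell-shaped cubic spline SPH kernel in 1-D with support radius 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# the kernel must integrate to one for zeroth-order consistency
h = 0.5
x = np.linspace(-2 * h, 2 * h, 100001)
dx = x[1] - x[0]
integral = np.sum(cubic_spline_kernel_1d(x, h)) * dx
print(abs(integral - 1.0) < 1e-6)
```

Checks of this kind, normalization plus moment conditions, are the starting point of the measures of merit the abstract describes.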
High-order computer-assisted estimates of topological entropy
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincaré maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best of the authors' knowledge yield the largest such estimates published so far.
Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR
Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee
2011-01-01
The TRISO-coated fuel particle for the high-temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by chemical analysis, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by the difference in geometric shape between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative that measures the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact to verify the result of the X-ray CT application.
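The conversion described above (kernel count or volume from the CT reconstruction, then weight via density) is simple to illustrate. All numbers below are hypothetical, chosen only to show the arithmetic; they are not values from the paper.

```python
import math

# Hypothetical inputs, for illustration only
rho_zro2 = 5.68e-3      # g/mm^3, approximate density of ZrO2
r = 0.25                # mm, assumed kernel radius
n_kernels = 10000       # count obtained from the 3-D CT reconstruction

# weight = (number of kernels) x (volume of one kernel) x density
volume = n_kernels * (4.0 / 3.0) * math.pi * r**3   # mm^3
weight = volume * rho_zro2                           # grams
print(round(weight, 2))
```

The same arithmetic works in reverse: dividing a CT-measured total kernel volume by the single-kernel volume recovers the count.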
Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger
2009-01-01
Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
Adaptive metric kernel regression
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
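A minimal sketch of the idea: a Nadaraya-Watson kernel smoother whose input metric rescales each dimension. In the paper the metric is learned by minimising a cross-validation estimate of the generalisation error; here it is simply fixed by hand to illustrate the effect.

```python
import numpy as np

def nw_predict(X, y, x0, metric):
    """Nadaraya-Watson kernel smoother with a diagonal input metric.
    `metric` rescales each input dimension before distances are taken;
    tuning it is what the adaptive method automates."""
    d2 = (((X - x0) * metric) ** 2).sum(axis=1)  # scaled squared distances
    w = np.exp(-0.5 * d2)                        # Gaussian kernel weights
    return float(w @ y / w.sum())

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(3.0 * X[:, 0])        # only the first input dimension matters
x0 = np.array([0.3, 0.0])
# down-weighting the irrelevant second dimension mimics an adapted metric
pred = nw_predict(X, y, x0, metric=np.array([10.0, 0.1]))
print(abs(pred - np.sin(0.9)) < 0.15)
```

With the irrelevant dimension down-weighted, all samples contribute through their relevant coordinate alone, which is exactly the variable-selection behaviour the abstract describes.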
Adaptive Metric Kernel Regression
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
Du, Qiang; Yang, Jiang
2017-01-01
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
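The key structural fact exploited above, that a radially symmetric nonlocal operator is diagonal in Fourier space, can be verified on a toy 1-D periodic example. The code below compares direct quadrature of a nonlocal diffusion operator with its FFT evaluation; it does not implement the paper's hybrid series/ODE algorithm for the Fourier symbols.

```python
import numpy as np

n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
h = L / n
# radially symmetric (even, periodic) nonlocal kernel, an assumed toy choice
gamma = np.exp(-np.minimum(x, L - x) ** 2)
u = np.sin(3 * x)

# direct quadrature of L u(x_i) = sum_j gamma(x_i - x_j) (u_j - u_i) h
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
Lu_direct = (gamma[idx] * (u[None, :] - u[:, None])).sum(axis=1) * h

# the same operator is diagonal in Fourier space: circular convolution
# via FFT, minus the local term u(x) times the kernel's integral
Lu_fourier = (np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(gamma))) * h
              - u * gamma.sum() * h)
print(np.max(np.abs(Lu_direct - Lu_fourier)) < 1e-10)
```

In the paper the hard part is evaluating such Fourier symbols accurately for singular fractional kernels, where the plain FFT of sampled kernel values used here would not suffice.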
Kernel methods for deep learning
Cho, Youngmin
2012-01-01
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
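One member of this family that can be written in closed form is the degree-1 arc-cosine kernel, which corresponds to an infinitely wide single layer with rectified-linear activations. The formula follows Cho and Saul's published expression; treat the sketch as illustrative.

```python
import numpy as np

def arccos_kernel_order1(x, y):
    """Arc-cosine kernel of degree 1:
    k(x, y) = (1/pi) |x| |y| (sin t + (pi - t) cos t), t = angle(x, y).
    It mimics an infinitely wide layer of ReLU-like threshold units."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return nx * ny * (np.sin(t) + (np.pi - t) * cos_t) / np.pi

x = np.array([1.0, 2.0, 2.0])
# for degree 1 the kernel reproduces the squared norm on the diagonal
print(abs(arccos_kernel_order1(x, x) - 9.0) < 1e-9)
```

Composition of such kernels, k(k(x, y)), is what yields the "deep" members of the family mentioned in the abstract.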
Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Methods for compressible fluid simulation on GPUs using high-order finite differences
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
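The memory intensity of high-order stencils is easy to see from the stencil itself: a sixth-order first derivative touches seven points per axis. A NumPy sketch of that stencil follows (the generic central-difference coefficients, not the authors' GPU kernels).

```python
import numpy as np

# Sixth-order central-difference coefficients for the first derivative;
# seven points per axis is what drives the wide stencils in the paper.
C = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / 60.0

def d1_sixth_order(u, h):
    """Apply the 7-point sixth-order stencil on a periodic grid."""
    return sum(c * np.roll(u, 3 - k) for k, c in enumerate(C)) / h

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
err = np.max(np.abs(d1_sixth_order(np.sin(x), 2 * np.pi / n) - np.cos(x)))
print(err < 1e-7)
```

Each output point reads seven inputs per axis, so neighbouring outputs share most of their reads, and that reuse is what cache blocking on the GPU tries to capture.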
Ji, Xing; Zhao, Fengxiang; Shyy, Wei; Xu, Kun
2018-03-01
Most high order computational fluid dynamics (CFD) methods for compressible flows are based on Riemann solver for the flux evaluation and Runge-Kutta (RK) time stepping technique for temporal accuracy. The advantage of this kind of space-time separation approach is the easy implementation and stability enhancement by introducing more middle stages. However, the nth-order time accuracy needs no less than n stages for the RK method, which can be very time and memory consuming due to the reconstruction at each stage for a high order method. On the other hand, the multi-stage multi-derivative (MSMD) method can be used to achieve the same order of time accuracy using less middle stages with the use of the time derivatives of the flux function. For traditional Riemann solver based CFD methods, the lack of time derivatives in the flux function prevents its direct implementation of the MSMD method. However, the gas kinetic scheme (GKS) provides such a time accurate evolution model. By combining the second-order or third-order GKS flux functions with the MSMD technique, a family of high order gas kinetic methods can be constructed. As an extension of the previous 2-stage 4th-order GKS, the 5th-order schemes with 2 and 3 stages will be developed in this paper. Based on the same 5th-order WENO reconstruction, the performance of gas kinetic schemes from the 2nd- to the 5th-order time accurate methods will be evaluated. The results show that the 5th-order scheme can achieve the theoretical order of accuracy for the Euler equations, and present accurate Navier-Stokes solutions as well due to the coupling of inviscid and viscous terms in the GKS formulation. In comparison with Riemann solver based 5th-order RK method, the high order GKS has advantages in terms of efficiency, accuracy, and robustness, for all test cases. The 4th- and 5th-order GKS have the same robustness as the 2nd-order scheme for the capturing of discontinuous solutions. The current high order MSMD GKS is a
Computational Aero-Acoustic Using High-order Finite-Difference Schemes
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2007-01-01
In this paper, a high-order technique to accurately predict flow-generated noise is introduced. The technique consists of solving the viscous incompressible flow equations and inviscid acoustic equations using an incompressible/compressible splitting technique. The incompressible flow equations are solved using the in-house flow solver EllipSys2D/3D, which is a second-order finite volume code. The acoustic solution is found by solving the acoustic equations using high-order finite difference schemes. The incompressible flow equations and the acoustic equations are solved at the same time levels...
Phase matching of high-order harmonics in a semi-infinite gas cell
Steingrube, Daniel S.; Vockerodt, Tobias; Schulz, Emilia; Morgner, Uwe; Kovacev, Milutin
2009-01-01
Phase matching of high-order harmonic generation is investigated experimentally for various parameters in a semi-infinite gas-cell (SIGC) geometry. The optimized harmonic yield is identified using two different noble gases (Xe and He) and its parameter dependence is studied in a systematic way. Besides the straightforward setup of the SIGC, this geometry promises a high photon flux due to a large interaction region. Moreover, since the experimental parameters within this cell are known accurately, direct comparison to simulations is performed. Spectral splitting and blueshift of high-order harmonics are observed.
Efficient Unsteady Flow Visualization with High-Order Access Dependencies
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
2016-04-19
We present a novel model based on high-order access dependencies for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability of data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
Viscosity kernel of molecular fluids
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3-6 atomic diameters, which means...
Generation of intense high-order vortex harmonics.
Zhang, Xiaomei; Shen, Baifei; Shi, Yin; Wang, Xiaofeng; Zhang, Lingang; Wang, Wenpeng; Xu, Jiancai; Yi, Longqiong; Xu, Zhizhan
2015-05-01
This Letter presents for the first time a scheme to generate intense high-order optical vortices that carry orbital angular momentum in the extreme ultraviolet region based on relativistic harmonics from the surface of a solid target. In three-dimensional particle-in-cell simulations, high-order harmonics of the high-order vortex mode are generated in both reflected and transmitted light beams when a linearly polarized Laguerre-Gaussian laser pulse impinges on a solid foil. The azimuthal mode of the harmonics scales with its order. The intensity of the high-order vortex harmonics is close to the relativistic region, with the pulse duration down to the attosecond scale. The obtained intense vortex beam possesses the combined properties of a fine transversal structure due to the high-order mode and a fine longitudinal structure due to the short wavelength of the high-order harmonics. In addition to applications in high-resolution detection on both spatial and temporal scales, it also presents new opportunities in fields where intense vortices are required, such as the inner-shell ionization process and high-energy twisted-photon generation by Thomson scattering of such an intense vortex beam off relativistic electrons.
Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions
Gordon, Dan; Gordon, Rachel; Turkel, Eli
2015-09-01
We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
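A minimal sketch of the "vary the window by point of estimation" (balloon) approach: the bandwidth at x is set to the distance of the k-th nearest sample. The kernel choice and the value of k are illustrative assumptions, not the paper's.

```python
import numpy as np

def knn_kde(data, x, k):
    """Nearest-neighbor ('balloon') variable-kernel density estimate at x:
    the window width is the distance to the k-th nearest sample point."""
    d = np.sort(np.abs(data - x))
    h = d[k - 1]                                 # k-th nearest-neighbor distance
    u = (data - x) / h
    w = 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)  # Epanechnikov kernel
    return w.sum() / (len(data) * h)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 5000)
# at the mode of a standard normal the density is 1/sqrt(2*pi) ~ 0.399
est = knn_kde(data, 0.0, k=500)
print(abs(est - 0.3989) < 0.08)
```

The adaptive window widens automatically in the tails where samples are sparse, which is the behaviour whose efficiency the paper examines.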
Steerability of Hermite Kernel
Yang, Bo; Flusser, Jan; Suk, Tomáš
2013-01-01
Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf
Li, Xiaofan; Nie, Qing
2009-07-01
Many applications in materials involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axi-symmetry due to surface diffusions. In this method, the boundary integrals for isotropic elasticity in axi-symmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on explicit representation of the mean curvature, is used to reduce the stability constraint on time-step. To apply this method to a periodic (in axial direction) and axi-symmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in absence of elasticity the cylinder surface pinches in finite time at the axis of the symmetry and the universal cone angle of the pinching is found to be consistent with the previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite time, geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
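The composite-kernel construction rests on the fact that a nonnegative combination of positive semi-definite (PSD) kernel matrices is again PSD. A toy check with a random genotype-like matrix and two candidate kernels; the linear and quadratic polynomial choices are illustrative, not the paper's recommended candidates.

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Composite kernel: a weighted sum of candidate kernel matrices.
    With nonnegative weights the sum of PSD matrices is PSD, so it is
    itself a valid kernel for the kernel machine test."""
    return sum(w * K for w, K in zip(weights, kernels))

rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(20, 50)).astype(float)  # toy 0/1/2 genotype matrix
K1 = G @ G.T                    # linear kernel
K2 = (1.0 + G @ G.T) ** 2       # quadratic polynomial kernel
K = composite_kernel([K1, K2], [0.5, 0.5])
min_eig = np.linalg.eigvalsh(K).min()
print(min_eig > -1e-6)          # PSD up to floating-point error
```

Because the composite is a valid kernel for any fixed nonnegative weights, hedging across candidates never leaves the KM framework, which is what protects the type I error rate.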
Dynamic Stability Analysis Using High-Order Interpolation
Juarez-Toledo C.
2012-10-01
Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
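Newton's divided differences, one of the two interpolation tools named above, can be sketched compactly; the example checks that a 4-point interpolant reproduces a cubic exactly. This is the generic textbook algorithm, not the paper's power-system code.

```python
import numpy as np

def newton_divided_diff(x, y):
    """Compute Newton divided-difference coefficients in place."""
    c = np.array(y, dtype=float)
    for j in range(1, len(x)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton-form polynomial at t by nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x[k]) + c[k]
    return p

# a 4-point interpolant reproduces cubic data exactly, even between nodes
xs = np.array([0.0, 1.0, 2.0, 4.0])
ys = xs**3 - 2 * xs + 1
c = newton_divided_diff(xs, ys)
print(abs(newton_eval(c, xs, 3.0) - 22.0) < 1e-10)  # 3^3 - 6 + 1 = 22
```

The Newton form makes it cheap to add an interpolation node when refining the estimate of a critical condition, since earlier coefficients are reused unchanged.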
Electrochemical Hydrogen Storage in a Highly Ordered Mesoporous Carbon
Dan eLiu
2014-10-01
Full Text Available A highly ordered mesoporous carbon has been synthesized through a strongly acidic, aqueous cooperative assembly route. The structure and morphology of the carbon material were investigated using TEM, SEM and nitrogen adsorption-desorption isotherms. The carbon was shown to be mesostructured and to consist of graphitic micro-domains with enlarged interlayer spacing. AC impedance and electrochemical measurements reveal that the synthesized highly ordered mesoporous carbon exhibits enhanced electrochemical hydrogen insertion and improved capacitance and hydrogen storage stability. The mesostructure and enlarged interlayer distance within the highly ordered mesoporous carbon are suggested as possible causes for the enhancement in hydrogen storage. Both the hydrogen capacity of the carbon and mass diffusion within the matrix were improved.
Retrieval of high-order susceptibilities of nonlinear metamaterials
Wang Zhi-Yu; Qiu Jin-Peng; Chen Hua; Mo Jiong-Jiong; Yu Fa-Xin
2017-01-01
Active metamaterials embedded with nonlinear elements are able to exhibit strong nonlinearity in the microwave regime. However, existing S-parameter based parameter retrieval approaches developed for linear metamaterials do not apply in nonlinear cases. In this paper, a retrieval algorithm of high-order susceptibilities for nonlinear metamaterials is derived. Experimental demonstration shows that, by measuring the power level of each harmonic while sweeping the incident power, high-order susceptibilities of a thin-layer nonlinear metamaterial can be effectively retrieved. The proposed approach can be widely used in the research of active metamaterials. (paper)
Smolka, Gert
1994-01-01
Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...
7 CFR 981.7 - Edible kernel.
2010-01-01
§ 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]
7 CFR 981.408 - Inedible kernel.
2010-01-01
§ 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
7 CFR 981.8 - Inedible kernel.
2010-01-01
§ 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
Clustering via Kernel Decomposition
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
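A crude sketch of the pipeline: Gaussian affinity matrix, eigendecomposition, then cluster assignment. The paper obtains posterior class probabilities via nonnegative matrix factorization; the argmax over leading eigenvectors below is a simplified stand-in for that step.

```python
import numpy as np

def kernel_cluster(X, sigma, n_clusters):
    """Toy spectral clustering via decomposition of a Gaussian affinity
    matrix; assignment by dominant leading eigenvector is a crude
    stand-in for the NMF step of the paper."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))          # affinity matrix
    vals, vecs = np.linalg.eigh(K)
    top = np.abs(vecs[:, -n_clusters:])         # leading eigenvectors
    return top.argmax(axis=1)

rng = np.random.default_rng(3)
A = rng.normal(0.0, 0.1, size=(20, 2))          # tight blob at the origin
B = rng.normal(5.0, 0.1, size=(20, 2))          # tight blob far away
X = np.vstack([A, B])
labels = kernel_cluster(X, sigma=1.0, n_clusters=2)
# each blob should receive a single, distinct label
print(labels[0] != labels[20])
```

For well-separated blobs the affinity matrix is nearly block diagonal, so its leading eigenvectors localize on the blocks and the assignment is immediate; hyperparameters like sigma are exactly what the paper selects by cross-validation.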
Intra-cavity generation of high order LGpl modes
Ngcobo, S
2012-08-01
Full Text Available ...with the location of the Laguerre polynomial zeros. The diffractive optical element is used to shape the TEM00 Gaussian beam and force the laser to operate on a higher-order LGpl Laguerre-Gaussian mode or a high-order superposition of Laguerre-Gaussian modes...
A high order solver for the unbounded Poisson equation
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2012-01-01
This work improves upon Hockney and Eastwood's Fourier-based algorithm for the unbounded Poisson equation to formally achieve arbitrary high order of convergence without any additional computational cost. We assess the methodology on the kinematic relations between the velocity and vorticity fields....
Enhanced high-order harmonic generation from Argon-clusters
Tao, Yin; Hagmeijer, Rob; Bastiaens, Hubertus M.J.; Goh, S.J.; van der Slot, P.J.M.; Biedron, S.; Milton, S.; Boller, Klaus J.
2017-01-01
High-order harmonic generation (HHG) in clusters is of high promise because clusters appear to offer an increased optical nonlinearity. We experimentally investigate HHG from Argon clusters in a supersonic gas jet that can generate monomer-cluster mixtures with varying atomic number density and
Airfoil noise computation using high-order schemes
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2007-01-01
High-order finite difference schemes with at least 4th-order spatial accuracy are used to simulate aerodynamically generated noise. The aeroacoustic solver with 4th-order up to 8th-order accuracy is implemented into the in-house flow solver, EllipSys2D/3D. Dispersion-Relation-Preserving (DRP) fin...
A rigorous analysis of high-order electromagnetic invisibility cloaks
Weder, Ricardo
2008-01-01
There is currently a great deal of interest in the invisibility cloaks recently proposed by Pendry et al that are based on the transformation approach. They obtained their results using first-order transformations. In recent papers, Hendi et al and Cai et al considered invisibility cloaks with high-order transformations. In this paper, we study high-order electromagnetic invisibility cloaks in transformation media obtained by high-order transformations from general anisotropic media. We consider the case where there is a finite number of spherical cloaks located in different points in space. We prove that for any incident plane wave, at any frequency, the scattered wave is identically zero. We also consider the scattering of finite-energy wave packets. We prove that the scattering matrix is the identity, i.e., that for any incoming wave packet the outgoing wave packet is the same as the incoming one. This proves that the invisibility cloaks cannot be detected in any scattering experiment with electromagnetic waves in high-order transformation media, and in particular in the first-order transformation media of Pendry et al. We also prove that the high-order invisibility cloaks, as well as the first-order ones, cloak passive and active devices. The cloaked objects completely decouple from the exterior. Actually, the cloaking outside is independent of what is inside the cloaked objects. The electromagnetic waves inside the cloaked objects cannot leave the concealed regions and vice versa, the electromagnetic waves outside the cloaked objects cannot go inside the concealed regions. As we prove our results for media that are obtained by transformation from general anisotropic materials, we prove that it is possible to cloak objects inside general crystals
Dobrev, Veselin A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, Tzanio V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-09-20
The numerical approximation of the Euler equations of gas dynamics in a moving Lagrangian frame is at the heart of many multiphysics simulation algorithms. Here, we present a general framework for high-order Lagrangian discretization of these compressible shock hydrodynamics equations using curvilinear finite elements. This method is an extension of the approach outlined in [Dobrev et al., Internat. J. Numer. Methods Fluids, 65 (2010), pp. 1295--1310] and can be formulated for any finite dimensional approximation of the kinematic and thermodynamic fields, including generic finite elements on two- and three-dimensional meshes with triangular, quadrilateral, tetrahedral, or hexahedral zones. We discretize the kinematic variables of position and velocity using a continuous high-order basis function expansion of arbitrary polynomial degree which is obtained via a corresponding high-order parametric mapping from a standard reference element. This enables the use of curvilinear zone geometry, higher-order approximations for fields within a zone, and a pointwise definition of mass conservation which we refer to as strong mass conservation. Moreover, we discretize the internal energy using a piecewise discontinuous high-order basis function expansion which is also of arbitrary polynomial degree. This facilitates multimaterial hydrodynamics by treating material properties, such as equations of state and constitutive models, as piecewise discontinuous functions which vary within a zone. To satisfy the Rankine--Hugoniot jump conditions at a shock boundary and generate the appropriate entropy, we introduce a general tensor artificial viscosity which takes advantage of the high-order kinematic and thermodynamic information available in each zone. Finally, we apply a generic high-order time discretization process to the semidiscrete equations to develop the fully discrete numerical algorithm. Our method can be viewed as the high-order generalization of the so-called staggered
Global Polynomial Kernel Hazard Estimation
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
High-Order Modulation for Optical Fiber Transmission
Seimetz, Matthias
2009-01-01
Catering to the current interest in increasing the spectral efficiency of optical fiber networks through the deployment of high-order modulation formats, this monograph describes transmitters, receivers and the performance of optical systems with high-order phase and quadrature amplitude modulation. In the first part of the book, the author discusses various transmitter implementation options as well as several receiver concepts based on direct and coherent detection, including designs of new structures. Both optical and electrical parts are considered, allowing the assessment of practicability and complexity. In the second part, a detailed characterization of optical fiber transmission systems is presented for a wide range of modulation formats. It provides insight into the fundamental behavior of different formats with respect to relevant performance degradation effects and identifies the major trends in system performance.
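To make the term concrete (an illustrative sketch, not taken from the monograph): a square 16-QAM constellation, one of the high-order formats in question, carries log2(16) = 4 bits per symbol and is conventionally normalized to unit average symbol energy.

```python
import numpy as np

M = 16                                   # constellation size
bits_per_symbol = int(np.log2(M))        # 4 bits travel on every symbol

# square 16-QAM: independent 4-level amplitudes (±1, ±3) on the I and Q rails
levels = np.array([-3, -1, 1, 3])
constellation = np.array([complex(i, q) for i in levels for q in levels])

# normalize to unit average symbol energy, the usual convention
constellation = constellation / np.sqrt(np.mean(np.abs(constellation)**2))
```

Raising M increases spectral efficiency but shrinks the distance between neighboring points, which is exactly the performance trade-off the book characterizes.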
High-order harmonic generation in laser plasma plumes
Ganeev, Rashid A
2013-01-01
This book represents the first comprehensive treatment of high-order harmonic generation in laser-produced plumes, covering the principles, past and present experimental status and important applications. It shows how this method of frequency conversion of laser radiation towards the extreme ultraviolet range matured over the course of multiple studies and demonstrated new approaches in the generation of strong coherent short-wavelength radiation for various applications. Significant discoveries and pioneering contributions of researchers in this field carried out in various laser scientific centers worldwide are included in this first attempt to describe the important findings in this area of nonlinear spectroscopy. "High-Order Harmonic Generation in Laser Plasma Plumes" is a self-contained and unified review of the most recent achievements in the field, such as the application of clusters (fullerenes, nanoparticles, nanotubes) for efficient harmonic generation of ultrashort laser pulses in cluster-containin...
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
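A minimal sketch of the omnibus idea described above, under simplifying assumptions (a plain kernel association score y·Ky with permutation p-values in place of the paper's AFT machinery; the data, kernels, and bandwidth are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))                       # toy marker matrix
y = X[:, 0]**2 + rng.normal(scale=0.5, size=80)    # nonlinear outcome signal
y = y - y.mean()

def rbf_kernel(X, bw):
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(axis=-1)
    return np.exp(-d2 / (2*bw**2))

# candidate kernels: a linear one misses the quadratic effect, an RBF one can capture it
kernels = {"linear": X @ X.T, "rbf": rbf_kernel(X, 2.0)}

def score(y, K):
    return float(y @ K @ y)     # kernel association score, large under signal

def perm_pvalue(y, K, n_perm=200):
    obs = score(y, K)
    hits = sum(score(rng.permutation(y), K) >= obs for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)

pvals = {name: perm_pvalue(y, K) for name, K in kernels.items()}
omnibus = min(pvals.values())   # min-p combination across candidate kernels
```

Taking the minimum p-value across kernels (and calibrating it by resampling, as the paper does more carefully) protects against choosing the wrong kernel in advance.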
Analysis and Design of High-Order Parallel Resonant Converters
Batarseh, Issa Eid
1990-01-01
In this thesis, a special state variable transformation technique has been derived for the analysis of high order dc-to-dc resonant converters. Converters comprised of high order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converters (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady state response for third and fourth order PRC's when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRC are presented from which various converter design parameters are obtained. Various design curves for component value selections and device ratings are given. This analysis of high order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second order PRC. A complete design procedure, along with design examples for 2nd, 3rd and 4th order converters, is presented. Practical power supply units, normally used for computer applications, were built and tested by using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.
Discrete nonlinear Schrodinger equations with arbitrarily high-order nonlinearities
Khare, A.; Rasmussen, Kim Ø; Salerno, M.
2006-01-01
A class of discrete nonlinear Schrodinger equations with arbitrarily high-order nonlinearities is introduced. These equations are derived from the same Hamiltonian using different Poisson brackets and include as particular cases the saturable discrete nonlinear Schrodinger equation and the Ablowitz-Ladik equation. As a common property, these equations possess three kinds of exact analytical stationary solutions for which the Peierls-Nabarro barrier is zero. Several properties of these solutions, including stability, discrete breathers, and moving solutions, are investigated.
High order modes in Project-X linac
Sukhanov, A., E-mail: ais@fnal.gov; Lunin, A.; Yakovlev, V.; Awida, M.; Champion, M.; Ginsburg, C.; Gonin, I.; Grimm, C.; Khabiboulline, T.; Nicol, T.; Orlov, Yu.; Saini, A.; Sergatskov, D.; Solyak, N.; Vostrikov, A.
2014-01-11
Project-X, a multi-MW proton source, is now under development at Fermilab. In this paper we present a study of high order modes (HOMs) excited in the continuous-wave (CW) superconducting linac of Project-X. We investigate the cryogenic losses caused by HOMs and the influence of HOMs on beam dynamics. We find that these effects are small. We conclude that HOM couplers/dampers are not needed in the Project-X SC RF cavities.
Machine Learning Control For Highly Reconfigurable High-Order Systems
2015-01-02
AFRL-OSR-VA-TR-2015-0012. Machine Learning Control for Highly Reconfigurable High-Order Systems. John Valasek, Texas Engineering Experiment Station. Grant FA9550-11-1-0302 (...DIMENSIONAL RECONFIGURABLE SYSTEMS); period of performance: 1 July 2011 – 29 September 2014.
International Conference on Spectral and High-Order Methods
Dumont, Ney; Hesthaven, Jan
2017-01-01
This book features a selection of high-quality papers chosen from the best presentations at the International Conference on Spectral and High-Order Methods (2016), offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.
Conditional High-Order Boltzmann Machines for Supervised Relation Learning.
Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu
2017-09-01
Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machines and their variants have shown great potential in learning various types of data relations in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relations, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification by using a set of latent variables to block the pathway from pairwise input samples to output relation labels, and 2) gated CHBM, which untangles factors of variation in data relations by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. We then develop efficient supervised learning algorithms, first pretraining the models using the joint likelihood to provide good parameter initialization, and then fine-tuning them using the conditional likelihood to enhance the discriminative ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve performance.
High order spectral difference lattice Boltzmann method for incompressible hydrodynamics
Li, Weidong
2017-09-01
This work presents a lattice Boltzmann equation (LBE) based high order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have a good tensor-product property, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first-order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin (HDG) or local discontinuous Galerkin (LDG) methods, are needed to discretize a diffusion term, which simplifies the algorithm and implementation of the high order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions of these cases and convergence studies are also given. The designed accuracy of the proposed SD-LBM is clearly verified.
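The LBE building block of the abstract can be illustrated with a standard low-order D2Q9 BGK solver (not the paper's spectral difference discretization); a decaying shear wave recovers the lattice viscosity ν = (τ − 1/2)/3. Grid size, relaxation time, and wave amplitude are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and BGK relaxation time tau
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8
nu = (tau - 0.5) / 3.0                     # lattice kinematic viscosity = 0.1

N, U0, steps = 32, 0.01, 200
y = np.arange(N)
ux = U0 * np.sin(2*np.pi*y/N)[None, :] * np.ones((N, N))   # shear wave u_x(y)
uy = np.zeros((N, N))
rho = np.ones((N, N))

def feq(rho, ux, uy):
    """Second-order Maxwellian equilibrium for D2Q9."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

f = feq(rho, ux, uy)
for _ in range(steps):
    f += -(f - feq(rho, ux, uy)) / tau     # BGK collision
    for i in range(9):                     # streaming on the periodic lattice
        f[i] = np.roll(f[i], tuple(c[i]), axis=(0, 1))
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

# the wave amplitude decays as exp(-nu * k**2 * t)
k = 2*np.pi / N
nu_measured = -np.log(np.abs(ux).max() / U0) / (k**2 * steps)
```

The SD-LBM of the paper replaces this first-order stream-and-collide update with a high-order polynomial discretization of the same advection operator, which is what makes ≥3rd-order accuracy possible.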
Hybrid RANS-LES using high order numerical methods
Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael
2017-11-01
Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separated flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Retrieval of interatomic separations of molecules from laser-induced high-order harmonic spectra
Le, Van-Hoang; Nguyen, Ngoc-Ty; Jin, C; Le, Anh-Thu; Lin, C D
2008-01-01
We illustrate an iterative method for retrieving the internuclear separations of N₂, O₂ and CO₂ molecules using the high-order harmonics generated from these molecules by intense infrared laser pulses. We show that accurate results can be retrieved with a small set of harmonics and with one or a few alignment angles of the molecules. For linear molecules the internuclear separations can also be retrieved from harmonics generated using isotropically distributed molecules. By extracting the transition dipole moment from the high-order harmonic spectra, we further demonstrate that it is preferable to retrieve the interatomic separation iteratively by fitting the extracted dipole moment. Our results show that time-resolved chemical imaging of molecules using infrared laser pulses with femtosecond temporal resolution is possible.
Retrieval of interatomic separations of molecules from laser-induced high-order harmonic spectra
Le, Van-Hoang; Nguyen, Ngoc-Ty [Department of Physics, University of Pedagogy, 280 An Duong Vuong, Ward 5, Ho Chi Minh City (Viet Nam); Jin, C; Le, Anh-Thu; Lin, C D [J. R. Macdonald Laboratory, Department of Physics, Kansas State University, Manhattan, KS 66506 (United States)
2008-04-28
We illustrate an iterative method for retrieving the internuclear separations of N₂, O₂ and CO₂ molecules using the high-order harmonics generated from these molecules by intense infrared laser pulses. We show that accurate results can be retrieved with a small set of harmonics and with one or a few alignment angles of the molecules. For linear molecules the internuclear separations can also be retrieved from harmonics generated using isotropically distributed molecules. By extracting the transition dipole moment from the high-order harmonic spectra, we further demonstrate that it is preferable to retrieve the interatomic separation iteratively by fitting the extracted dipole moment. Our results show that time-resolved chemical imaging of molecules using infrared laser pulses with femtosecond temporal resolution is possible.
Trask, Nathaniel; Maxey, Martin; Hu, Xiaozhe
2018-02-01
A stable numerical solution of the steady Stokes problem requires compatibility between the choice of velocity and pressure approximation that has traditionally proven problematic for meshless methods. In this work, we present a discretization that couples a staggered scheme for pressure approximation with a divergence-free velocity reconstruction to obtain an adaptive, high-order, finite difference-like discretization that can be efficiently solved with conventional algebraic multigrid techniques. We use analytic benchmarks to demonstrate equal-order convergence for both velocity and pressure when solving problems with curvilinear geometries. In order to study problems in dense suspensions, we couple the solution for the flow to the equations of motion for freely suspended particles in an implicit monolithic scheme. The combination of high-order accuracy with fully-implicit schemes allows the accurate resolution of stiff lubrication forces directly from the solution of the Stokes problem without the need to introduce sub-grid lubrication models.
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
On high-order perturbative calculations at finite density
Ghisoiu, Ioan; Kurkela, Aleksi; Romatschke, Paul; Säppi, Matias; Vuorinen, Aleksi
2017-01-01
We discuss the prospects of performing high-order perturbative calculations in systems characterized by a vanishing temperature but finite density. In particular, we show that the determination of generic Feynman integrals containing fermionic chemical potentials can be reduced to the evaluation of three-dimensional phase space integrals over vacuum on-shell amplitudes. Applications of these rules will be discussed in the context of the thermodynamics of cold and dense QCD, where it is argued that they facilitate an extension of the Equation of State of cold quark matter to higher perturbative orders.
High-order harmonic generation in a capillary discharge
Rocca, Jorge J.; Kapteyn, Henry C.; Murnane, Margaret M.; Gaudiosi, David; Grisham, Michael E.; Popmintchev, Tenio V.; Reagan, Brendan A.
2010-06-01
A pre-ionized medium created by a capillary discharge results in more efficient use of laser energy in high-order harmonic generation (HHG) from ions. It extends the cutoff photon energy, and reduces the distortion of the laser pulse as it propagates down the waveguide. The observed enhancements result from a combination of reduced ionization energy loss and reduced ionization-induced defocusing of the driving laser as well as waveguiding of the driving laser pulse. The discharge plasma also provides a means to spectrally tune the harmonics by tailoring the initial level of ionization of the medium.
On high-order perturbative calculations at finite density
Ghişoiu, Ioan, E-mail: ioan.ghisoiu@helsinki.fi [Helsinki Institute of Physics and Department of Physics, University of Helsinki (Finland); Gorda, Tyler, E-mail: tyler.gorda@helsinki.fi [Helsinki Institute of Physics and Department of Physics, University of Helsinki (Finland); Department of Physics, University of Colorado Boulder, Boulder, CO (United States); Kurkela, Aleksi, E-mail: aleksi.kurkela@cern.ch [Theoretical Physics Department, CERN, Geneva (Switzerland); Faculty of Science and Technology, University of Stavanger, Stavanger (Norway); Romatschke, Paul, E-mail: paul.romatschke@colorado.edu [Department of Physics, University of Colorado Boulder, Boulder, CO (United States); Center for Theory of Quantum Matter, University of Colorado, Boulder, CO (United States); Säppi, Matias, E-mail: matias.sappi@helsinki.fi [Helsinki Institute of Physics and Department of Physics, University of Helsinki (Finland); Vuorinen, Aleksi, E-mail: aleksi.vuorinen@helsinki.fi [Helsinki Institute of Physics and Department of Physics, University of Helsinki (Finland)
2017-02-15
We discuss the prospects of performing high-order perturbative calculations in systems characterized by a vanishing temperature but finite density. In particular, we show that the determination of generic Feynman integrals containing fermionic chemical potentials can be reduced to the evaluation of three-dimensional phase space integrals over vacuum on-shell amplitudes — a result reminiscent of a previously proposed “naive real-time formalism” for vacuum diagrams. Applications of these rules are discussed in the context of the thermodynamics of cold and dense QCD, where it is argued that they facilitate an extension of the Equation of State of cold quark matter to higher perturbative orders.
Theoretical description of high-order harmonic generation in solids
Kemper, A F; Moritz, B; Devereaux, T P; Freericks, J K
2013-01-01
We consider several aspects of high-order harmonic generation in solids: the effects of elastic and inelastic scattering, varying pulse characteristics and inclusion of material-specific parameters through a realistic band structure. We reproduce many observed characteristics of high harmonic generation experiments in solids including the formation of only odd harmonics in inversion-symmetric materials, and the nonlinear formation of high harmonics with increasing field. We find that the harmonic spectra are fairly robust against elastic and inelastic scattering. Furthermore, we find that the pulse characteristics can play an important role in determining the harmonic spectra. (paper)
High-order harmonic generation with short-pulse lasers
Schafer, K.J.; Krause, J.L.; Kulander, K.C.
1992-12-01
Recent progress in the understanding of high-order harmonic conversion from atoms and ions exposed to high-intensity, short-pulse optical lasers is reviewed. We find that ions can produce harmonics comparable in strength to those obtained from neutral atoms, and that the emission extends to much higher order. Simple scaling laws for the strength of the harmonic emission and the maximum observable harmonic are suggested. These results imply that the photoemission observed in recent experiments in helium and neon contains contributions from ions as well as neutrals.
Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations
Reza Mokhtari
2012-01-01
On the basis of reproducing kernel Hilbert space theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs) is presented. The analytical solution is shown in series form in a reproducing kernel space, and the approximate solution u_n is constructed by truncating the series to n terms. The convergence of u_n to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered a simple and accurate method for solving such differential-difference problems.
Employment of kernel methods on wind turbine power performance assessment
Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.
2015-01-01
A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, kernel methods. The evaluation is based on the trending of a feature extracted from the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate...
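The record does not define the similarity index precisely, so the following is a hypothetical sketch of the general idea only: the mean Gaussian-kernel value of a new operating sample against a power bin's reference set, which drops when the turbine under-produces. The feature scales, bandwidths, and data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# reference (healthy) samples inside one power bin: [wind speed m/s, power kW]
reference = np.column_stack([rng.normal(8.0, 0.3, 200),
                             rng.normal(1500.0, 40.0, 200)])
bandwidth = np.array([0.3, 40.0])   # per-feature kernel width (assumed)

def similarity_index(x, ref, bw):
    """Mean Gaussian-kernel value of sample x against the bin's reference set."""
    d2 = (((ref - x) / bw)**2).sum(axis=1)
    return float(np.exp(-0.5 * d2).mean())

healthy = similarity_index(np.array([8.0, 1500.0]), reference, bandwidth)
degraded = similarity_index(np.array([8.0, 1300.0]), reference, bandwidth)  # under-production
```

Trending such an index per power bin, as the abstract describes, turns a power-curve discrepancy into a scalar signal that can be monitored over time.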
Multiscale high-order/low-order (HOLO) algorithms and applications
Chacón, L.; Chen, G.; Knoll, D.A.; Newman, C.; Park, H.; Taitano, W.; Willert, J.A.; Womeldorff, G.
2017-01-01
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
Benchmarking with high-order nodal diffusion methods
Tomasevic, D.; Larsen, E.W.
1993-01-01
Significant progress in the solution of multidimensional neutron diffusion problems was made in the late 1970s with the introduction of nodal methods. Modern nodal reactor analysis codes provide significant improvements in both accuracy and computing speed over earlier codes based on fine-mesh finite difference methods. In the past, the performance of advanced nodal methods was determined by comparisons with fine-mesh finite difference codes. More recently, the excellent spatial convergence of nodal methods has permitted their use in establishing reference solutions for some important benchmark problems. The recent development of the self-consistent high-order nodal diffusion method and its subsequent variational formulation has permitted the calculation of reference solutions with a mesh size of one node per assembly. In this paper, we compare results for four selected benchmark problems to those obtained by high-order response matrix methods and by two well-known state-of-the-art nodal methods (the "analytical" and "nodal expansion" methods).
High-order perturbations of a spherical collapsing star
Brizuela, David; Martin-Garcia, Jose M.; Sperhake, Ulrich; Kokkotas, Kostas D.
2010-01-01
A formalism to deal with high-order perturbations of a general spherical background was developed in earlier work [D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 74, 044039 (2006); D. Brizuela, J. M. Martin-Garcia, and G. A. Mena Marugan, Phys. Rev. D 76, 024004 (2007)]. In this paper, we apply it to the particular case of a perfect fluid background. We express the perturbations of the energy-momentum tensor at any order in terms of the perturbed fluid's pressure, density, and velocity. In general, these expressions are not linear and have sources depending on lower-order perturbations. For the second-order case we make the explicit decomposition of these sources in tensor spherical harmonics. Then, a general procedure is given to evolve the perturbative equations of motion of the perfect fluid for any value of the harmonic label. Finally, with the problem of a spherical collapsing star in mind, we discuss the high-order perturbative matching conditions across a timelike surface, in particular the surface separating the perfect fluid interior from the exterior vacuum.
A High-Order CFS Algorithm for Clustering Big Data
Fanyu Bu
2016-01-01
With the development of the Internet of Everything, such as the Internet of Things, the Internet of People, and the Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective at clustering the heterogeneous data that is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify the proposed algorithm on different datasets by comparison with two other clustering schemes, HOPCM and CFS. Results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.
Hybrid overlay metrology for high order correction by using CDSEM
Leray, Philippe; Halder, Sandip; Lorusso, Gian; Baudemprez, Bart; Inoue, Osamu; Okagawa, Yutaka
2016-03-01
Overlay control has become one of the most critical issues in semiconductor manufacturing. Advanced lithographic scanners use high-order corrections or correction per exposure to reduce the residual overlay. Traditional feedback of overlay measurements on ADI wafers is not sufficient, because overlay error also depends on other processes (etching, film stress, etc.); high-accuracy overlay measurement on AEI wafers is needed. WIS (wafer-induced shift) is the main issue for optical overlay, both IBO (image-based overlay) and DBO (diffraction-based overlay). We design dedicated SEM overlay targets for a dual damascene process at N10 using i-ArF multi-patterning; locally, the pattern is the same as the device pattern. Optical overlay tools use segmented patterns to reduce WIS, but segmentation has limits, especially for via patterns, in keeping sensitivity and accuracy. We evaluate the difference between the via pattern and relaxed-pitch gratings, which are similar to optical overlay targets, at AEI. CDSEM can estimate the asymmetry of a target from the image of the pattern edge. We compare the full map of SEM overlay to the full map of optical overlay for high-order correction (correctables and residual fingerprints).
2010-01-01
... 7 Agriculture, § 981.9 (Kernel weight). Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Regulating Handling, Definitions: Kernel weight means the weight of kernels, including...
2010-01-01
... 7 Agriculture, § 51.2295 (Half kernel). Standards for Shelled English Walnuts (Juglans regia), Definitions: Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
A kernel version of spatial factor analysis
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general; Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
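As a minimal illustration of the kernel trick described above (a generic sketch, not the authors' implementation): the Gram matrix of a Gaussian (RBF) kernel is double-centered, which corresponds to centering the data in feature space, and its leading eigenpair, here found by power iteration, yields the first nonlinear principal component scores. The kernel choice and `gamma` are illustrative assumptions.

```python
import math, random

def rbf(x, y, gamma=0.5):
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_pca_scores(X, kernel=rbf, iters=500):
    n = len(X)
    K = [[kernel(X[i], X[j]) for j in range(n)] for i in range(n)]
    # double-center the Gram matrix: Kc = K - 1K - K1 + 1K1
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]
    # power iteration for the leading eigenpair of the centered Gram matrix
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    lam = sum(v[i] * sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n))
    # first nonlinear principal component score of each sample
    return [math.sqrt(max(lam, 0.0)) * v[i] for i in range(n)], lam
```

Because the centered Gram matrix annihilates the constant vector, the scores of any component with nonzero eigenvalue sum to zero, a quick sanity check on the implementation.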
kernel oil by lipolytic organisms
USER
2010-08-02
Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernels stored at 70, 80 and 90% ... method of the American Oil Chemists' Society (AOCS, 1978) using glacial ... changes occur and volatile products are formed that are ...
Multivariate and semiparametric kernel regression
Härdle, Wolfgang; Müller, Marlene
1997-01-01
The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
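The Nadaraya-Watson estimator mentioned above (the degree-zero case of local polynomial fitting) can be sketched in a few lines; the Gaussian kernel and the bandwidth default are illustrative choices, not taken from the paper.

```python
import math

def nadaraya_watson(x_train, y_train, x, h=0.3):
    """Kernel regression estimate m(x): a locally weighted average of the
    responses, with Gaussian kernel weights of bandwidth h."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in x_train]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, y_train)) / s
```

For constant responses the estimator reproduces the constant exactly; for smooth data it returns a locally smoothed value near the true regression function.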
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.
High-Order Sparse Linear Predictors for Audio Processing
Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll
2010-01-01
Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability of modeling the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to efficiently model the different...
Wilson loops in very high order lattice perturbation theory
Ilgenfritz, E.M.; Nakamura, Y.; Perlt, H.; Schiller, A.; Rakow, P.E.L.; Schierholz, G.; Regensburg Univ.
2009-10-01
We calculate Wilson loops of various sizes up to loop order n=20 for lattice sizes of L⁴ (L=4,6,8,12) using the technique of Numerical Stochastic Perturbation Theory in quenched QCD. This allows us to investigate the behaviour of the perturbative series at high orders. We discuss three models to estimate the perturbative series: a renormalon-inspired fit, a heuristic fit based on an assumed power-law singularity, and boosted perturbation theory. We have found differences in the behaviour of the perturbative series for smaller and larger Wilson loops at moderate n. A factorial growth of the coefficients could not be confirmed up to n=20. From Monte Carlo measured plaquette data and our perturbative result we estimate a value of the gluon condensate ⟨(α/π)GG⟩. (orig.)
High-order hydrodynamic algorithms for exascale computing
Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-02-05
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
High-Order Wave Propagation Algorithms for Hyperbolic Systems
Ketcheson, David I.
2013-01-22
We present a finite volume method that is applicable to hyperbolic PDEs including spatially varying and semilinear nonconservative systems. The spatial discretization, like that of the well-known Clawpack software, is based on solving Riemann problems and calculating fluctuations (not fluxes). The implementation employs weighted essentially nonoscillatory reconstruction in space and strong stability preserving Runge--Kutta integration in time. The method can be extended to arbitrarily high order of accuracy and allows a well-balanced implementation for capturing solutions of balance laws near steady state. This well-balancing is achieved through the $f$-wave Riemann solver and a novel wave-slope WENO reconstruction procedure. The wide applicability and advantageous properties of the method are demonstrated through numerical examples, including problems in nonconservative form, problems with spatially varying fluxes, and problems involving near-equilibrium solutions of balance laws.
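The strong stability preserving Runge-Kutta time integration used above can be sketched as the classic three-stage Shu-Osher scheme (a generic sketch with the standard SSPRK(3,3) coefficients; this is not code from the paper or from Clawpack):

```python
import math

def ssprk3_step(f, u, t, dt):
    """One step of the three-stage, third-order strong stability preserving
    Runge-Kutta (Shu-Osher) scheme for u' = f(t, u)."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

def integrate(f, u0, t0, t1, n):
    # march from t0 to t1 in n SSPRK3 steps
    dt, u, t = (t1 - t0) / n, u0, t0
    for _ in range(n):
        u = ssprk3_step(f, u, t, dt)
        t += dt
    return u
```

Halving the step size should shrink the global error by roughly 2³ = 8, which confirms the third-order accuracy that the high-order spatial reconstruction is paired with.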
Very high order lattice perturbation theory for Wilson loops
Horsley, R.
2010-10-01
We calculate perturbative Wilson loops of various sizes up to loop order n=20 at different lattice sizes for pure plaquette and tree-level improved Symanzik gauge theories using the technique of Numerical Stochastic Perturbation Theory. This allows us to investigate the behavior of the perturbative series at high orders. We observe differences in the behavior of the perturbative coefficients as a function of the loop order. Up to n=20 we do not see evidence for the often-assumed factorial growth of the coefficients. Based on the observed behavior we sum this series in a model with hypergeometric functions. Alternatively, we estimate the series in boosted perturbation theory. Subtracting the estimated perturbative series for the average plaquette from the non-perturbative Monte Carlo result, we estimate the gluon condensate. (orig.)
High-order multiphoton ionization photoelectron spectroscopy of NO
Carman, H.S. Jr.; Compton, R.N.
1987-01-01
Photoelectron energy and angular distributions of NO following three different high-order multiphoton ionization (MPI) schemes have been measured. The 3+3 resonantly enhanced multiphoton ionization (REMPI) via the A ²Σ⁺ (v=0) level yielded a distribution of electron energies corresponding to all accessible vibrational levels (v⁺ = 0-6) of the nascent ion. Angular distributions of electrons corresponding to v⁺ = 0 and v⁺ = 3 were significantly different. The 3+2 REMPI via the A ²Σ⁺ (v=1) level produced only one low-energy electron peak (v⁺ = 1). Nonresonant MPI at 532 nm yielded a distribution of electron energies corresponding to both four- and five-photon ionization. Prominent peaks in the five-photon photoelectron spectrum (PES) suggest contributions from near-resonant states at the three-photon level. 4 refs., 3 figs
High-order quantum algorithm for solving linear differential equations
Berry, Dominic W
2014-01-01
Linear differential equations are ubiquitous in science and engineering. Quantum computers can simulate quantum systems, which are described by a restricted type of linear differential equations. Here we extend quantum simulation algorithms to general inhomogeneous sparse linear differential equations, which describe many classical physical systems. We examine the use of high-order methods (where the error over a time step is a high power of the size of the time step) to improve the efficiency. These provide scaling close to Δt² in the evolution time Δt. As with other algorithms of this type, the solution is encoded in amplitudes of the quantum state, and it is possible to extract global features of the solution. (paper)
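The payoff of high-order methods, error per step scaling as a high power of the step size, can be illustrated classically with a truncated Taylor propagator for the scalar equation x' = a x (an illustrative sketch only; this is not the quantum algorithm itself):

```python
import math

def taylor_step(a, x, dt, order):
    """Advance x' = a*x by one step using the order-k truncated
    Taylor series of exp(a*dt)."""
    term, total = 1.0, 1.0
    for k in range(1, order + 1):
        term *= a * dt / k   # builds (a*dt)**k / k!
        total += term
    return total * x

def global_error(a, order, n_steps, T=1.0):
    # integrate to time T and compare with the exact solution exp(a*T)
    dt, x = T / n_steps, 1.0
    for _ in range(n_steps):
        x = taylor_step(a, x, dt, order)
    return abs(x - math.exp(a * T))
```

For an order-k truncation the one-step error is O(Δt^{k+1}) and the global error O(Δt^k), so halving the step shrinks the order-3 global error by about 2³, and raising the order sharply reduces the error at fixed step size.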
Field emission from the surface of highly ordered pyrolytic graphite
Knápek, Alexandr, E-mail: knapek@isibrno.cz [Institute of Scientific Instruments of the ASCR, v.v.i., Královopolská 147, Brno (Czech Republic); Sobola, Dinara; Tománek, Pavel [Department of Physics, FEEC, Brno University of Technology, Technická 8, Brno (Czech Republic); Pokorná, Zuzana; Urbánek, Michal [Institute of Scientific Instruments of the ASCR, v.v.i., Královopolská 147, Brno (Czech Republic)
2017-02-15
Highlights: • HOPG shreds were created and analyzed under UHV conditions. • Current-voltage measurements were performed to confirm electron tunneling, based on the Fowler-Nordheim theory. • The surface was characterized by other surface evaluation methods, in particular SNOM, SEM and AFM. - Abstract: This paper deals with the electrical characterization of a highly ordered pyrolytic graphite (HOPG) surface based on field emission of electrons. The effect of field emission occurs only at a disrupted surface, i.e. a surface containing ripped and warped shreds of the uppermost layers of graphite. These deformations provide the field gradients required for measuring the tunneling current caused by field electron emission. Results of the field emission measurements are correlated with other surface characterization methods such as scanning near-field optical microscopy (SNOM) and atomic force microscopy (AFM).
Superconducting linac beam dynamics with high-order maps for RF resonators
Geraci, A. A.; Pardo, R. C. (DOI: 10.1016/j.nima.2003.11.177)
2004-01-01
The arbitrary-order map beam optics code COSY Infinity has recently been adapted to calculate accurate high-order ion-optical maps for electrostatic and radio-frequency accelerating structures. The beam dynamics of the superconducting low-velocity positive-ion injector linac for the ATLAS accelerator at Argonne National Lab is used to demonstrate some advantages of the new simulation capability. The injector linac involves four different types of superconducting accelerating structures and has a total of 18 resonators. The detailed geometry for each of the accelerating cavities is included, allowing an accurate representation of the on- and off-axis electric fields. The fields are obtained within the code from a Poisson-solver for cylindrically symmetric electrodes of arbitrary geometry. The transverse focusing is done with superconducting solenoids. A detailed comparison of the transverse and longitudinal phase space is made with the conventional ray-tracing code LINRAY. The two codes are evaluated for ease ...
Lattice Boltzmann model for high-order nonlinear partial differential equations
Chai, Zhenhua; He, Nanzhong; Guo, Zhaoli; Shi, Baochang
2018-01-01
In this paper, a general lattice Boltzmann (LB) model is proposed for the high-order nonlinear partial differential equation of the form ∂_t ϕ + ∑_{k=1}^{m} α_k ∂_x^k Π_k(ϕ) = 0 (1 ≤ k ≤ m ≤ 6), where the α_k are constant coefficients and the Π_k(ϕ) are known differential functions of ϕ. As special cases of the high-order nonlinear partial differential equation, the classical (m)KdV equation, KdV-Burgers equation, K(n,n)-Burgers equation, Kuramoto-Sivashinsky equation, and Kawahara equation can be solved by the present LB model. Compared to the available LB models, the most distinct characteristic of the present model is the introduction of suitable auxiliary moments such that the correct moments of the equilibrium distribution function can be achieved. In addition, we also conducted a detailed Chapman-Enskog analysis, and found that the high-order nonlinear partial differential equation can be correctly recovered from the proposed LB model. Finally, a large number of simulations are performed, and it is found that the numerical results agree with the analytical solutions, and usually the present model is also more accurate than the existing LB models [H. Lai and C. Ma, Sci. China Ser. G 52, 1053 (2009), 10.1007/s11433-009-0149-3; H. Lai and C. Ma, Phys. A (Amsterdam) 388, 1405 (2009), 10.1016/j.physa.2009.01.005] for high-order nonlinear partial differential equations.
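To convey the flavor of such LB schemes, a minimal generic sketch (not the model proposed in the paper, which targets the high-order equations above): a D1Q3 BGK lattice Boltzmann model with equilibrium f_i^eq = w_i ϕ recovers the diffusion equation ∂_t ϕ = D ∂_x² ϕ with D = (τ − 1/2)/3 in lattice units.

```python
def lbm_diffusion(phi0, tau=1.0, steps=200):
    """D1Q3 BGK lattice Boltzmann scheme for the 1D diffusion equation
    (periodic boundaries, lattice units, D = (tau - 0.5) / 3)."""
    w = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]   # weights for velocities {0, +1, -1}
    n = len(phi0)
    f = [[w[i] * p for p in phi0] for i in range(3)]   # start at equilibrium
    for _ in range(steps):
        phi = [f[0][j] + f[1][j] + f[2][j] for j in range(n)]
        # BGK collision: relax toward the equilibrium feq_i = w_i * phi
        for i in range(3):
            for j in range(n):
                f[i][j] += (w[i] * phi[j] - f[i][j]) / tau
        # streaming step with periodic wrap-around
        f[1] = [f[1][(j - 1) % n] for j in range(n)]
        f[2] = [f[2][(j + 1) % n] for j in range(n)]
    return [f[0][j] + f[1][j] + f[2][j] for j in range(n)]
```

With τ = 1 the variance of an initial point pulse grows by exactly 2D per step, which makes the sketch easy to verify against the analytic diffusion behavior.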
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
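The idea of a mixed kernel is simply a weighted combination of two valid kernels, which is again positive semidefinite. A hypothetical sketch (the weight, polynomial degree and γ below are illustrative, not the paper's settings, and a plain rather than orthogonal polynomial kernel is used for brevity):

```python
import math, random

def mixed_kernel(x, y, w=0.5, degree=2, gamma=1.0):
    """Convex combination of a polynomial kernel and a Gaussian RBF kernel.
    A nonnegative-weighted sum of positive semidefinite kernels is itself
    positive semidefinite, so the mix is a valid kernel."""
    poly = (1.0 + sum(a * b for a, b in zip(x, y))) ** degree
    rbf = math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
    return w * poly + (1.0 - w) * rbf
```

A quick check is that any Gram matrix built from it is symmetric with nonnegative quadratic forms, the property SVR training relies on.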
Influence Function and Robust Variant of Kernel Canonical Correlation Analysis
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2017-01-01
Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...
Kernel versions of some orthogonal transformations
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
Model Selection in Kernel Ridge Regression
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
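A bare-bones kernel ridge regression with the Gaussian kernel can be sketched as follows (an illustrative sketch; the bandwidth σ and ridge parameter λ are arbitrary choices here, whereas in practice they would come from cross-validation guidelines of the kind the paper discusses):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, lam=1e-3, sigma=1.0):
    """Solve (K + lam*I) alpha = y; predict with f(x) = sum_i alpha_i k(x, x_i)."""
    n = len(xs)
    K = [[gaussian_kernel(xs[i], xs[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * gaussian_kernel(x, xi, sigma) for a, xi in zip(alpha, xs))
```

The ridge term λI keeps the Gram system well conditioned; larger λ gives a smoother, more shrunken fit, which is exactly the smoothness/signal-to-noise trade-off the tuning guidelines address.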
High-order adaptive secondary mirrors: where are we?
Salinari, Piero; Sandler, David G.
1998-09-01
We discuss the current developments and the prospective performance of adaptive secondary mirrors for high-order adaptive correction on large ground-based telescopes. The development of the basic techniques, which involved a large collaborative effort of public research institutes and private companies, is now essentially complete. The next crucial step will be the construction of an adaptive secondary mirror for the 6.5 m MMT. Problems such as the fabrication of very thin mirrors, the low-cost implementation of fast position sensors, of efficient and compact electromagnetic actuators, of the control and communication electronics, of the actuator control system, of the thermal control and of the mechanical layout can be considered solved, in some cases with more than one viable solution. To verify performance at the system level, two complete prototypes have been built and tested, one at ThermoTrex and the other at Arcetri. The two prototypes adopt the same basic approach concerning actuators, sensor and support of the thin mirror, but differ in a number of aspects such as the material of the rigid back plate used as reference for the thin mirror, the number and surface density of the actuators, the solution adopted for the removal of heat, and the design of the electronics. We discuss how the results obtained by the two prototypes and by numerical simulations will guide the design of full-size adaptive secondary units.
Recursive regularization step for high-order lattice Boltzmann methods
Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre
2017-09-01
A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10⁴ to 10⁶, and where a thorough analysis of the case at Re = 3×10⁴ is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
Application of high-order uncertainty for severe accident management
Yu, Donghan; Ha, Jaejoo
1998-01-01
The use of a probability distribution to represent uncertainty about point-valued probabilities has been a controversial subject. Probability theorists have argued that it is inherently meaningless to be uncertain about a probability, since this appears to violate the subjectivists' assumption that individuals can develop unique and precise probability judgments. However, many others have found the concept of uncertainty about a probability to be both intuitively appealing and potentially useful. In particular, high-order uncertainty, i.e., uncertainty about the probability, can be relevant to decision-making when expert judgment is needed under very uncertain data and imprecise knowledge, and where the phenomena and events are frequently complicated and ill-defined. This paper presents two approaches for evaluating the uncertainties inherent in accident management strategies: a fuzzy probability and an interval-valued subjective probability. The analysis first considers accident management as a decision problem (i.e., 'applying a strategy' vs. 'doing nothing') and uses an influence diagram. It then applies the two approaches above to evaluate imprecise node probabilities in the influence diagram. For the propagation of subjective probabilities, the analysis uses Monte Carlo simulation; in the case of fuzzy probabilities, fuzzy logic is applied to propagate them. We believe that these approaches can help us understand the uncertainties associated with severe accident management strategies, since they offer not only information similar to the classical approach using point-estimate values but also additional information regarding the impact of imprecise input data.
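The interval-valued approach can be sketched with a toy event tree (entirely hypothetical: the events "A" and "B", their intervals, and the uniform sampling model are illustrative assumptions, not from the paper): point probabilities are replaced by intervals, samples are drawn from them, and the spread of the output reflects the high-order uncertainty.

```python
import random

def interval_risk(p_intervals, trials=20000, seed=1):
    """Monte Carlo propagation of interval-valued probabilities through a
    two-branch event tree: failure occurs if events A and B both happen.
    Each probability is drawn uniformly from its interval [lo, hi] (a simple
    second-order-uncertainty model); returns (min, max, mean) of the
    resulting failure probability p_A * p_B across the samples."""
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        pa = rng.uniform(*p_intervals["A"])
        pb = rng.uniform(*p_intervals["B"])
        vals.append(pa * pb)
    return min(vals), max(vals), sum(vals) / trials
```

The output is itself an interval plus a central estimate, mirroring the extra information (beyond a point estimate) that the abstract attributes to interval-valued subjective probabilities.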
High order effects in cross section sensitivity analysis
Greenspan, E.; Karni, Y.; Gilai, D.
1978-01-01
Two types of high-order effects associated with perturbations in the flux shape are considered: Spectral Fine Structure Effects (SFSE) and non-linearity between changes in performance parameters and data uncertainties. SFSE are investigated in Part I using a simple single-resonance model. Results obtained for each of the resolved, and for representative unresolved, resonances of ²³⁸U in a ZPR-6/7-like environment indicate that SFSE can contribute significantly to the sensitivity of group constants to resonance parameters. Methods to account for SFSE, both for the propagation of uncertainties and for the adjustment of nuclear data, are discussed. A Second Order Sensitivity Theory (SOST) is presented, and its accuracy relative to that of the first-order sensitivity theory and of the direct substitution method is investigated in Part II. The investigation is done for the non-linear problem of the effect of changes in the 297 keV sodium minimum cross section on the transport of neutrons in a deep-penetration problem. It is found that the SOST provides satisfactory accuracy for cross section uncertainty analysis. For the same degree of accuracy, the SOST can be significantly more efficient than the direct substitution method
RCS Leak Rate Calculation with High Order Least Squares Method
Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki
2010-01-01
As part of the action items for the application of Leak-Before-Break (LBB), the RCS leak rate calculation program was upgraded at Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, together with a corresponding noise-reduction scheme. Similar programs have been upgraded and used for real-time RCS leak rate calculation at UCN units 3 and 4 and YGN units 1 and 2. For noise reduction, those programs used the linear regression method, which is a powerful technique. However, the system is not static: alternative flow paths produce mixed trend patterns in the input signals, so the trend of the signal and the linear-regression average do not follow the same pattern. In this study, a high-order least squares method is used to follow the trend of the signal, and the order of calculation is rearranged. The resulting calculation yields a reasonable trend, and the procedure is physically consistent
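A high-order least squares trend fit of the kind described, a polynomial rather than a straight line, can be sketched via the normal equations (a generic sketch; the order and the data in the usage note are illustrative, not plant data):

```python
def polyfit_ls(ts, ys, order=2):
    """Least-squares polynomial trend of the given order via the normal
    equations: sum_k S[i+k] * c_k = T[i], where S_m = sum(t**m) and
    T_i = sum(t**i * y). Returns coefficients, coef[k] multiplying t**k."""
    n = order + 1
    S = [sum(t ** m for t in ts) for m in range(2 * order + 1)]
    T = [sum((t ** i) * y for t, y in zip(ts, ys)) for i in range(n)]
    A = [[S[i + k] for k in range(n)] for i in range(n)]
    # solve A c = T by Gaussian elimination with partial pivoting
    M = [row[:] + [b] for row, b in zip(A, T)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (M[r][n] - sum(M[r][k] * coef[k] for k in range(r + 1, n))) / M[r][r]
    return coef
```

Unlike a straight-line fit, an order-2 or order-3 fit can track a curved trend in the signal; on noise-free quadratic data `polyfit_ls(ts, ys, order=2)` recovers the trend coefficients essentially exactly.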
High-order above-threshold dissociation of molecules
Lu, Peifen; Wang, Junping; Li, Hui; Lin, Kang; Gong, Xiaochun; Song, Qiying; Ji, Qinying; Zhang, Wenbin; Ma, Junyang; Li, Hanxiao; Zeng, Heping; He, Feng; Wu, Jian
2018-03-01
Electrons bound to atoms or molecules can simultaneously absorb multiple photons via above-threshold ionization, which features discrete peaks in the photoelectron spectrum on account of the quantized nature of the light energy. Analogously, above-threshold dissociation of molecules has been proposed to address multiple-photon energy deposition in the nuclei of molecules. In this case, nuclear energy spectra consisting of photon-energy-spaced peaks exceeding the binding energy of the molecular bond are predicted. Although the observation of such phenomena is difficult, this scenario is nevertheless logical and based on fundamental laws. Here, we report conclusive experimental observation of high-order above-threshold dissociation of H₂ in strong laser fields, where the tunneling-ionized electron transfers the absorbed multiphoton energy, in excess of the ionization threshold, to the nuclei via field-driven inelastic rescattering. Our results provide unambiguous evidence that the electron and nuclei of a molecule absorb multiple photons as a whole, and thus above-threshold ionization and above-threshold dissociation must appear simultaneously, which is a cornerstone of present-day strong-field molecular physics.
High-order harmonic conversion efficiency in helium
Crane, J.K.
1992-01-01
Calculated results are presented for the energy, number of photons, and conversion efficiency for high-order harmonic generation in helium. The results show the maximum values that we should expect to achieve experimentally with our current apparatus and the important parameters for scaling this source to higher output. In the desired operating regime where the coherence length, given by $L_{coh}=\pi b/(q-1)$, is greater than the gas column length, $l$, the harmonic output can be summarized by a single equation: $N_q=[(\pi^2 n^2 b^3 \tau_q |d_q|^2)/4h]\{(p/q)(2l/b)^2\}$, where $N_q$ is the number of photons of the $q$-th harmonic, $n$ the atom density, $b$ the laser confocal parameter, $\tau_q$ the pulse width of the harmonic radiation, $q$ the harmonic order, and $p$ the effective order of nonlinearity. (Note that the term in braces, the phase-matching function, has been separated from the rest of the expression in order to be consistent with the relevant literature.)
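As a quick sanity check of the coherence-length condition L_coh = pi*b/(q-1) > l quoted above, the following short script evaluates it for a few harmonic orders; the values of b and l are invented placeholders, not the paper's experimental parameters.

```python
import math

def coherence_length(q, b):
    """Coherence length of the q-th harmonic for confocal parameter b."""
    return math.pi * b / (q - 1)

b = 1.0e-3   # laser confocal parameter [m] (assumed, illustrative)
l = 1.0e-4   # gas column length [m] (assumed, illustrative)

# The phase-matched regime of the abstract requires L_coh > l; higher orders
# shorten L_coh and eventually violate the condition.
for q in (11, 21, 41):
    print(q, coherence_length(q, b) > l)
```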
Mode of conception of triplets and high order multiple pregnancies.
Basit, I
2012-03-01
A retrospective audit was performed of all high order multiple pregnancies (HOMPs) delivered in three maternity hospitals in Dublin between 1999 and 2008. The mode of conception for each pregnancy was established with a view to determining means of reducing their incidence. A total of 101 HOMPs occurred: 93 triplet, 7 quadruplet and 1 quintuplet. Information regarding the mode of conception was available for 78 (81%) pregnancies. Twenty-eight (27.7%) were spontaneous, 34 (33.7%) followed IVF/ICSI/FET treatment (in-vitro fertilisation, intracytoplasmic sperm injection, frozen embryo transfer), 16 (15.8%) resulted from Clomiphene Citrate treatment and 6 (6%) followed ovulation induction with gonadotrophins. Triplet and HOMPs are a major cause of maternal, fetal and neonatal morbidity. Many are iatrogenic, arising from fertility treatments including Clomiphene. Reducing the number of embryos transferred will address IVF/ICSI/FET-related multiple pregnancy rates and this is currently happening in Ireland. Clomiphene and gonadotrophins should only be prescribed when appropriate resources are available to monitor patients adequately.
Design and high order optimization of the ATF2 lattices
Marin, E; Woodley, M; Kubo, K; Okugi, T; Tauchi, T; Urakawa, J; Tomas, R
2013-01-01
The next generation of future linear colliders (LC) demands nanometer beam sizes at the interaction point (IP) in order to reach the required luminosity. The final focus system (FFS) of a LC is meant to deliver such small beam sizes. The Accelerator Test Facility (ATF) aims to test the feasibility of the new local chromaticity correction scheme on which the future LCs are based. To this end the ATF2 nominal and ultra-low beta* lattices are designed to vertically focus the beam at the IP to 37 nm and 23 nm, respectively, if error-free lattices are considered. However, simulations show that the measured field errors of the ATF2 magnets preclude reaching the mentioned spot sizes. This paper describes the optimization of high-order aberrations of the ATF2 lattices in order to minimize the detrimental effect of the measured multipole components for both ATF2 lattices. Specifically, three solutions are studied: replacement of the final focusing quadrupole (QF1FF), insertion of octupole magnets, and optics modification....
High orders of perturbation theory. Are renormalons significant?
Suslov, I.M.
1999-01-01
According to Lipatov [Sov. Phys. JETP 45, 216 (1977)], the high orders of perturbation theory are determined by saddle-point configurations, i.e., instantons, of the corresponding functional integrals. According to another opinion, the contributions of individual large diagrams, i.e., renormalons, which, according to 't Hooft [The Whys of Subnuclear Physics: Proceedings of the 1977 International School of Subnuclear Physics (Erice, Trapani, Sicily, 1977), A. Zichichi (Ed.), Plenum Press, New York (1979)], are not contained in the Lipatov contribution, are also significant. The history of the concept of renormalons is presented, and the arguments in favor of and against their existence are discussed. The analytic properties of the Borel transforms of functional integrals, Green's functions, vertex parts, and scaling functions are investigated in the case of φ⁴ theory. Their analyticity in a complex plane with a cut from the first instanton singularity to infinity (the Le Guillou-Zinn-Justin hypothesis [Phys. Rev. Lett. 39, 95 (1977); Phys. Rev. B 21, 3976 (1980)]) is proved. This rules out the existence of the renormalon singularities pointed out by 't Hooft and demonstrates the nonconstructiveness of the concept of renormalons as a whole. The results can be interpreted as an indication of the internal consistency of φ⁴ theory.
High Order Differential Frequency Hopping: Design and Analysis
Yong Li
2015-01-01
Full Text Available This paper considers spectrally efficient differential frequency hopping (DFH) system design. Relying on time-frequency diversity over a large spectrum and high-speed frequency hopping, DFH systems are robust against hostile jamming interference. However, the spectral efficiency of conventional DFH systems is very low because only the frequency of each channel is used. To improve the system capacity, in this paper we propose an innovative high-order differential frequency hopping (HODFH) scheme. Unlike in traditional DFH, where the message is carried by the frequency relationship between adjacent hops using first-order differential coding, in HODFH the message is carried by the frequency and phase relationship using second- or higher-order differential coding. As a result, system efficiency is increased significantly, since the additional information transmission is achieved by the higher-order differential coding at no extra cost in either bandwidth or power. Quantitative performance analysis of the proposed scheme demonstrates that transmission through the frequency and phase relationship using second- or higher-order differential coding essentially introduces another dimension to the signal space, and the corresponding coding gain can increase the system efficiency.
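The idea of carrying extra information in the phase relationship on top of the frequency relationship can be sketched as below. This is a loose toy illustration (channel and phase-state counts are invented, and both streams here use simple first-order differences), not the authors' exact HODFH coding.

```python
# Toy sketch: one symbol stream rides on the hop-to-hop frequency relationship,
# a second stream on the hop-to-hop phase relationship. All parameters invented.
NUM_CHANNELS = 8
NUM_PHASES = 4

def hodfh_encode(freq_syms, phase_syms, f0=0, p0=0):
    """Map two symbol streams to a sequence of (frequency, phase) hops."""
    hops, f, p = [], f0, p0
    for fs, ps in zip(freq_syms, phase_syms):
        f = (f + fs) % NUM_CHANNELS   # frequency carries one symbol per hop
        p = (p + ps) % NUM_PHASES     # phase carries an additional symbol
        hops.append((f, p))
    return hops

def hodfh_decode(hops, f0=0, p0=0):
    """Recover both symbol streams from consecutive hop differences."""
    freq_syms, phase_syms = [], []
    f_prev, p_prev = f0, p0
    for f, p in hops:
        freq_syms.append((f - f_prev) % NUM_CHANNELS)
        phase_syms.append((p - p_prev) % NUM_PHASES)
        f_prev, p_prev = f, p
    return freq_syms, phase_syms

fs, ps = [3, 1, 7, 2], [1, 0, 3, 2]
print(hodfh_decode(hodfh_encode(fs, ps)) == (fs, ps))
```

The extra phase stream is exactly the "additional dimension" of the signal space: the hop sequence is unchanged in bandwidth, yet twice as many symbols are conveyed.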
Integral equations with contrasting kernels
Theodore Burton
2008-01-01
Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
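The two contrasting kernels can be compared numerically with a standard trapezoidal time-marching solver for Volterra equations of the second kind; the forcing a(t) and the grid below are chosen only for illustration.

```python
import numpy as np

# Solve x(t) = a(t) - ∫_0^t C(t,s) x(s) ds by the trapezoidal rule,
# marching forward in t (a standard Volterra-II discretization).
def solve_volterra(a, C, t):
    h = t[1] - t[0]
    x = np.empty_like(t)
    x[0] = a(t[0])                      # the integral vanishes at t = 0
    for n in range(1, t.size):
        s = 0.5 * C(t[n], t[0]) * x[0]  # trapezoid over the known history
        for k in range(1, n):
            s += C(t[n], t[k]) * x[k]
        x[n] = (a(t[n]) - h * s) / (1.0 + 0.5 * h * C(t[n], t[n]))
    return x

t = np.linspace(0.0, 5.0, 501)
a = lambda tt: 1.0                               # illustrative forcing
C_star = lambda u, v: np.log(np.e + (u - v))     # growing-weight kernel
D_star = lambda u, v: 1.0 / (1.0 + (u - v))      # decaying-weight kernel
x_C = solve_volterra(a, C_star, t)
x_D = solve_volterra(a, D_star, t)
print(x_C[-1], x_D[-1])
```

Plotting x_C against x_D shows how strongly the opposite weightings shape the solution for a given forcing.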
Zhong Xiaolin; Tatineni, Mahidhar
2003-01-01
The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest, to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes is tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat plate boundary layer flows. The high-order non-uniform-grid schemes (up to the 11th-order) are subsequently applied for the simulation of the receptivity of a hypersonic boundary layer to free stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for the wall bounded supersonic flow
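The pairing of grid clustering with high-order one-sided boundary closures can be sketched by computing derivative weights on a stretched grid from a local polynomial (Vandermonde) fit; this is a generic illustration of non-uniform-grid finite differences, not the authors' specific schemes.

```python
import math
import numpy as np

def fd_weights(x, x0, m=1):
    """Weights w such that f^(m)(x0) ≈ w @ f(x) for the nodes x."""
    n = x.size
    A = np.vander(x - x0, n, increasing=True).T   # A[k, j] = (x_j - x0)^k
    b = np.zeros(n)
    b[m] = math.factorial(m)                      # match the m-th Taylor term
    return np.linalg.solve(A, b)

# Chebyshev-type clustering of grid points toward the wall at x = 0 on [0, 1].
x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, 9)))
w = fd_weights(x[:5], x[0], m=1)   # 5-node one-sided boundary closure
f = np.sin(x)                      # test function with f'(0) = 1
err = abs(w @ f[:5] - 1.0)
print(err)
```

The clustered one-sided stencil reproduces the wall derivative to roughly fourth-order accuracy, which is the kind of boundary closure the non-uniform-grid approach relies on.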
Amini-Afshar, Mostafa; Bingham, Harry B.
At the 32nd IWWWFB in Dalian, we presented our implementation of the far-field method for second-order wave drift forces based on the Kochin function, using the open-source seakeeping code OceanWave3D-Seakeeping. In that work we used Maruo's method (Maruo, 1960), and calculated the added resistance by a line integral along the azimuthal angle Θ around the body in the far-field. Some difficulties were encountered with regard to evaluating the singular and improper integrals, together with identifying the highest frequency limit where we can practically and reliably calculate the Kochin function by a numerical integration over the surface of the body. Motivated by discussions with Prof. Kashiwagi during this workshop (Kashiwagi, 2017), we subsequently applied the Hanaoka transformation (Maruo, 1960) to change the integration domain from Θ to a wave-number-like variable m. This allows a method developed...
Piatkowski, Marian; Müthing, Steffen; Bastian, Peter
2018-03-01
In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H (div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition, and also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new
Model selection for Gaussian kernel PCA denoising
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
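A simplified reading of the Parallel-Analysis idea for Gaussian kernel PCA might look as follows: eigenvalues of the centered Gram matrix are compared against those of column-permuted surrogate data, and components beating the surrogate level are retained. The kernel scale, data, and permutation count below are invented; this is a sketch of the idea, not the authors' exact algorithm.

```python
import numpy as np

def gauss_gram(X, scale):
    """Gaussian (RBF) Gram matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(D, 0.0) / (2.0 * scale**2))

def centered_eigvals(K):
    """Eigenvalues of the doubly centered Gram matrix, descending."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

rng = np.random.default_rng(1)
# Data that is essentially 2-dimensional, embedded in 5 dimensions.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5))
X += 0.05 * rng.normal(size=X.shape)

scale = 2.0                                   # kernel scale (hand-picked here)
ev = centered_eigvals(gauss_gram(X, scale))
perm_ev = np.zeros_like(ev)
n_perm = 20
for _ in range(n_perm):
    # Permuting each column destroys the cross-column structure
    # while preserving the marginals.
    Xp = np.column_stack([rng.permutation(col) for col in X.T])
    perm_ev += centered_eigvals(gauss_gram(Xp, scale))
perm_ev /= n_perm

order = int(np.sum(ev > perm_ev))             # selected model order
print(order)
```

Repeating the comparison over a grid of kernel scales, and keeping the scale that maximizes the gap between data and surrogate spectra, gives the combined scale/order selection described in the abstract.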
Optimization of Finite-Differencing Kernels for Numerical Relativity Applications
Roberto Alfieri
2018-05-01
Full Text Available A simple optimization strategy for the computation of 3D finite-differencing kernels on many-core architectures is proposed. The 3D finite-differencing computation is split direction-by-direction and exploits two levels of parallelism: in-core vectorization and multi-threaded shared-memory parallelization. The main application of this method is to accelerate the high-order stencil computations in numerical relativity codes. Our proposed method provides substantial speedup in computations involving tensor contractions and 3D stencil calculations on different processor microarchitectures, including Intel Knights Landing.
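The direction-by-direction splitting can be illustrated with a fourth-order 1D second-derivative stencil applied along each axis of a 3D periodic field; NumPy's contiguous-axis operations stand in here for the in-core vectorization (this is an illustration of the splitting, not the authors' optimized kernels).

```python
import numpy as np

# 5-point 4th-order stencil for d2/dx2.
c = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / 12.0

def d2_axis(u, h, axis):
    """Apply the 1D stencil along one axis of a periodic 3D field."""
    out = np.zeros_like(u)
    for k, ck in enumerate(c):
        out += ck * np.roll(u, k - 2, axis=axis)
    return out / h**2

n = 32
h = 2.0 * np.pi / n
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.sin(X) * np.sin(Y) * np.sin(Z)          # periodic test field

# The 3D Laplacian is assembled direction by direction.
lap = sum(d2_axis(u, h, axis=a) for a in range(3))
err = np.max(np.abs(lap + 3.0 * u))            # exact Laplacian is -3u
print(err)
```

Each pass touches the array along a single axis, which is what makes the per-direction sweeps amenable to vectorization and thread-level parallelism.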
RTOS kernel in portable electrocardiograph
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
De Basabe, Jonás D.
2010-04-01
We investigate the stability of some high-order finite element methods, namely the spectral element method (SEM) and the interior-penalty discontinuous Galerkin method (IP-DGM), for acoustic or elastic wave propagation, which have become increasingly popular in the recent past. We consider the Lax-Wendroff method (LWM) for time stepping and show that it allows for a larger time step than the classical leap-frog finite difference method, with higher-order accuracy. In particular the fourth-order LWM allows for a time step 73 per cent larger than that of the leap-frog method; the computational cost is approximately double per time step, but the larger time step partially compensates for this additional cost. Necessary, but not sufficient, stability conditions are given for the mentioned methods for orders up to 10 in space and time. The stability conditions for IP-DGM are approximately 20 and 60 per cent more restrictive than those for SEM in the acoustic and elastic cases, respectively. © 2010 The Authors Journal compilation © 2010 RAS.
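For reference, the classical second-order Lax-Wendroff update for linear advection, the simplest member of the LWM family discussed above, can be sketched as follows (grid size and CFL number are chosen only for illustration):

```python
import numpy as np

def lax_wendroff(u, cfl):
    """One Lax-Wendroff step for u_t + a u_x = 0 on a periodic grid."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return u - 0.5 * cfl * (up - um) + 0.5 * cfl**2 * (up - 2.0 * u + um)

n, cfl = 200, 0.8                       # stable: |cfl| <= 1
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)
steps = int(round(n / cfl))             # advance by exactly one period (a = 1)
for _ in range(steps):
    u = lax_wendroff(u, cfl)
err = np.max(np.abs(u - np.sin(2.0 * np.pi * x)))
print(err)
```

The scheme is stable up to CFL 1, which is the kind of time-step headroom (relative to leap-frog) that the abstract quantifies for the higher-order variants.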
Nonlinear magnetohydrodynamics simulation using high-order finite elements
Plimpton, Steven James; Schnack, D.D.; Tarditi, A.; Chu, M.S.; Gianakon, T.A.; Kruger, S.E.; Nebel, R.A.; Barnes, D.C.; Sovinec, C.R.; Glasser, A.H.
2005-01-01
A conforming representation composed of 2D finite elements and finite Fourier series is applied to 3D nonlinear non-ideal magnetohydrodynamics using a semi-implicit time-advance. The self-adjoint semi-implicit operator and variational approach to spatial discretization are synergistic and enable simulation in the extremely stiff conditions found in high temperature plasmas without sacrificing the geometric flexibility needed for modeling laboratory experiments. Growth rates for resistive tearing modes with experimentally relevant Lundquist number are computed accurately with time-steps that are large with respect to the global Alfvén time and moderate spatial resolution when the finite elements have basis functions of polynomial degree (p) two or larger. An error diffusion method controls the generation of magnetic divergence error. Convergence studies show that this approach is effective for continuous basis functions with p ≥ 2, where the number of test functions for the divergence control terms is less than the number of degrees of freedom in the expansion for vector fields. Anisotropic thermal conduction at realistic ratios of parallel to perpendicular conductivity (χ∥/χ⊥) is computed accurately with p ≥ 3 without mesh alignment. A simulation of tearing-mode evolution for a shaped toroidal tokamak equilibrium demonstrates the effectiveness of the algorithm in nonlinear conditions, and its results are used to verify the accuracy of the numerical anisotropic thermal conduction in 3D magnetic topologies.
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.
High-order harmonics generation from overdense plasmas
Quere, F.; Thaury, C.; Monot, P.; Martin, Ph.; Geindre, J.P.; Audebert, P.; Marjoribanks, R.
2006-01-01
Complete text of publication follows. When an intense laser beam reflects on an overdense plasma generated on a solid target, high-order harmonics of the incident laser frequency are observed in the reflected beam. This process provides a way to produce XUV femtosecond and attosecond pulses in the μJ range from ultrafast ultraintense lasers. Studying the mechanisms responsible for this harmonic emission is also of strong fundamental interest: just as HHG in gases has been instrumental in providing a comprehensive understanding of basic intense laser-atom interactions, HHG from solid-density plasmas is likely to become a unique tool to investigate many key features of laser-plasma interactions at high intensities. We will present both experimental and theoretical evidence that two mechanisms contribute to this harmonic emission: - Coherent Wake Emission: in this process, harmonics are emitted by plasma oscillations in the overdense plasma, triggered in the wake of jets of Brunel electrons generated by the laser field. - The relativistic oscillating mirror: in this process, the intense laser field drives a relativistic oscillation of the plasma surface, which in turn gives rise to a periodic phase modulation of the reflected beam, and hence to the generation of harmonics of the incident frequency. Left graph: experimental harmonic spectrum from a polypropylene target, obtained with 60 fs laser pulses at 10^19 W/cm^2, with a very high temporal contrast (10^10). The plasma frequency of this target corresponds to harmonics 15-16, thus excluding the CWE mechanism for the generation of harmonics of higher orders. Images on the right: harmonic spectra for orders 13 to 18, for different distances z between the target and the best focus. At the highest intensity (z=0), harmonics emitted by the ROM mechanism are observed above the 15th order. These harmonics have a much smaller spectral width than those due to CWE (below the 15th order). These ROM harmonics vanish as soon
Model selection in kernel ridge regression
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study
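A minimal Gaussian-kernel ridge regression shows the two tuning parameters in play, the kernel scale and the ridge penalty; the data and parameter values below are invented for illustration.

```python
import numpy as np

def gauss_kernel(A, B, scale):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * scale**2))

def krr_fit(X, y, scale, ridge):
    """Dual coefficients alpha = (K + ridge*I)^{-1} y."""
    K = gauss_kernel(X, X, scale)
    return np.linalg.solve(K + ridge * np.eye(len(y)), y)

def krr_predict(X_train, alpha, X_new, scale):
    return gauss_kernel(X_new, X_train, scale) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

alpha = krr_fit(X, y, scale=1.0, ridge=0.1)
Xt = np.linspace(-3, 3, 50)[:, None]
pred = krr_predict(X, alpha, Xt, scale=1.0)
rmse = np.sqrt(np.mean((pred - np.sin(Xt[:, 0]))**2))
print(rmse)
```

A small grid over (scale, ridge) scored by cross-validation is exactly the selection procedure the abstract's guidelines target.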
Multiple Kernel Learning with Data Augmentation
2016-11-22
Multiple Kernel Learning with Data Augmentation. JMLR: Workshop and Conference Proceedings 63:49-64, 2016 (ACML 2016). Khanh Nguyen (nkhanh@deakin.edu.au), ...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to
A kernel version of multivariate alteration detection
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
Zhou, Anran; Xie, Weixin; Pei, Jihong
2018-06-01
Accurate detection of maritime targets in infrared imagery under various sea clutter conditions is always a challenging task. The fractional Fourier transform (FRFT) is the extension of the Fourier transform to fractional orders and carries richer spatial-frequency information. By combining it with high-order statistic filtering, a new ship detection method is proposed. First, the proper range of the angle parameter is determined to make it easier for the ship components and background to be separated. Second, a new high-order statistic curve (HOSC) at each fractional frequency point is designed. It is proved that the maximal peak interval in the HOSC reflects the target information, while the points outside the interval reflect the background, and that the HOSC value for the ship is much larger than that for the sea clutter. Then, the curve's maximal target peak interval is searched for and extracted by bandpass filtering in the fractional Fourier domain. The HOSC value outside the peak interval decreases rapidly to 0, so the background is effectively suppressed. Finally, the detection result is obtained by double-threshold segmentation and the target region selection method. The results show the proposed method performs well for maritime target detection under strong clutter.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
Bruno, Oscar P., E-mail: obruno@caltech.edu; Lintner, Stéphane K.
2013-11-01
We present a novel methodology for the numerical solution of problems of diffraction by infinitely thin screens in three-dimensional space. Our approach relies on new integral formulations as well as associated high-order quadrature rules. The new integral formulations involve weighted versions of the classical integral operators related to the thin-screen Dirichlet and Neumann problems as well as a generalization to the open-surface problem of the classical Calderón formulae. The high-order quadrature rules we introduce for these operators, in turn, resolve the multiple Green function and edge singularities (which occur at arbitrarily close distances from each other, and which include weakly singular as well as hypersingular kernels) and thus give rise to super-algebraically fast convergence as the discretization sizes are increased. When used in conjunction with Krylov-subspace linear algebra solvers such as GMRES, the resulting solvers produce results of high accuracy in small numbers of iterations for low and high frequencies alike. We demonstrate our methodology with a variety of numerical results for screen and aperture problems at high frequencies—including simulation of classical experiments such as the diffraction by a circular disc (featuring in particular the famous Poisson spot), evaluation of interference fringes resulting from diffraction across two nearby circular apertures, as well as solution of problems of scattering by more complex geometries consisting of multiple scatterers and cavities.
Complex use of cottonseed kernels
Glushenkova, A I
1977-01-01
A review with 41 references is made of the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.
Kernel regression with functional response
Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe
2011-01-01
We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.
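A Nadaraya-Watson-type kernel estimator with curve-valued predictor and response can be sketched as below, in the spirit of the setting above; the bandwidth, the Gaussian weight, and the synthetic data are invented and this is not the authors' estimator.

```python
import numpy as np

def nw_functional(X_curves, Y_curves, x_new, h):
    """Kernel-weighted average of functional responses.

    Distances between functional predictors are L2 norms on the
    sampling grid; h is the bandwidth (hand-picked here).
    """
    d = np.sqrt(np.sum((X_curves - x_new[None, :])**2, axis=1))
    w = np.exp(-0.5 * (d / h)**2)
    return (w[:, None] * Y_curves).sum(0) / w.sum()

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 50)
# Predictors: sinusoids with random amplitude; responses: their squares.
amps = rng.uniform(0.5, 2.0, size=200)
X_curves = amps[:, None] * np.sin(2 * np.pi * grid)[None, :]
Y_curves = X_curves**2

x_new = 1.2 * np.sin(2 * np.pi * grid)
y_hat = nw_functional(X_curves, Y_curves, x_new, h=0.5)
err = np.max(np.abs(y_hat - x_new**2))
print(err)
```

The rates in the abstract make precise how such an estimator behaves as the small ball probability of the functional predictor shrinks with the bandwidth.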
GRIM: Leveraging GPUs for Kernel integrity monitoring
Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris
2016-01-01
Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious
Paramecium: An Extensible Object-Based Kernel
van Doorn, L.; Homburg, P.; Tanenbaum, A.S.
1995-01-01
In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Veto-Consensus Multiple Kernel Learning
Zhou, Y.; Hu, N.; Spanos, C.J.
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The
High-Order Finite-Difference Solution of the Poisson Equation with Interface Jump Conditions II
Marques, Alexandre; Nave, Jean-Christophe; Rosales, Rodolfo
2010-11-01
The Poisson equation with jump discontinuities across an interface is of central importance in Computational Fluid Dynamics. In prior work, Marques, Nave, and Rosales have introduced a method to obtain fourth-order accurate solutions for the constant coefficient Poisson problem. Here we present an extension of this method to solve the variable coefficient Poisson problem to fourth-order of accuracy. The extended method is based on local smooth extrapolations of the solution field across the interface. The extrapolation procedure uses a combination of cubic Hermite interpolants and a high-order representation of the interface using the Gradient-Augmented Level-Set technique. This procedure is compatible with the use of standard discretizations for the Laplace operator, and leads to modified linear systems which have the same sparsity pattern as the standard discretizations. As a result, standard Poisson solvers can be used with only minimal modifications. Details of the method and applications will be presented.
Sjogreen, Bjoern; Yee, H. C.
2007-01-01
Flows containing steady or nearly steady strong shocks in parts of the flow field, and unsteady turbulence with shocklets in other parts, are difficult to capture accurately and efficiently employing the same numerical scheme even under the multiblock grid or adaptive grid refinement framework. On one hand, sixth-order or higher shock-capturing methods are appropriate for unsteady turbulence with shocklets. On the other hand, lower-order shock-capturing methods are more effective for strong steady shocks in terms of convergence. In order to minimize the shortcomings of low-order and high-order shock-capturing schemes for the subject flows, a multi-block overlapping grid with different orders of accuracy on different blocks is proposed. Test cases to illustrate the performance of the new solver are included.
Immersed boundary method combined with a high order compact scheme on half-staggered meshes
Księżyk, M; Tyliszczak, A
2014-01-01
This paper presents the results of computations of incompressible flows performed with a high-order compact scheme and the immersed boundary method. The solution algorithm is based on the projection method implemented using the half-staggered grid arrangement, in which the velocity components are stored in the same locations while the pressure nodes are shifted by half a cell size. The time discretization is performed using the predictor-corrector method, in which the forcing terms used in the immersed boundary method act in both steps. The solution algorithm is verified on 2D flow problems (flow in a lid-driven skewed cavity and flow over a backward-facing step) and proves very accurate on computational meshes comparable to those used in classical approaches, i.e. approaches not based on the immersed boundary method.
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
2017-02-01
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
New high order FDTD method to solve EMC problems
N. Deymier
2015-10-01
In the electromagnetic compatibility (EMC) context, we are interested in developing new accurate methods to solve Maxwell's equations in the time domain efficiently and accurately. Indeed, usual methods such as FDTD or FVTD exhibit important dissipative and/or dispersive errors which prevent obtaining a good numerical approximation of the physical solution for a given industrial scene, unless a mesh with a very small cell size is used. To avoid this problem, schemes like the Discontinuous Galerkin (DG) method, based on higher-order spatial approximations, have been introduced and studied on unstructured meshes. However, the cost of this kind of method can become prohibitive depending on the mesh used. In this paper, we first present a higher-order spatial approximation method on cartesian meshes. It is based on a finite element approach and recovers, at order 1, the well-known Yee scheme. Next, to deal with EMC problems, a non-oriented thin-wire formalism is proposed for this method. Finally, several examples are given to present the benefits of this new method by comparison with both the Yee scheme and DG approaches.
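The order-1 limit mentioned above is the classical Yee scheme; in 1D it reduces to a leapfrog update of staggered E and H fields. The normalized units, grid size, and soft Gaussian source below are assumptions of this toy sketch, not details from the paper.

```python
import numpy as np

def yee_1d(nx=200, nt=300, courant=0.5):
    """Minimal 1D Yee FDTD update (normalized units, PEC ends):
    E lives on integer nodes, H on half nodes, updated leapfrog-style."""
    ez = np.zeros(nx)          # E-field at integer nodes
    hy = np.zeros(nx - 1)      # H-field at half nodes
    for n in range(nt):
        hy += courant * np.diff(ez)            # Faraday's law update
        ez[1:-1] += courant * np.diff(hy)      # Ampere's law update
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

ez = yee_1d()
```

With a Courant number of 0.5 the update is stable, so the field stays bounded while the injected pulse propagates and reflects off the PEC boundaries.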
Senyue Zhang
2016-01-01
Because the performance of an extreme learning machine (ELM) correlates strongly with its kernel function, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for an extreme learning machine. Then the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The results demonstrate that the robustness and generalization performance of the proposed method outperform those of extreme learning machines with other kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
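The kernel-ELM training step reduces to a regularized linear solve against the kernel matrix, beta = (I/C + K)^-1 T, following the standard kernel-ELM formulation. The RBF kernel below is a stand-in assumption, not the paper's generalized triangle Hermitian kernel, and the XOR data are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, T, C=100.0, gamma=1.0):
    """Kernel ELM output weights: beta = (I/C + K)^-1 T."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kernel_elm_predict(Xtr, beta, Xte, gamma=1.0):
    return rbf_kernel(Xte, Xtr, gamma) @ beta

# XOR: not linearly separable, but separable with an RBF kernel
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([-1., 1., 1., -1.])
beta = kernel_elm_fit(X, T)
pred = np.sign(kernel_elm_predict(X, beta, X))
```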
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
With the aim of enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) and thereby modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran, but less 2-furanmethanol and 2-furanmethanol acetate, were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
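Schematically, the construction replaces the Fourier kernel of the traditional Wigner function with a Laplace-transform kernel. The normalization and integration conventions below are schematic assumptions; the paper's exact definitions may differ.

```latex
% Traditional Wigner function (Fourier kernel, real momentum p):
W(x,p) = \int_{-\infty}^{\infty}
  \psi\!\left(x + \tfrac{y}{2}\right)
  \psi^{*}\!\left(x - \tfrac{y}{2}\right) e^{-ipy}\, dy
% Laplace-kernel variant: the Fourier kernel is replaced by a
% Laplace-transform kernel, so the momentum variable s may be complex:
W_{L}(x,s) = \int
  \psi\!\left(x + \tfrac{y}{2}\right)
  \psi^{*}\!\left(x - \tfrac{y}{2}\right) e^{-sy}\, dy
```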
Dose calculation methods in photon beam therapy using energy deposition kernels
Ahnesjoe, A.
1991-01-01
The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad-beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel-based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions by a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods, one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel-based dose calculations. (au)
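In a homogeneous medium, the core of the lattice algorithm is a convolution of the primary-interaction energy (TERMA) with the energy deposition kernel. A 1D FFT sketch follows; the toy exponential kernel is illustrative, not a Monte Carlo-derived one.

```python
import numpy as np

def dose_superposition(terma, kernel):
    """Point-kernel convolution: dose = TERMA (*) energy-deposition
    kernel, computed by linear (zero-padded) FFT convolution."""
    n = len(terma) + len(kernel) - 1
    D = np.fft.irfft(np.fft.rfft(terma, n) * np.fft.rfft(kernel, n), n)
    return D

# Sanity check: a delta-function TERMA must reproduce the kernel itself
terma = np.zeros(32)
terma[0] = 1.0
kernel = np.exp(-np.arange(16) / 3.0)       # toy monotone kernel
dose = dose_superposition(terma, kernel)
```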
Irfan Abbas
2017-01-01
At present, forex traders generally still work from raw exchange-rate figures drawn from different sources; they only see the rate prevailing at a given moment, which makes it difficult to analyze or predict future exchange-rate movements. Forex traders therefore usually rely on indicators, which are decision-making tools, to analyze and predict future values. Forex trading is the trading of one country's currency against another country's currency. It takes place globally between the world's financial centers, with the world's major banks handling the bulk of transactions. Forex trading offers a potentially profitable investment with small capital and high returns, since the leverage built into forex trading systems multiplies the invested capital when a buy/sell prediction is accurate; however, it also carries a high level of risk, and losses can be avoided only by knowing the right time to trade (buy or sell). Traders investing in the foreign exchange market are expected to be able to analyze circumstances and situations in order to predict differences in currency exchange rates. Forex price movements form patterns (curves moving up and down) that greatly assist traders in making decisions, and the movement of the curve is used as an indicator in the decision to buy or sell. This study compares kernel types for the Support Vector Machine (SVM) in predicting curve movements in live forex trading, using GBPUSD data on the 1H timeframe. From the results and discussion it can be concluded that the dot, multiquadric, and neural kernels are inappropriate for non-linear data such as forex data following trend-curve patterns, because the curves they generate are linear (straight) …
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, meaning that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, namely normal, Epanechnikov, biweight, and triweight. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. On the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
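The classification rule can be sketched as comparing class-conditional Parzen densities, here with the normal kernel the study finds relevant. Equal priors, a single 1D feature, and the synthetic data below are simplifying assumptions of this sketch.

```python
import numpy as np

def kde(x, data, h):
    """Parzen density estimate with a normal (Gaussian) kernel."""
    u = (x - data) / h
    return np.mean(np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi)))

def kernel_discriminant(x, good, bad, h=0.5):
    """Classify by the larger class-conditional kernel density
    (equal priors assumed) -- the nonparametric rule behind
    kernel discriminant credit scoring."""
    return "accept" if kde(x, good, h) >= kde(x, bad, h) else "reject"

rng = np.random.default_rng(0)
good = rng.normal(2.0, 1.0, 200)    # toy 'creditworthy' score feature
bad = rng.normal(-2.0, 1.0, 200)    # toy 'default' score feature
decisions = [kernel_discriminant(x, good, bad) for x in (2.5, -2.5)]
```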
Testing Infrastructure for Operating System Kernel Development
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge, as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and by using virtual machine introspection both to communicate with the kernel and to obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.
Kernel parameter dependence in spatial factor analysis
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.
Theoretical analysis of dynamic chemical imaging with lasers using high-order harmonic generation
Van-Hoang Le; Anh-Thu Le; Xie Ruihua; Lin, C. D.
2007-01-01
We report theoretical investigations of the tomographic procedure suggested by Itatani et al. [Nature (London) 432, 867 (2004)] for reconstructing highest occupied molecular orbitals (HOMOs) using high-order harmonic generation (HHG). Due to the limited range of harmonics from the plateau region, we found that even under the most favorable assumptions it is still very difficult to obtain accurate HOMO wave functions using the tomographic procedure; however, the symmetry of the HOMOs and the internuclear separation between the atoms can be accurately extracted, especially when lasers of longer wavelengths are used to generate the HHG. Since the tomographic procedure relies on approximating the continuum wave functions in the recombination process by plane waves, the method can no longer be applied once the theory is improved beyond this approximation. For future chemical imaging with lasers, we suggest focusing instead on extracting the positions of atoms in molecules, by developing an iterative method such that the theoretically calculated macroscopic HHG spectra best fit the experimental HHG data.
RKRD: Runtime Kernel Rootkit Detection
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as the Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach incurs a high computational cost for high-dimensional data and large datasets, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA exhibits superior self-organizing capability compared to BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal kernel shape and bandwidth for atomistic support of continuum stress
Ulz, Manfred H; Moran, Sean J
2013-01-01
The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
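The kernel-averaging step with the uniform (top-hat) kernel shape the paper finds preferable can be sketched as below. The scalar "values" stand in for per-atom virial stress contributions, and the lattice, field, and bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_average(point, positions, values, bandwidth):
    """Average per-atom contributions at a continuum point using a
    uniform (top-hat) kernel of the given bandwidth."""
    d = np.linalg.norm(positions - point, axis=1)
    w = (d <= bandwidth).astype(float)           # uniform kernel weights
    if w.sum() == 0.0:
        raise ValueError("no atoms inside the kernel support")
    return (w @ values) / w.sum()

# Toy 2D lattice with a uniform 'stress' field: the kernel average
# must recover the constant field exactly, whatever the bandwidth.
pos = np.array([[i, j] for i in range(5) for j in range(5)], float)
vals = np.full(len(pos), 3.0)
avg = kernel_average(np.array([2.0, 2.0]), pos, vals, bandwidth=1.5)
```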
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave equation modeling is vital for many applications in fields such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grids in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
High-order multi-implicit spectral deferred correction methods for problems of reactive flow
Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.
2003-01-01
Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scale ordering, the generalization to any ordering combination is straightforward.
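The sweep-by-sweep order increase can be illustrated with a minimal single-rate SDC step. This is a simplified, fully explicit sketch, not the multi-implicit scheme of the paper; equispaced substeps and forward-Euler sweeps are assumptions made for brevity.

```python
import numpy as np

def sdc_step(f, y0, t0, dt, nodes=4, sweeps=3):
    """One explicit spectral-deferred-correction step: a low-order
    (forward Euler) provisional solution is refined by correction
    sweeps, each using the integral of the polynomial interpolant of f
    over the substep nodes."""
    t = t0 + dt * np.linspace(0.0, 1.0, nodes)
    # Integration matrix: S[m, j] = integral over [t_m, t_{m+1}] of the
    # Lagrange basis polynomial attached to node j
    S = np.zeros((nodes - 1, nodes))
    for j in range(nodes):
        e = np.zeros(nodes)
        e[j] = 1.0
        p = np.polyint(np.polyfit(t, e, nodes - 1))
        S[:, j] = np.polyval(p, t[1:]) - np.polyval(p, t[:-1])
    # Provisional solution by forward Euler
    y = np.empty(nodes)
    y[0] = y0
    for m in range(nodes - 1):
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
    # Correction sweeps: each raises the formal order by one
    for _ in range(sweeps):
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = np.empty(nodes)
        ynew[0] = y0
        for m in range(nodes - 1):
            ynew[m + 1] = (ynew[m]
                           + (t[m + 1] - t[m]) * (f(t[m], ynew[m]) - F[m])
                           + S[m] @ F)
        y = ynew
    return y[-1]

# y' = -y, y(0) = 1: compare against exp(-1) after one big step
f = lambda t, y: -y
err_euler = abs((1 - 1.0 / 3) ** 3 - np.exp(-1))   # 3 plain Euler substeps
err_sdc = abs(sdc_step(f, 1.0, 0.0, 1.0) - np.exp(-1))
```

The corrected step is markedly more accurate than the plain Euler substeps it is built from.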
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
Chunmei Liu
2016-01-01
This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space trained on data with the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space obtained by the nonlinear manifold learning technique and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and object contour while avoiding background clutter. In the experimental part, we use a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.
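The baseline the adaptive shape kernel improves on is plain kernel mean shift: iterate the kernel-weighted mean until it settles on a density mode. The fixed Gaussian kernel, bandwidth, and synthetic data below are illustrative assumptions of this sketch.

```python
import numpy as np

def mean_shift(x, points, bandwidth=1.0, iters=50):
    """Fixed-shape Gaussian-kernel mean shift: repeatedly move x to the
    kernel-weighted mean of the sample points."""
    for _ in range(iters):
        w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w @ points) / w.sum()
    return x

rng = np.random.default_rng(1)
cluster = rng.normal([5.0, 5.0], 0.3, size=(300, 2))  # one tight blob
mode = mean_shift(np.array([4.0, 4.0]), cluster)      # converges to the mode
```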
A high order multi-resolution solver for the Poisson equation with application to vortex methods
Hejlesen, Mads Mølholm; Spietz, Henrik Juul; Walther, Jens Honore
A high order method is presented for solving the Poisson equation subject to mixed free-space and periodic boundary conditions by using fast Fourier transforms (FFT). The high order convergence is achieved by deriving mollified Green’s functions from a high order regularization function which...
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
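On a triangle, barycentric coordinates take their classical closed form. This minimal sketch checks the partition-of-unity and linear-precision properties that generalized polygonal barycentric coordinates, and their kernel limits, preserve; the triangle and query point are illustrative.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle tri
    (rows a, b, c): solve p = a + s (b - a) + t (c - a)."""
    a, b, c = tri
    M = np.column_stack([b - a, c - a])
    s, t = np.linalg.solve(M, p - a)
    return np.array([1 - s - t, s, t])

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lam = barycentric(np.array([0.25, 0.25]), tri)
# lam sums to 1 (partition of unity) and lam @ tri reproduces the
# query point (linear precision).
```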
Kernel principal component analysis for change detection
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
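The analysis step can be sketched as Gaussian-kernel PCA: build and center the kernel matrix, eigendecompose, and project. The kernel width gamma and the toy data below are illustrative; the change-detection use above applies the projections to bi-temporal band pairs.

```python
import numpy as np

def kernel_pca_scores(X, gamma=1.0, n_components=2):
    """Gaussian-kernel PCA scores: center the kernel matrix in feature
    space, eigendecompose, and project onto the leading axes."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # double centering in feature space
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                    # projections onto principal axes

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))             # stand-in for bi-temporal bands
scores = kernel_pca_scores(X)
```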
Essadki Mohamed
2016-09-01
Predictive simulation of liquid fuel injection in automotive engines has become a major challenge for science and applications. The key issue in properly predicting the various combustion regimes and pollutant formation is to accurately describe the interaction between the carrier gaseous phase and the polydisperse evaporating spray produced through atomization. For this purpose, we rely on the EMSM (Eulerian Multi-Size Moment) Eulerian polydisperse model. It is based on a high-order moment method in size, with an entropy-maximization technique providing a smooth reconstruction of the distribution, derived from a Williams-Boltzmann mesoscopic model under the monokinetic assumption [O. Emre (2014) PhD Thesis, École Centrale Paris; O. Emre, R.O. Fox, M. Massot, S. Chaisemartin, S. Jay, F. Laurent (2014) Flow, Turbulence and Combustion 93, 689-722; O. Emre, D. Kah, S. Jay, Q.-H. Tran, A. Velghe, S. de Chaisemartin, F. Laurent, M. Massot (2015) Atomization Sprays 25, 189-254; D. Kah, F. Laurent, M. Massot, S. Jay (2012) J. Comput. Phys. 231, 394-422; D. Kah, O. Emre, Q.-H. Tran, S. de Chaisemartin, S. Jay, F. Laurent, M. Massot (2015) Int. J. Multiphase Flows 71, 38-65; A. Vié, F. Laurent, M. Massot (2013) J. Comp. Phys. 237, 277-310]. The present contribution is a major extension of this model [M. Essadki, S. de Chaisemartin, F. Laurent, A. Larat, M. Massot (2016) Submitted to SIAM J. Appl. Math.], with the aim of building a unified approach and a coupling with a separated-phases model describing the dynamics and atomization of the interface near the injector. The novelty lies in the modeling, the numerical schemes and the implementation. A new high-order moment approach is introduced using fractional moments in surface, which can be related to geometrical quantities of the gas-liquid interface. We also provide a novel algorithm for an accurate resolution of the evaporation. Adaptive mesh refinement properly scaling on massively
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
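As a baseline for what kernel error breaks, the classical non-blind step with an exactly known kernel is Wiener deconvolution in the Fourier domain. This is a 1D sketch with an illustrative signal, kernel, and SNR parameter; the paper's partial map would additionally mask unreliable Fourier entries of an estimated kernel.

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=100.0):
    """Non-blind Wiener deconvolution in the Fourier domain:
    x_hat = F^-1( conj(H) / (|H|^2 + 1/snr) * F(y) )."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

x = np.zeros(64)
x[20] = 1.0
x[40] = -0.5                                       # sparse sharp signal
k = np.array([0.25, 0.5, 0.25])                    # known blur kernel
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, 64)))  # circular blur
x_hat = wiener_deconv(y, k)
```

With the exact kernel, the Wiener estimate is much closer to the sharp signal than the blurred input; an inaccurate kernel is what introduces the ringing that partial deconvolution targets.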
Nicholls, David P
2018-04-01
The faithful modelling of the propagation of linear waves in a layered, periodic structure is of paramount importance in many branches of the applied sciences. In this paper, we present a novel numerical algorithm for the simulation of such problems which is free of the artificial singularities present in related approaches. We advocate for a surface integral formulation which is phrased in terms of impedance-impedance operators that are immune to the Dirichlet eigenvalues which plague the Dirichlet-Neumann operators that appear in classical formulations. We demonstrate a high-order spectral algorithm to simulate these latter operators based upon a high-order perturbation of surfaces methodology which is rapid, robust and highly accurate. We demonstrate the validity and utility of our approach with a sequence of numerical simulations.
Process for producing metal oxide kernels and kernels so obtained
Lelievre, Bernard; Feugier, Andre.
1974-01-01
The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing the resulting solution drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelling reaction is a mixture of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the anion-extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles.
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue
2013-02-01
Properties of discontinuities, such as bolted joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods because of the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters, and these parameters are then identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series, and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through a least-squares fitting iteration, in which the undetermined model parameters are updated at each iteration to fit the modeled reflection coefficient curve to the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, the effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) one-order discontinuities can be identified accurately, with maximum errors of 6.8% and 8.7%, respectively; (2) high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict a complex discontinuity much more accurately than the one-order model.
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
Mazaheri, Alireza; Nishikawa, Hiroaki
2015-01-01
In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third order by further requiring exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both the second- and third-order schemes, we construct a fully implicit solver based on the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence, typically 10-15 iterations to reduce the residuals by 10 orders of magnitude. We demonstrate also that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present a Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.
On fully multidimensional and high order non oscillatory finite volume methods, I
Lafon, F.
1992-11-01
A fully multidimensional flux formulation for solving nonlinear conservation laws of hyperbolic type is introduced to perform calculations on unstructured grids made of triangular or quadrangular cells. Fluxes are computed across dual median cells with a multidimensional 2D Riemann solver (R2D solver) whose intermediate states depend on either a three-state (on triangles, R2DT solver) or four-state (on quadrangles, R2DQ solver) solution prescribed on the three or four sides of a gravity cell. Approximate Riemann solutions are computed via a linearization process of Roe's type involving multidimensional effects. Moreover, a monotone scheme using stencil and central Lax-Friedrichs corrections on sonic curves is built in. Finally, high-order accurate ENO-like (Essentially Non-Oscillatory) reconstructions using plane and higher-degree polynomial limitations are defined in the setup of finite element Lagrange spaces P_k and Q_k for k≥0, on triangles and quadrangles, respectively. Numerical experiments involving both linear and nonlinear conservation laws solved on unstructured grids indicate the ability of our techniques to deal with strong multidimensional effects. An application to Euler's equations for the Mach three step problem illustrates the robustness and usefulness of our techniques on triangular and quadrangular grids. (Author). 33 refs., 13 figs
Simulations of viscous and compressible gas-gas flows using high-order finite difference schemes
Capuano, M.; Bogey, C.; Spelt, P. D. M.
2018-05-01
A computational method for the simulation of viscous and compressible gas-gas flows is presented. It consists in solving the Navier-Stokes equations associated with a convection equation governing the motion of the interface between two gases using high-order finite-difference schemes. A discontinuity-capturing methodology based on sensors and a spatial filter enables capturing shock waves and deformable interfaces. One-dimensional test cases are performed as validation and to justify choices in the numerical method. The results compare well with analytical solutions. Shock waves and interfaces are accurately propagated and remain sharp. Subsequently, two-dimensional flows are considered, including viscosity and thermal conductivity. For a Richtmyer-Meshkov instability generated on an air-SF6 interface, the influence of the mesh refinement on the instability shape is studied, and the temporal variations of the instability amplitude are compared with experimental data. Finally, for a plane shock wave propagating in air and impacting a cylindrical bubble filled with helium or R22, numerical Schlieren pictures obtained using different grid refinements are found to compare well with experimental shadow photographs. Mass conservation is verified from the temporal variations of the mass of the bubble. The mean velocities of the pressure waves and bubble interface are similar to those obtained experimentally.
Dense Medium Machine Processing Method for Palm Kernel/ Shell ...
ADOWIE PERE
Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge; Schuster, Gerard T.
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently
Ranking Support Vector Machine with Kernel Approximation
Kai Chen
2017-01-01
Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
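The random Fourier feature approximation mentioned above can be sketched in a few lines. This is a generic illustration of Rahimi-Recht random features for a Gaussian kernel, not the paper's RankSVM implementation; all function names are ours. Each input x is mapped to z(x) = sqrt(2/D) * cos(w·x + b) with random frequencies w and phases b, so that the dot product z(x)·z(y) approximates the exact kernel without ever forming the kernel matrix:

```python
import math
import random

def make_rff(dim, D, sigma, rng):
    """Draw D random frequencies w ~ N(0, 1/sigma^2 I) and phases b ~ U(0, 2*pi)
    defining a random Fourier feature map for the Gaussian kernel."""
    ws = [[rng.gauss(0.0, 1.0 / sigma) for _ in range(dim)] for _ in range(D)]
    bs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]
    return ws, bs

def rff_map(x, ws, bs):
    """Map x to features z(x) so that z(x).z(y) approximates the kernel value."""
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

def gaussian_kernel(x, y, sigma):
    """Exact Gaussian (RBF) kernel exp(-||x - y||^2 / (2*sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))
```

With the features in hand, a linear ranker trained on z(x) behaves like a kernel ranker, which is the source of the speedup the abstract reports: training cost scales with D rather than with the square of the number of samples.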
Sentiment classification with interpolated information diffusion kernels
Raaijmakers, S.
2007-01-01
Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of
Evolution kernel for the Dirac field
Baaquie, B.E.
1982-06-01
The evolution kernel for the free Dirac field is calculated using Wilson lattice fermions. We discuss the difficulties that have previously prevented this calculation from being performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)
Panel data specifications in nonparametric kernel regression
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
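The kernel regressions referred to above are of the Nadaraya-Watson type. As a hedged, minimal sketch (univariate, Gaussian kernel by default; not the authors' panel specification), the estimate at a point is a kernel-weighted average of the observed responses, and swapping in a different symmetric kernel changes the weights only mildly, which is consistent with the abstract's finding that results depend only slightly on the kernel choice:

```python
import math

def nadaraya_watson(x0, xs, ys, h, kernel=None):
    """Nadaraya-Watson kernel regression estimate at x0 with bandwidth h.
    Defaults to a Gaussian kernel; any symmetric density can be substituted."""
    if kernel is None:
        kernel = lambda u: math.exp(-0.5 * u * u)
    weights = [kernel((x0 - x) / h) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# A compactly supported alternative (Epanechnikov), to compare kernel choices.
epanechnikov = lambda u: max(0.0, 0.75 * (1.0 - u * u))
```

The bandwidth h, not the kernel shape, is the decisive tuning parameter: it governs how far observations contribute to the local average.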
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
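Silverman's rule of thumb, which the abstract proposes adapting to kernel equating, has a simple closed form. The sketch below is the generic density-estimation version of the rule (h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)), not the authors' equating-specific estimator; the use of the population standard deviation is our choice:

```python
import statistics

def silverman_bandwidth(data):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n**(-1/5).
    Uses the population standard deviation and simple sample quartiles."""
    n = len(data)
    sd = statistics.pstdev(data)
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)
```

Unlike the penalty-minimization approach criticized in the abstract, this rule is a one-shot plug-in formula: the bandwidth shrinks at the rate n^(-1/5) as the sample grows, with the min(sd, IQR/1.34) term guarding against heavy tails.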
Kernel Korner : The Linux keyboard driver
Brouwer, A.E.
1995-01-01
Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the
A kernel plus method for quantifying wind turbine performance upgrades
Lee, Giwhyun
2014-04-21
Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
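Pairwise rational kernels build on weighted transducers, which is beyond a short sketch, but the underlying pairwise-kernel idea the abstract describes (a similarity between two *pairs* of entities built from a similarity between individual entities) has a standard symmetrized tensor-product form, shown here over an RBF base kernel on vectors. This is an illustrative construction, not the PRK of the paper, and the entity representation is a stand-in for the sequence/automaton inputs the authors actually use:

```python
import math

def rbf(x, y, gamma=1.0):
    """Base kernel on individual entities (vectors here; sequences in the paper)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def pairwise_kernel(pair1, pair2, k=rbf):
    """Symmetrized tensor-product pairwise kernel:
    K((a,b),(c,d)) = k(a,c)*k(b,d) + k(a,d)*k(b,c).
    Symmetric in its two arguments and invariant to order within a pair."""
    (a, b), (c, d) = pair1, pair2
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)
```

A pairwise SVM then classifies candidate pairs (e.g. enzyme-reaction pairs) using this kernel; the paper's contribution is making the base similarity a rational kernel so raw sequence data can be used directly.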
Introducing etch kernels for efficient pattern sampling and etch bias prediction
Weisbuch, François; Lutich, Andrey; Schatz, Jirka
2018-01-01
Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes to define a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in etch model prediction accuracy over a standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.
SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement
Chamberlain, S; French, S; Nazareth, D
2016-01-01
Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10^6 to 3 × 10^7); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10^6 was established as the lower end of the range because it provided the minimal accuracy level. Results: As expected, an increase in particle count linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10^6 have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
Gais, Zakkina; Afriansyah, Ekasatya Aldila
2017-01-01
This research aims to determine the effect of students' prior mathematical ability on their performance in solving higher-order thinking questions, examined through analysis, evaluation, creation, and general questions. This research also aims to assess students' ability to solve higher-order thinking questions and to identify the factors that cause students to err in solving them. The research method used is a mixed method of the embedded concurrent type. From the resu...
Formal Solutions for Polarized Radiative Transfer. II. High-order Methods
Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch [Istituto Ricerche Solari Locarno (IRSOL), 6605 Locarno-Monti (Switzerland)
2017-08-20
When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.
Development of a three-dimensional high-order strand-grids approach
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third-order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature based strand shortening
Direct interferometric measurement of the atomic dipole phase in high-order harmonic generation
Chiara Corsi; Angela Pirri; Emiliano Sali
2006-01-01
For low gas densities and negligible ionization, the so-called atomic dipole phase, connected with the electronic dynamics involved in the generation process, is the main source of phase modulation and incoherence of high-order harmonics. Accurately determining these laser-intensity-induced phase shifts is therefore of great importance, both for the possible spectroscopic applications of harmonics and for the controlled generation of attosecond pulses. In a semiclassical description, only two electronic trajectories contribute to generating plateau harmonics during each optical half-cycle of the pump. Electrons appearing in the continuum by tunnel ionization may follow two different quantum paths, namely a long (l) and a short (s) trajectory, before recombination. According to the SFA approximation, the harmonic of order q acquires a phase proportional to the classical electronic action, simply given by ψ_j(r,t) = -α_q^j I(r,t), with j = l, s, where the α_q^j are nonlinear phase coefficients, roughly proportional to the time that the originating electron spends in the continuum before recombination. The space and time variation of the laser intensity I(r,t) causes only a small phase modulation of the s-trajectory harmonic component, while the l-trajectory component becomes strongly chirped and spatially defocused; this gives rise to two spatially separated regions having different temporal coherence. Here we report the first direct measurement of such an atomic dipole phase in the process of high-order harmonic generation. Differently from previous measurements, it is performed in the most natural way, i.e., by interferometry. Two phase-locked pump pulses generate two phase-locked harmonic pulses at two nearby positions in a gas jet; one of them is used as a fixed phase reference while the generating intensity of the other is varied. The shift of the XUV interference fringes observed in the far field then gives a direct estimate of the
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
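The rounded-continuous-kernel idea above can be made concrete for the Gaussian case: a count is generated by thresholding a latent normal variable, so the probability mass function is a difference of normal CDFs. The specific thresholds below (latent values below 1 map to 0, otherwise the count is the integer floor) are an illustrative assumption, not necessarily the rounding scheme of the paper:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_gaussian_pmf(k, mu, sigma):
    """P(Y = k) when Y rounds a latent N(mu, sigma^2) draw:
    Y = 0 if the latent value is below 1, otherwise Y = floor(latent).
    The thresholds are an assumption for illustration."""
    upper = norm_cdf((k + 1 - mu) / sigma)
    lower = 0.0 if k == 0 else norm_cdf((k - mu) / sigma)
    return upper - lower
```

Unlike a Poisson kernel, mu and sigma can be varied independently, so mixtures of such rounded kernels can capture underdispersed counts (variance below the mean), which is the restriction the abstract highlights.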
Convergency analysis of the high-order mimetic finite difference method
Lipnikov, Konstantin [Los Alamos National Laboratory]; Beirao da Veiga, L. [Univ. degli Studi]; Manzini, G.
2008-01-01
We prove second-order convergence of the conservative variable and its flux in the high-order MFD method. The convergence results are proved for unstructured polyhedral meshes and full tensor diffusion coefficients. For the case of non-constant coefficients, we also develop a new family of high-order MFD methods. Theoretical results are confirmed through numerical experiments.
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
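One common way to build a Mercer kernel from a fitted mixture density, consistent with the idea sketched in the abstract, is to take the inner product of the posterior component-membership vectors of the two inputs. The sketch below hand-sets a tiny 1-D Gaussian mixture rather than learning it via EM/AutoBayes as in the paper; all names are ours:

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posterior(x, weights, mus, sigmas):
    """P(component c | x) under a one-dimensional Gaussian mixture."""
    ps = [w * gauss_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
    z = sum(ps)
    return [p / z for p in ps]

def mixture_density_kernel(x, y, weights, mus, sigmas):
    """K(x, y) = sum_c P(c|x) * P(c|y): an inner product of posterior vectors,
    hence symmetric and positive semi-definite (a valid Mercer kernel)."""
    px = posterior(x, weights, mus, sigmas)
    py = posterior(y, weights, mus, sigmas)
    return sum(a * b for a, b in zip(px, py))
```

Because the posteriors come from a density fitted with a Bayesian prior, the prior knowledge flows directly into the similarity measure: two points are similar when the mixture assigns them to the same components.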
Anisotropic hydrodynamics with a scalar collisional kernel
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ^4 theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.
2018-01-01
High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids where the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and to provide high-order accuracy.
Pelties, Christian
2012-02-18
Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic data into emerging approaches for dynamic source inversion, and to generate realistic physics-based earthquake scenarios for hazard assessment. Modeling of spontaneous earthquake rupture and seismic wave propagation by a high-order discontinuous Galerkin (DG) method combined with an arbitrarily high-order derivatives (ADER) time integration method was introduced in two dimensions by de la Puente et al. (2009). The ADER-DG method enables high accuracy in space and time and discretization by unstructured meshes. Here we extend this method to three-dimensional dynamic rupture problems. The high geometrical flexibility provided by the use of tetrahedral elements and the lack of spurious mesh reflections in the ADER-DG method allow refinement of the mesh close to the fault to model the rupture dynamics adequately while concentrating computational resources only where needed. Moreover, ADER-DG does not generate spurious high-frequency perturbations on the fault and hence does not require artificial Kelvin-Voigt damping. We verify our three-dimensional implementation by comparing results for the SCEC TPV3 test problem with two well-established numerical methods, finite differences and spectral boundary integral. Furthermore, a convergence study is presented to demonstrate the systematic consistency of the method. To illustrate the capabilities of the high-order accurate ADER-DG scheme on unstructured meshes, we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes curved faults, fault branches, and surface topography. Copyright 2012 by the American Geophysical Union.
Parsani, Matteo
2011-09-01
The main goal of this paper is to develop an efficient numerical algorithm to compute the radiated far-field noise produced by an unsteady flow field from bodies in arbitrary motion. The method computes the turbulent flow field in the near field using a high-order spectral difference method coupled with a large-eddy simulation approach. The unsteady equations are solved by advancing in time using a second-order backward difference formula scheme. The nonlinear algebraic system arising from the time discretization is solved with the nonlinear lower-upper symmetric Gauss-Seidel algorithm. In the second step, the method calculates the far-field sound pressure based on the acoustic source information provided by the first-step simulation. The method is based on the Ffowcs Williams-Hawkings approach, which provides noise contributions for monopole, dipole and quadrupole acoustic sources. This paper will focus on the validation and assessment of this hybrid approach using different test cases. The test cases used are: a laminar flow over a two-dimensional (2D) open cavity at Re = 1.5 × 10³ and M = 0.15, and a laminar flow past a 2D square cylinder at Re = 200 and M = 0.5. In order to show the application of the numerical method in industrial cases and to assess its capability for sound field simulation, a three-dimensional turbulent flow in a muffler at Re = 4.665 × 10⁴ and M = 0.05 has been chosen as a third test case. The flow results show good agreement with numerical and experimental reference solutions. Comparison of the computed noise results with those of reference solutions also shows that the numerical approach predicts noise accurately. © 2011 IMACS.
Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm
In this paper, we shall use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but it uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
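The abstract is truncated before the scheme is specified; as background, a standard fourth-order Gaussian kernel — the kind of bias-reducing ingredient such higher-order schemes build on — can be sketched as follows. The paper's specific hybrid kernel and boosting loop are not reproduced here:

```python
import numpy as np

def gauss4(u):
    """Fourth-order Gaussian kernel K4(u) = 0.5 * (3 - u^2) * phi(u).
    It integrates to 1 and its second moment vanishes, so the leading
    O(h^2) bias term of a standard KDE drops to O(h^4)."""
    phi = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return 0.5 * (3.0 - u ** 2) * phi

def kde(x_eval, data, h, kernel=gauss4):
    """Kernel density estimate at points x_eval from samples `data`."""
    u = (x_eval[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h
```

The price of the vanishing second moment is that K4 takes negative values for |u| > √3, so the raw estimate is not guaranteed nonnegative; practical schemes clip or renormalize.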
NLO corrections to the Kernel of the BKP-equations
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3 → 3 kernel, computed in the tree approximation.
Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...
This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...
This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Kernel maximum autocorrelation factor and minimum noise fraction transformations
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
7 CFR 51.1441 - Half-kernel.
2010-01-01
United States Standards for Grades of Shelled Pecans, Definitions, § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
Agriculture Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards...), § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
United States Standards for Grades of Pecans in the Shell, Kernel Color Classification, § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...
The Linux kernel as flexible product-line architecture
M. de Jonge (Merijn)
2002-01-01
The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what
High-order fractional partial differential equation transform for molecular surface construction.
Hu, Langhua; Chen, Duan; Wei, Guo-Wei
2013-01-01
Fractional derivative or fractional calculus plays a significant role in theoretical modeling of scientific and engineering problems. However, only relatively low-order fractional derivatives are used at present. In general, it is not obvious what role a high fractional derivative can play and how to make use of arbitrarily high-order fractional derivatives. This work introduces arbitrarily high-order fractional partial differential equations (PDEs) to describe fractional hyperdiffusions. The fractional PDEs are constructed via the fractional variational principle. A fast fractional Fourier transform (FFFT) is proposed to numerically integrate the high-order fractional PDEs so as to avoid stringent stability constraints in solving high-order evolution PDEs. The proposed high-order fractional PDEs are applied to the surface generation of proteins. We first validate the proposed method with a variety of test examples in two- and three-dimensional settings. The impact of high-order fractional derivatives on surface analysis is examined. We also construct a fractional PDE transform based on arbitrarily high-order fractional PDEs. We demonstrate that the use of arbitrarily high-order derivatives gives rise to time-frequency localization, the control of the spectral distribution, and the regulation of the spatial resolution in the fractional PDE transform. Consequently, the fractional PDE transform enables the mode decomposition of images, signals, and surfaces. The effect of the propagation time on the quality of resulting molecular surfaces is also studied. Computational efficiency of the present surface generation method is compared with that of the MSMS approach in Cartesian representation. We further validate the present method by examining some benchmark indicators of macromolecular surfaces, i.e., surface area, surface-enclosed volume, surface electrostatic potential and solvation free energy. Extensive numerical experiments and comparison with an established surface model
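The paper's FFFT integrator and surface-generation pipeline are not reproduced here; a minimal sketch of the underlying idea — integrating a fractional (hyper)diffusion equation exactly per mode in Fourier space, which sidesteps the stiff time-step restrictions of explicit high-order evolution — under the simplifying assumption of a 1-D periodic domain:

```python
import numpy as np

def fractional_diffuse(u0, L, alpha, t):
    """Advance u_t = -(-Laplacian)^(alpha/2) u on a periodic grid of length L,
    exactly per Fourier mode: u_hat(k, t) = u_hat(k, 0) * exp(-|k|^alpha * t).
    alpha = 2 recovers classical diffusion; alpha > 2 gives hyperdiffusion,
    which damps high wavenumbers far more aggressively."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)
    u_hat = np.fft.rfft(u0) * np.exp(-np.abs(k) ** alpha * t)
    return np.fft.irfft(u_hat, n=n)
```

Because each mode is advanced with its exact decay factor, the step size is unconstrained by stability; that is the essence of avoiding "stringent stability constraints" for high-order evolution PDEs.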
Digital signal processing with kernel methods
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
2018-01-01
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems. Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel-machine statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Parsimonious Wavelet Kernel Extreme Learning Machine
Wang Qin
2015-11-01
In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in "frequency-rich" signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
Generation of High-order Group-velocity-locked Vector Solitons
Jin, X. X.; Wu, Z. C.; Zhang, Q.; Li, L.; Tang, D. Y.; Shen, D. Y.; Fu, S. N.; Liu, D. M.; Zhao, L. M.
2015-01-01
We report numerical simulations of high-order group-velocity-locked vector soliton (GVLVS) generation based on the fundamental GVLVS. The generated high-order GVLVS is characterized by a two-humped pulse along one polarization and a single-humped pulse along the orthogonal polarization. The phase difference between the two humps could be 180 degrees. It is found that by appropriately setting the time separation between the two components of the fundamental GVLVS, the high-order GVLVS wit...
Control Transfer in Operating System Kernels
1994-05-13
microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating
Uranium kernel formation via internal gelation
Hunt, R.D.; Collins, J.L.
2004-01-01
In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)
Quantum tomography, phase-space observables and generalized Markov kernels
Pellonpaeae, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.
Sitompul, Monica Angelina
2015-01-01
The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave the following iodine values: Hydrogenated Palm Kernel Oil (A) = 0.16 g I2/100 g, Hydrogenated Palm Kernel Oil (B) = 0.20 g I2/100 g, Hydrogenated Palm Kernel Oil (C) = 0.24 g I2/100 g; and Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...
Belyi, VN
2011-05-01
The authors investigate the generation and transformation of Bessel beams through linear and nonlinear optical crystals. They outline the generation of high-order vortices due to propagation of Bessel beams along the optical axis of uniaxial...
A Novel Method for Decoding Any High-Order Hidden Markov Model
Fei Ye
2014-01-01
This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar's transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
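Hadar's transformation itself is not reproduced in the abstract; the generic version of the same idea — lift a second-order HMM to a first-order HMM whose states are pairs of original states, decode with ordinary Viterbi, then unroll — can be sketched as follows, with a brute-force check. All model parameters here are illustrative:

```python
import itertools
import numpy as np

def viterbi2(obs, pi, A1, A2, B):
    """Decode a second-order HMM by lifting it to a first-order HMM whose
    states are pairs (s_{t-1}, s_t), then running standard Viterbi."""
    N, T = len(pi), len(obs)
    pairs = [(i, j) for i in range(N) for j in range(N)]
    # log-score of each pair state after the first two observations
    delta = {(i, j): np.log(pi[i] * B[i][obs[0]] * A1[i][j] * B[j][obs[1]])
             for (i, j) in pairs}
    back = []
    for t in range(2, T):
        new, bp = {}, {}
        for (j, k) in pairs:
            # best predecessor pair (i, j) feeding (j, k) via A2[i][j][k]
            score, i_best = max((delta[(i, j)] + np.log(A2[i][j][k]), i)
                                for i in range(N))
            new[(j, k)] = score + np.log(B[k][obs[t]])
            bp[(j, k)] = (i_best, j)
        delta, back = new, back + [bp]
    state = max(delta, key=delta.get)
    path = [state]
    for bp in reversed(back):
        state = bp[state]
        path.append(state)
    path.reverse()
    return [path[0][0]] + [p[1] for p in path]

def brute_force(obs, pi, A1, A2, B):
    """Exhaustive maximization of the joint probability, for verification."""
    N, T = len(pi), len(obs)
    def joint(seq):
        p = pi[seq[0]] * B[seq[0]][obs[0]] * A1[seq[0]][seq[1]] * B[seq[1]][obs[1]]
        for t in range(2, T):
            p *= A2[seq[t - 2]][seq[t - 1]][seq[t]] * B[seq[t]][obs[t]]
        return p
    return list(max(itertools.product(range(N), repeat=T), key=joint))
```

The lift preserves the joint probability exactly, so the first-order Viterbi result maps back to the exact second-order optimum; the same pairing trick generalizes to order k by using k-tuples of states.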
Guo, Zhongyi; Zhu, Lie; Guo, Kai; Shen, Fei; Yin, Zhiping
2017-08-01
In this paper, a high-order dielectric metasurface based on a silicon nanobrick array is proposed and investigated. By controlling the length and width of the nanobricks, the metasurfaces can supply two different incremental transmission phases for X-linear-polarized (XLP) and Y-linear-polarized (YLP) light with extremely high efficiency, over 88%. Based on the designed metasurface, two polarization beam splitters working in high-order diffraction modes have been designed successfully, demonstrating high transmission efficiency. In addition, we have also designed two vortex-beam generators working in high-order diffraction modes to create vortex beams with topological charges of 2 and 3. The employment of dielectric metasurfaces operating in high-order diffraction modes could pave the way for a variety of new ultra-efficient optical devices.
High-Order Quadratures for the Solution of Scattering Problems in Two Dimensions
Duan, Ran; Rokhlin, Vladimir
2008-01-01
.... The scheme is based on the combination of high-order quadrature formulae, fast application of integral operators in Lippmann-Schwinger equations, and the stabilized biconjugate gradient method (BI-CGSTAB...
Model Following and High Order Augmentation for Rotorcraft Control, Applied via Partial Authority
Spires, James Michael
This dissertation consists of two main studies, a few small studies, and design documentation, all aimed at improving rotorcraft control by employing multi-input multi-output (MIMO) command-model-following control as a baseline, together with a selectable (and de-selectable) MIMO high order compensator that augments the baseline. Two methods of MIMO command-model-following control design are compared for rotorcraft flight control. The first, Explicit Model Following (EMF), employs SISO inverse plants with a dynamic decoupling matrix, which is a purely feed-forward approach to inverting the plant. The second is Dynamic Inversion (DI), which involves both feed-forward and feedback path elements to invert the plant. The EMF design is purely linear, while the DI design has some nonlinear elements in vertical rate control. For each of these methods, an architecture is presented that provides angular rate model-following with selectable vertical rate model-following. Implementation challenges of both EMF and DI are covered, and methods of dealing with them are presented. These two MIMO model-following approaches are evaluated regarding (1) fidelity to the command model, and (2) turbulence rejection. Both are found to provide good tracking of commands and reduction of cross coupling. Next, an architecture and design methodology for high order compensator (HOC) augmentation of a baseline controller for rotorcraft is presented. With this architecture, the HOC compensator is selectable and can easily be authority-limited, which might ease certification. Also, the plant for this augmentative MIMO compensator design is a stabilized helicopter system, so good flight test data could be safely gathered for more accurate plant identification. The design methodology is carried out twice on an example helicopter model, once with turbulence rejection as the objective, and once with the additional objective of closely following pilot commands. The turbulence rejection HOC is feedback
Technical Training on High-Order Spectral Analysis and Thermal Anemometry Applications
Maslov, A. A.; Shiplyuk, A. N.; Sidirenko, A. A.; Bountin, D. A.
2003-01-01
The topics of thermal anemometry and high-order spectral analyses were the subject of the technical training. Specifically, the objective of the technical training was to study: (i) the recently introduced constant voltage anemometer (CVA) for high-speed boundary layer; and (ii) newly developed high-order spectral analysis techniques (HOSA). Both CVA and HOSA are relevant tools for studies of boundary layer transition and stability.
Uchida, Isao; Yamada, Yasuhiko; Yamashita, Takashi; Okigaki, Shigeyasu; Oyamada, Hiyoshimaru; Ito, Akira.
1995-01-01
In radiotherapy with radiopharmaceuticals, more accurate estimates of the three-dimensional (3-D) distribution of absorbed dose are important in specifying the activity to be administered to patients to deliver a prescribed absorbed dose to target volumes without exceeding the toxicity limit of normal tissues in the body. A calculation algorithm for this purpose has already been developed by the authors. An accurate 3-D distribution of absorbed dose based on the algorithm is given by convolution of the 3-D dose matrix for a unit cubic voxel containing unit cumulated activity, which is obtained by transforming a dose point kernel into a 3-D cubic dose matrix, with the 3-D cumulated activity distribution given at the same voxel size. However, the beta-dose point kernels affecting accurate estimates of the 3-D absorbed dose distribution have differed among investigators. The purpose of this study is to elucidate how different beta-dose point kernels in water influence the estimates of the absorbed dose distribution obtained with the authors' dose point kernel convolution method. Computer simulations were performed using the MIRD thyroid and lung phantoms under the assumption of a uniform activity distribution of ³²P. Using beta-dose point kernels derived from Monte Carlo simulations (EGS-4 or ACCEPT computer code), the differences among the point kernels gave little difference in the mean and maximum absorbed dose estimates for the MIRD phantoms used. In the estimates of mean and maximum absorbed doses calculated using different cubic voxel sizes (4×4×4 mm and 8×8×8 mm) for the MIRD thyroid phantom, the maximum absorbed doses for the 4×4×4 mm voxel were estimated approximately 7% greater than for the 8×8×8 mm voxel. This was found for every beta-dose point kernel used in this study. On the other hand, the percentage difference of the mean absorbed doses between the two voxel sizes for each beta-dose point kernel was less than approximately 0.6%. (author)
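The convolution step the authors describe — a 3-D dose matrix for a unit voxel convolved with the cumulated-activity distribution — can be sketched as follows. The kernel values, voxel sizes, and the ³²P beta-dose point kernels themselves are placeholders here, not the paper's Monte Carlo data:

```python
import numpy as np

def dose_from_activity(activity, kernel):
    """Linear (zero-padded) 3-D convolution of a cumulated-activity map with a
    voxel dose point kernel, evaluated in Fourier space and cropped back to
    the activity grid ('same' mode). A real calculation would use a voxel
    dose matrix tabulated from a beta-dose point kernel in water."""
    shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
    F = np.fft.rfftn(activity, shape) * np.fft.rfftn(kernel, shape)
    full = np.fft.irfftn(F, shape)
    start = [k // 2 for k in kernel.shape]
    crop = tuple(slice(s, s + a) for s, a in zip(start, activity.shape))
    return full[crop]
```

FFT-based convolution keeps the cost at O(n log n) per axis, which matters once the activity grid and kernel support are clinically sized.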
Influence of Misalignment on High-Order Aberration Correction for Normal Human Eyes
Zhao, Hao-Xin; Xu, Bing; Xue, Li-Xia; Dai, Yun; Liu, Qian; Rao, Xue-Jun
2008-04-01
Although a compensation device can correct aberrations of human eyes, the effect will be degraded by its misalignment, especially for high-order aberration correction. We calculate the positioning tolerance of the correction device for high-order aberrations, and within what range the correction effect remains better than low-order aberration (defocus and astigmatism) correction. With a fixed misalignment within the positioning tolerance, we calculate the residual wavefront rms aberration of the first 6 to first 35 terms along with the 3rd-5th terms of aberrations corrected, and the combined first 13 terms of aberrations are also studied under the same amount of misalignment. However, the correction effect for high-order aberrations does not improve as more high-order terms are included under some misalignments; moreover, some simple combined-term corrections can achieve results similar to complex combinations. These results suggest that it is unnecessary to correct too many terms of high-order aberration, which is difficult to accomplish in practice, and give confidence for correcting high-order aberrations outside the laboratory.
High-brightness high-order harmonic generation at 13 nm with a long gas jet
Kim, Hyung Taek; Kim, I Jong; Lee, Dong Gun; Park, Jong Ju; Hong, Kyung Han; Nam, Chang Hee
2002-01-01
The generation of high-order harmonics is a well-known method of producing coherent extreme-ultraviolet radiation with pulse durations in the femtosecond regime. High-order harmonics have attracted much attention due to their unique features such as coherence, ultrashort pulse duration, and table-top scale systems. Due to these unique properties, high-order harmonics have many applications in atomic and molecular spectroscopy, plasma diagnostics and solid-state physics. Bright generation of high-order harmonics is important for actual applications. Especially, the generation of strong well-collimated harmonics at 13 nm can be useful for the metrology of EUV lithography optics because of the high reflectivity of Mo-Si mirrors at this wavelength. The generation of bright high-order harmonics is rather difficult in the wavelength region below 15 nm. Though argon and xenon gases have large conversion efficiencies, harmonic generation from these gases is restricted to wavelengths over 20 nm due to their low ionization potentials. Hence, we chose neon for harmonic generation around 13 nm; it has a larger conversion efficiency than helium and a higher ionization potential than argon. In this experiment, we have observed enhanced harmonic generation efficiency and low beam divergence of high-order harmonics from an elongated neon gas jet, due to the enhancement of laser propagation in the elongated jet. A uniform plasma column was produced when the gas jet was exposed to converging laser pulses.
A Fourier-series-based kernel-independent fast multipole method
Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai
2011-01-01
We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is diagonal and does not dominate the arithmetic cost. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the performance of FKI-FMM in accuracy and efficiency.
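FKI-FMM's multi-level translation operators are not reproduced here, but the core identity it exploits — a truncated Fourier series of a translation-invariant kernel separates sources from targets, turning an O(NM) sum into two O(N + M) passes — can be sketched at a single level. The surrogate kernel exp(cos(πr)) is chosen only because it is smooth and 2-periodic, so a few modes suffice; it is not a kernel from the paper:

```python
import numpy as np

def fourier_coeffs(kernel, n_modes, n_quad=2048):
    """c_k = (1/2) * integral_{-1}^{1} K(r) exp(-i pi k r) dr, computed as a
    mean over a uniform periodic grid (spectrally accurate for smooth K)."""
    r = np.linspace(-1.0, 1.0, n_quad, endpoint=False)
    ks = np.arange(-n_modes, n_modes + 1)
    c = np.array([np.mean(kernel(r) * np.exp(-1j * np.pi * k * r)) for k in ks])
    return ks, c

def fast_sum(x, y, q, kernel, n_modes=20):
    """u_i = sum_j K(x_i - y_j) q_j  ~=  sum_k c_k e^{i pi k x_i} S_k,
    with S_k = sum_j e^{-i pi k y_j} q_j: compress sources once,
    then evaluate at targets, never forming the N x M kernel matrix."""
    ks, c = fourier_coeffs(kernel, n_modes)
    S = np.exp(-1j * np.pi * np.outer(ks, y)) @ q
    return (np.exp(1j * np.pi * np.outer(x, ks)) @ (c * S)).real
```

The full FMM adds a spatial hierarchy and per-region expansions on top of this factorization; here the diagonal role of the Fourier representation is already visible in that each mode k couples to itself only, through the single coefficient c_k.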
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
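The paper's specific kernel functions and optimal widths are not given in the abstract; a generic Gaussian-kernel (Nadaraya-Watson) interpolation of irregular temporal samples onto a regular depth grid, the basic operation being compared against standard interpolation, looks like this (kernel choice and width are illustrative):

```python
import numpy as np

def kernel_profile(depths, activity, grid, width):
    """Gaussian-kernel (Nadaraya-Watson) interpolation of feature activity
    sampled at irregular electrode depths onto a regular depth grid.
    Each grid value is a convex combination of the samples, weighted by
    a Gaussian of the depth difference."""
    w = np.exp(-0.5 * ((grid[:, None] - depths[None, :]) / width) ** 2)
    return (w @ activity) / w.sum(axis=1)
```

Wider kernels smooth more but blur sharp transitions between structures; narrower kernels preserve resolution at the cost of noise, mirroring the smoothing-versus-resolution trade-off the abstract reports for kernel width selection.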
Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM
Chenchao Zhao
2018-01-01
Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of the support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth-order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher-order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
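The zeroth-order parametrix kernel discussed above is, up to normalization, a Gaussian in geodesic (great-circle) distance, and can be used directly as a similarity measure for a kernel SVM. A minimal sketch of that heuristic kernel on L2-normalized vectors; the sample vectors and diffusion time `t` are invented for illustration:

```python
import math

def normalize(v):
    n = math.sqrt(sum(a * a for a in v))
    return [a / n for a in v]

def geodesic_angle(x, y):
    """Great-circle distance between unit vectors on the hypersphere."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(x, y))))
    return math.acos(dot)

def heuristic_heat_kernel(x, y, t=0.1):
    """Zeroth-order parametrix approximation: a Gaussian in geodesic distance."""
    theta = geodesic_angle(x, y)
    return math.exp(-theta ** 2 / (4.0 * t))

# Toy "documents" as L2-normalized term-count vectors
x = normalize([1.0, 2.0, 0.5])
y = normalize([1.1, 1.9, 0.6])   # similar direction to x
z = normalize([0.1, 0.2, 3.0])   # nearly orthogonal direction
```

The exact kernel in the paper replaces this Gaussian with a convergent eigenmode series; the heuristic form is the baseline the paper compares against.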
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservation properties of the underlying numerics. The effect on fourth- and sixth-order numerical fluxes is explored. Provided the refinement criterion is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable
Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah
2016-01-01
One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
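The kernel "trick" the abstract emphasizes can be illustrated with bare-bones RKHS (kernel ridge) regression: fit coefficients by solving (K + lambda*I) alpha = y, then predict with kernel sums, so an explicit epistatic feature map is never formed. This is a hypothetical sketch with invented toy genotypes, not the KRMM package API.

```python
import math

def gaussian_kernel(x, y, bandwidth=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * bandwidth ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def krr_fit(X, y, lam=0.1, bandwidth=1.0):
    """Solve (K + lam*I) alpha = y; predictions are kernel-weighted sums."""
    n = len(X)
    K = [[gaussian_kernel(X[i], X[j], bandwidth) for j in range(n)] for i in range(n)]
    for i in range(n):
        K[i][i] += lam
    return solve(K, y)

def krr_predict(X_train, alpha, x_new, bandwidth=1.0):
    return sum(a * gaussian_kernel(xi, x_new, bandwidth) for a, xi in zip(alpha, X_train))

# Toy "marker genotypes" (coded 0/1/2) and phenotypes, purely illustrative
X = [[0, 1, 2], [2, 1, 0], [1, 1, 1], [0, 2, 2], [2, 0, 0]]
y = [1.0, -1.0, 0.1, 1.4, -1.2]
alpha = krr_fit(X, y, lam=0.1, bandwidth=2.0)
pred = krr_predict(X, alpha, [0, 1, 2], bandwidth=2.0)
```

With a linear kernel this reduces to ridge regression (GBLUP); swapping in Laplacian or ANOVA kernels changes only `gaussian_kernel`, which is the modularity the KRMM package exposes.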
Azcona, J; Burguete, J
2014-01-01
Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them to dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. The kernels were used to calculate modulated dose distributions in six modulated fields and compared, through the gamma index, to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm, where the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is at least 99.6% of the total number of points in all cases. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
A high-order doubly asymptotic open boundary for scalar waves in semi-infinite layered systems
Prempramote, S; Song, Ch; Birk, C
2010-01-01
Wave propagation in semi-infinite layered systems is of interest in earthquake engineering, acoustics, electromagnetism, etc. The numerical modelling of this problem is particularly challenging as evanescent waves exist below the cut-off frequency. Most of the high-order transmitting boundaries are unable to model the evanescent waves. As a result, spurious reflection occurs at late time. In this paper, a high-order doubly asymptotic open boundary is developed for scalar waves propagating in semi-infinite layered systems. It is derived from the equation of dynamic stiffness matrix obtained in the scaled boundary finite-element method in the frequency domain. A continued-fraction solution of the dynamic stiffness matrix is determined recursively by satisfying the scaled boundary finite-element equation at both high- and low-frequency limits. In the time domain, the continued-fraction solution permits the force-displacement relationship to be formulated as a system of first-order ordinary differential equations. Standard time-step schemes in structural dynamics can be directly applied to evaluate the response history. Examples of a semi-infinite homogeneous layer and a semi-infinite two-layered system are investigated herein. The displacement results obtained from the open boundary converge rapidly as the order of continued fractions increases. Accurate results are obtained at early time and late time.
High-order dynamic lattice method for seismic simulation in anisotropic media
Hu, Xiaolin; Jia, Xiaofeng
2018-03-01
The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; therefore, it represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that by adding more neighbouring particles into the lattice unit, the DLM yields different spatial accuracies. According to the dispersion analysis, the high-order DLM presented here can meet the spatial accuracy requirements of seismic wave simulations. For any given spatial accuracy, we can design a corresponding high-order lattice unit to satisfy the accuracy requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.
Cheng, Rendy P.; Tischler, Mark B.; Celi, Roberto
2006-01-01
This research describes a new methodology for the extraction of a high-order, linear time invariant model, which allows the periodicity of the helicopter response to be accurately captured. This model provides the needed level of dynamic fidelity to permit an analysis and optimization of the AFCS and HHC algorithms. The key results of this study indicate that the closed-loop HHC system has little influence on the AFCS or on the vehicle handling qualities, which indicates that the AFCS does not need modification to work with the HHC system. However, the results show that the vibration response to maneuvers must be considered during the HHC design process, and this leads to much higher required HHC loop crossover frequencies. This research also demonstrates that the transient vibration responses during maneuvers can be reduced by optimizing the closed-loop higher harmonic control algorithm using conventional control system analyses.
Li, Mao; Qiu, Zihua; Liang, Chunlei; Sprague, Michael; Xu, Min
2017-01-13
In the present study, a new spectral difference (SD) method is developed for viscous flows on meshes with a mixture of triangular and quadrilateral elements. The standard SD method for triangular elements, which employs Lagrangian interpolating functions for fluxes, is not stable when the designed accuracy of spatial discretization is third-order or higher. Unlike the standard SD method, the method examined here uses vector interpolating functions in the Raviart-Thomas (RT) spaces to construct continuous flux functions on reference elements. Studies have been performed for the 2D wave equation and Euler equations. Our present results demonstrate that the SDRT method is stable and high-order accurate for a number of test problems using triangular-, quadrilateral-, and mixed-element meshes.
Aflatoxin contamination of developing corn kernels.
Amer, M A
2005-01-01
Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein contents were reduced by both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. A. flavus spores germinated and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
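A minimal sketch of kernel-weighted ensemble analog forecasting with Takens delay coordinates, as described above: embed the history, find the k embedded states most similar to the query, and average their successors with Gaussian similarity weights. The toy sine series, bandwidth, and parameter choices are illustrative assumptions, not the paper's climate data or kernel family.

```python
import math

def delay_embed(series, m):
    """Takens delay-coordinate map: state at time t is (x_t, x_{t-1}, ...)."""
    return [[series[i - j] for j in range(m)] for i in range(m - 1, len(series))]

def kernel_analog_forecast(series, query_vec, m=2, lead=1, epsilon=0.5, k=3):
    """Kernel-weighted ensemble analog forecast: average the successors of
    the k embedded historical states most similar to the query state."""
    embedded = delay_embed(series, m)
    idx = range(len(embedded) - lead)   # last `lead` states have no successor

    def d2(i):
        return sum((a - b) ** 2 for a, b in zip(embedded[i], query_vec))

    nearest = sorted(idx, key=d2)[:k]
    weights = [math.exp(-d2(i) / epsilon) for i in nearest]
    total = sum(weights)
    # successor of embedded state i is the series value at time (m - 1 + i + lead)
    return sum(w * series[m - 1 + i + lead] for w, i in zip(weights, nearest)) / total

history = [math.sin(0.3 * t) for t in range(200)]
query = [math.sin(0.3 * 57.5), math.sin(0.3 * 56.5)]  # current and lagged observation
forecast = kernel_analog_forecast(history, query)
```

Without the delay coordinates, states with the same value but opposite phase would be confused, which is exactly the partial-observation problem the abstract says the delay-coordinate maps address.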
A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.
Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei
2016-05-09
Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of the traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. Electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of the kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
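The feature-weighting idea can be sketched by rescaling each squared-difference term of an RBF kernel with a per-feature weight, so discriminative features dominate the similarity. The four-component "ECG feature" vectors and the weights below are invented for illustration and are not the paper's measured features.

```python
import math

def weighted_rbf(x, y, weights, gamma=0.5):
    """RBF kernel on feature differences rescaled by per-feature weights."""
    d2 = sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y))
    return math.exp(-gamma * d2)

# Hypothetical 4-feature vectors: first two features informative, last two noisy
sober = [0.8, 0.62, 1.0, 0.31]
drunk = [0.6, 0.80, 1.3, 0.90]
probe = [0.62, 0.78, 1.0, 0.31]   # matches "drunk" on the informative features

uniform = [1.0, 1.0, 1.0, 1.0]
emphasis = [3.0, 3.0, 0.1, 0.1]   # up-weight the informative features
```

Under uniform weights the probe looks more like the "sober" vector because the uninformative features dominate the distance; emphasizing the first two features reverses the decision, which is the effect the reported 11% accuracy gain is attributed to.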
Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA
Ming-gang Du
2009-01-01
Full Text Available Motivation. Independent Component Analysis (ICA) maximizes the statistical independence of the representational components of a training gene expression profile (GEP) ensemble, but it cannot distinguish relations between different factors, or different modes, and it is not applicable to high-order GEP data mining. In order to generalize ICA, we introduce Multilinear-ICA and apply it to tumor classification using high-order GEP. Firstly, we introduce the basic concepts and operations of tensors, and describe the Support Vector Machine (SVM) classifier and Multilinear-ICA. Secondly, the genes with higher scores in the original high-order GEP are selected using t-statistics and tabulated as tensors. Thirdly, Multilinear-ICA is performed on the tensors. Finally, the SVM is used to classify the tumor subtypes. Results. To show the validity of the proposed method, we apply it to tumor classification using high-order GEP. Though we only use three datasets, the experimental results show that the method is effective and feasible. Through this survey, we hope to gain some insight into the problem of high-order GEP tumor classification, to aid the development of more effective tumor classification algorithms.
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
The Classification of Diabetes Mellitus Using Kernel k-means
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes Mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which is able to handle non-linearly separable data; this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs much better than SOM.
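A minimal kernel k-means sketch (not the study's implementation): distances to centroids in feature space are computed purely from Gram-matrix entries, so non-linearly separable data such as concentric rings can still be split. The data, RBF bandwidth, and deterministic initialization are illustrative assumptions; practical use would randomize the initialization with restarts.

```python
import math

def rbf(x, y, gamma=2.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_kmeans(X, init_labels, k=2, iters=20, gamma=2.0):
    """Kernel k-means: the squared distance to a cluster centroid expands as
    ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_j K_ij + mean_{j,l} K_jl (j, l in c)."""
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    labels = list(init_labels)
    for _ in range(iters):
        members = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        self_term = [sum(K[j][l] for j in m for l in m) / len(m) ** 2 if m else 0.0
                     for m in members]
        new = []
        for i in range(n):
            dists = []
            for c, m in enumerate(members):
                if not m:
                    dists.append(float("inf"))
                    continue
                cross = sum(K[i][j] for j in m) / len(m)
                dists.append(K[i][i] - 2.0 * cross + self_term[c])
            new.append(dists.index(min(dists)))
        if new == labels:
            break
        labels = new
    return labels

# Two concentric rings: not linearly separable for plain k-means
inner = [(0.5 * math.cos(0.63 * i), 0.5 * math.sin(0.63 * i)) for i in range(10)]
outer = [(3.0 * math.cos(0.63 * i), 3.0 * math.sin(0.63 * i)) for i in range(10)]
points = inner + outer

init = [0] * 10 + [1] * 10   # deterministic init for the demo...
init[0], init[19] = 1, 0     # ...perturbed, so the update must correct it
labels = kernel_kmeans(points, init)
```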
Object classification and detection with context kernel descriptors
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
Alumina Concentration Detection Based on the Kernel Extreme Learning Machine.
Zhang, Sen; Zhang, Tao; Yin, Yixin; Xiao, Wendong
2017-09-01
The concentration of alumina in the electrolyte is of great significance during the production of aluminum. An improper alumina concentration may lead to unbalanced material distribution and low production efficiency, and affect the stability of the aluminum reduction cell and the current efficiency. The existing methods cannot meet the needs for online measurement because industrial aluminum electrolysis has the characteristics of high temperature, strong magnetic fields, coupled parameters, and high nonlinearity. Currently, there are no sensors or equipment that can detect the alumina concentration online. Most companies acquire the alumina concentration from electrolyte samples which are analyzed with an X-ray fluorescence spectrometer. To solve this problem, the paper proposes a soft-sensing model based on a kernel extreme learning machine algorithm that incorporates a kernel function into the extreme learning machine. K-fold cross validation is used to estimate the generalization error. The proposed soft-sensing algorithm can detect the alumina concentration from electrical signals such as the voltages and currents of the anode rods. The predicted results show that the proposed approach gives more accurate estimations of the alumina concentration with faster learning speed than other methods such as the basic ELM, BP, and SVM.
High-order conservative discretizations for some cases of the rigid body motion
Kozlov, Roman
2008-01-01
Modified vector fields can be used to construct high-order structure-preserving numerical integrators for ordinary differential equations. In the present Letter we consider high-order integrators based on the implicit midpoint rule, which conserve quadratic first integrals. It is shown that these integrators are particularly suitable for the rigid body motion with an additional quadratic first integral. In this case high-order integrators preserve all four first integrals of motion. The approach is illustrated on the Lagrange top (a rotationally symmetric rigid body with a fixed point on the symmetry axis). The equations of motion are considered in the space fixed frame because in this frame Lagrange top admits a neat description. The Lagrange top motion includes the spherical pendulum and the planar pendulum, which swings in a vertical plane, as particular cases
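The conservation of quadratic first integrals mentioned above can be checked on the simplest case: for a linear system y' = Ay with skew-symmetric A (a harmonic oscillator), the implicit midpoint rule is solvable in closed form and preserves q^2 + p^2 exactly. A minimal sketch of the base integrator only, not the Letter's high-order modified-vector-field construction:

```python
def midpoint_step(y, h):
    """One implicit midpoint step for y' = A y with A = [[0, 1], [-1, 0]].
    For a linear system the implicit relation (I - h/2 A) y_new = (I + h/2 A) y_old
    can be solved exactly by inverting the 2x2 matrix [[1, -h/2], [h/2, 1]]."""
    q, p = y
    rq, rp = q + 0.5 * h * p, p - 0.5 * h * q   # (I + h/2 A) y
    det = 1.0 + (0.5 * h) ** 2
    return ((rq + 0.5 * h * rp) / det, (rp - 0.5 * h * rq) / det)

y = (1.0, 0.0)
for _ in range(1000):
    y = midpoint_step(y, 0.1)
energy = y[0] ** 2 + y[1] ** 2   # quadratic first integral
```

The resulting update map is orthogonal (a Cayley transform of A), so the quadratic invariant is conserved to round-off even over long integrations; the Letter's modified vector fields raise the order while keeping this property.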
High-order harmonic propagation in gases within the discrete dipole approximation
Hernandez-Garcia, C.; Perez-Hernandez, J. A.; Ramos, J.; Jarque, E. Conejero; Plaja, L.; Roso, L.
2010-01-01
We present an efficient approach for computing high-order harmonic propagation based on the discrete dipole approximation. In contrast with other approaches, our strategy is based on computing the total field as the superposition of the driving field with the field radiated by the elemental emitters of the sample. In this way we avoid the numerical integration of the wave equation, as Maxwell's equations have an analytical solution for an elementary (pointlike) emitter. The present strategy is valid for low-pressure gases interacting with strong fields near the saturation threshold (i.e., partially ionized), which is a common situation in the experiments of high-order harmonic generation. We use this tool to study the dependence of phase matching of high-order harmonics with the relative position between the beam focus and the gas jet.
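The superposition strategy can be sketched for scalar pointlike emitters, whose analytic field e^{ikr}/r is summed directly at the observation point instead of integrating a wave equation on a grid; the geometry and amplitudes below are illustrative assumptions, not the paper's harmonic-generation setup.

```python
import cmath
import math

def total_field(emitters, amplitudes, obs, k=2 * math.pi):
    """Total field as the superposition of analytic scalar spherical waves
    e^{ikr}/r radiated by pointlike emitters (no grid, no PDE solve)."""
    field = 0j
    for pos, a in zip(emitters, amplitudes):
        r = math.dist(obs, pos)
        field += a * cmath.exp(1j * k * r) / r
    return field

# Two in-phase emitters half a wavelength apart (wavelength = 1, so k = 2*pi)
emitters = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
amplitudes = [1.0, 1.0]

on_axis = total_field(emitters, amplitudes, (100.0, 0.0, 0.0))    # paths differ by lambda/2
broadside = total_field(emitters, amplitudes, (0.0, 100.0, 0.0))  # paths nearly equal
```

The on-axis fields nearly cancel while the broadside fields add, recovering textbook two-source interference from the analytic elementary solutions, which is the essence of the discrete-dipole superposition idea.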
Zhang, Guobo; Chen, Min; Liu, Feng; Yuan, Xiaohui; Weng, Suming; Zheng, Jun; Ma, Yanyun; Shao, Fuqiu; Sheng, Zhengming; Zhang, Jie
2017-10-02
Relativistically intense laser-solid target interaction has proved to be a promising way to generate high-order harmonics, which can be used to diagnose ultrafast phenomena. However, their emission direction and spectra still lack tunability. Based upon two-dimensional particle-in-cell simulations, we show that directional enhancement of selected high-order harmonics can be realized using blazed grating targets. Such targets select harmonics with frequencies that are integer multiples of the grating frequency. Meanwhile, the radiation intensity and emission area of the harmonics are increased. The emission direction is controlled by tailoring the local blazed structure. Theoretical and electron-dynamics analyses of harmonic generation, selection and directional enhancement from the interaction between multi-cycle lasers and grating targets are carried out. These studies will benefit the generation and application of laser-plasma-based high-order harmonics.
Polarization control of high order harmonics in the EUV photon energy range.
Vodungbo, Boris; Barszczak Sardinha, Anna; Gautier, Julien; Lambert, Guillaume; Valentin, Constance; Lozano, Magali; Iaquaniello, Grégory; Delmotte, Franck; Sebban, Stéphane; Lüning, Jan; Zeitoun, Philippe
2011-02-28
We report the generation of circularly polarized high order harmonics in the extreme ultraviolet range (18-27 nm) from a linearly polarized infrared laser (40 fs, 0.25 TW) focused into a neon-filled gas cell. To circularly polarize the initially linearly polarized harmonics we have implemented a four-reflector phase-shifter. Fully circularly polarized radiation has been obtained with an efficiency of a few percent, significantly more efficient than the currently demonstrated direct generation of elliptically polarized harmonics. This demonstration opens up new experimental capabilities based on high order harmonics, for example, in biology and materials science. The inherent femtosecond time resolution of high order harmonic generating table-top laser sources renders them an ideal tool for the investigation of ultrafast magnetization dynamics now that the magnetic circular dichroism at the absorption M-edges of transition metals can be exploited.
Giant Faraday Rotation of High-Order Plasmonic Modes in Graphene-Covered Nanowires.
Kuzmin, Dmitry A; Bychkov, Igor V; Shavrov, Vladimir G; Temnov, Vasily V
2016-07-13
Plasmonic Faraday rotation in nanowires manifests itself in the rotation of the spatial intensity distribution of high-order surface plasmon polariton (SPP) modes around the nanowire axis. Here we predict theoretically the giant Faraday rotation for SPPs propagating on graphene-coated magneto-optically active nanowires. Upon the reversal of the external magnetic field pointing along the nanowire axis some high-order plasmonic modes may be rotated by up to ∼100° on the length scale of about 500 nm at mid-infrared frequencies. Tuning the carrier concentration in graphene by chemical doping or gate voltage allows for controlling SPP-properties and notably the rotation angle of high-order azimuthal modes. Our results open the door to novel plasmonic applications ranging from nanowire-based Faraday isolators to the magnetic control in quantum-optical applications.
Aprà, E; Kowalski, K
2016-03-08
In this paper we discuss the implementation of multireference coupled-cluster formalism with singles, doubles, and noniterative triples (MRCCSD(T)), which is capable of taking advantage of the processing power of the Intel Xeon Phi coprocessor. We discuss the integration of two levels of parallelism underlying the MRCCSD(T) implementation with computational kernels designed to offload the computationally intensive parts of the MRCCSD(T) formalism to Intel Xeon Phi coprocessors. Special attention is given to the enhancement of the parallel performance by task reordering that has improved load balancing in the noniterative part of the MRCCSD(T) calculations. We also discuss aspects regarding efficient optimization and vectorization strategies.
High order scheme for the non-local transport in ICF plasmas
Feugeas, J.L.; Nicolai, Ph.; Schurtz, G. [Bordeaux-1 Univ., Centre Lasers Intenses et Applications (UMR 5107), 33 - Talence (France); Charrier, P.; Ahusborde, E. [Bordeaux-1 Univ., MAB, 33 - Talence (France)
2006-06-15
A high-order practical scheme for a model of non-local transport is proposed here for use in multidimensional radiation hydrodynamic codes. A high-order scheme is necessary to solve non-local problems on strongly deformed meshes, such as those in hot-spot or ablation-front zones. It is shown that the errors made by a classical 5-point scheme on a disturbed grid can be of the same order of magnitude as the non-local effects. The use of a 9-point scheme in a simulation of inertial confinement fusion appears to be essential.
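The claim that discretization error on a disturbed grid can rival the modeled physics is easy to reproduce in one dimension: the three-point second-derivative stencil drops from second- to first-order accuracy as soon as the spacings differ. A hedged illustration of this general point, not the paper's 2D transport scheme:

```python
import math

def second_derivative(f, x, hl, hr):
    """Three-point approximation of f''(x) on a possibly nonuniform stencil;
    exact for quadratics, but only first-order accurate when hl != hr."""
    return 2.0 * (hr * f(x - hl) - (hl + hr) * f(x) + hl * f(x + hr)) / (
        hl * hr * (hl + hr))

f, x = math.sin, 1.0
exact = -math.sin(x)

err_uniform = abs(second_derivative(f, x, 0.01, 0.01) - exact)      # O(h^2)
err_disturbed = abs(second_derivative(f, x, 0.01, 0.015) - exact)   # O(h)
```

A 50% spacing perturbation inflates the truncation error by roughly two orders of magnitude here, which is why a higher-order (e.g. 9-point) stencil matters on deformed meshes.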
On the exact solutions of high order wave equations of KdV type (I)
Bulut, Hasan; Pandir, Yusuf; Baskonus, Haci Mehmet
2014-12-01
In this paper, by means of a proper transformation and symbolic computation, we study high-order wave equations of KdV type (I). We obtain a classification of exact solutions that contains soliton, rational, trigonometric and elliptic function solutions by using the extended trial equation method. The motivation of this paper is thus to utilize the extended trial equation method to explore new solutions of high-order wave equations of KdV type (I). The method is confirmed by applying it to this class of selected nonlinear equations.
High-Order Entropy Stable Finite Difference Schemes for Nonlinear Conservation Laws: Finite Domains
Fisher, Travis C.; Carpenter, Mark H.
2013-01-01
Developing stable and robust high-order finite difference schemes requires mathematical formalism and appropriate methods of analysis. In this work, nonlinear entropy stability is used to derive provably stable high-order finite difference methods with formal boundary closures for conservation laws. Particular emphasis is placed on the entropy stability of the compressible Navier-Stokes equations. A newly derived entropy stable weighted essentially non-oscillatory finite difference method is used to simulate problems with shocks and a conservative, entropy stable, narrow-stencil finite difference approach is used to approximate viscous terms.
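The discrete conservation underlying such schemes can be illustrated with a deliberately simple first-order finite-volume method for Burgers' equation using a local Lax-Friedrichs flux (a monotone, entropy-stable building block, not the paper's high-order WENO/narrow-stencil schemes): because the update is in flux-difference form, the sum of the cell averages telescopes to a constant.

```python
import math

def burgers_step(u, dt, dx):
    """One conservative finite-volume update for Burgers' equation
    u_t + (u^2/2)_x = 0 with a local Lax-Friedrichs flux and periodic
    boundaries; the flux-difference form preserves sum(u) to round-off."""
    n = len(u)

    def flux(a, b):
        alpha = max(abs(a), abs(b))          # local wave-speed bound
        return 0.25 * (a * a + b * b) - 0.5 * alpha * (b - a)

    F = [flux(u[i], u[(i + 1) % n]) for i in range(n)]
    return [u[i] - dt / dx * (F[i] - F[i - 1]) for i in range(n)]

n, dx, dt = 64, 1.0 / 64, 0.005   # CFL number 0.32 for |u| <= 1
u = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
total0 = sum(u)
for _ in range(100):
    u = burgers_step(u, dt, dx)
```

Each interface flux appears once with each sign in the global sum, so conservation holds exactly even after the sine profile steepens toward a shock; entropy stability additionally bounds a discrete entropy, which monotone fluxes such as Lax-Friedrichs satisfy.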
Study of a high-order-mode gyrotron traveling-wave amplifier
Chiu, C. C.; Tsai, C. Y.; Kao, S. H.; Chu, K. R.; Barnett, L. R.; Luhmann, N. C. Jr.
2010-01-01
Physics and performance issues of a TE01-mode gyrotron traveling-wave amplifier are studied theoretically. For a high-order mode, absolute instabilities on neighboring modes at the fundamental and higher cyclotron harmonic frequencies impose severe constraints on the device capability. Methods for their stabilization are outlined, on the basis of which the performance characteristics are examined in a multidimensional parameter space under the marginal stability criterion. The results demonstrate the viability of a high-order-mode traveling-wave amplifier and provide a roadmap for design tradeoffs among power, bandwidth, and efficiency. General trends are observed and illustrated with specific examples.
Scattering of a high-order Bessel beam by a spheroidal particle
Han, Lu
2018-05-01
Within the framework of generalized Lorenz-Mie theory (GLMT), scattering from a homogeneous spheroidal particle illuminated by a high-order Bessel beam is formulated analytically. The high-order Bessel beam is expanded in terms of spheroidal vector wave functions, where the spheroidal beam shape coefficients (BSCs) are computed conveniently using an intrinsic method. Numerical results concerning the scattered field in the far zone are displayed for various parameters of the incident Bessel beam and of the scatterer. These results are expected to provide useful insights into the scattering of a Bessel beam by nonspherical particles and into particle manipulation applications using Bessel beams.
Wave-mixing with high-order harmonics in extreme ultraviolet region
Dao, Lap Van; Dinh, Khuong Ba; Le, Hoang Vu; Gaffney, Naylyn; Hannaford, Peter
2015-01-01
We report studies of the wave-mixing process in the extreme ultraviolet region with two near-infrared driving and controlling pulses of incommensurate frequencies (at 1400 nm and 800 nm). A non-collinear scheme for the two beams is used in order to spatially separate and characterise the properties of the high-order wave-mixing field. We show that the extreme ultraviolet frequency mixing can be treated by perturbative, very high-order nonlinear optics; the modification of the wave-packet of the free electron needs to be considered in this process.
Nuclear material enrichment identification method based on cross-correlation and high order spectra
Yang Fan; Wei Biao; Feng Peng; Mi Deling; Ren Yong
2013-01-01
In order to enhance the sensitivity of the nuclear material identification system (NMIS) to changes in nuclear material enrichment, the principle of high order statistical features is introduced and applied to the traditional NMIS. We present a new enrichment identification method based on cross-correlation and a high order spectrum algorithm. By applying the identification method to NMIS, 3D graphs carrying the nuclear material's character are obtained and can be used as new signatures to identify the enrichment of nuclear materials. The simulation results show that the identification method can suppress background noise and electronic system noise, and improve the sensitivity to enrichment change to exponential order with no modification of the system structure. (authors)
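The cross-correlation stage of such a processing chain is easy to sketch. The snippet below is a generic NumPy estimate of the delay between two detector channels, not the authors' NMIS chain; higher-order spectra would then be formed from Fourier transforms of such correlation sequences:

```python
import numpy as np

def cross_correlation_lag(x, y):
    """Estimate the lag of y relative to x from the full cross-correlation."""
    c = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    # Index N-1 of the full correlation corresponds to zero lag.
    return int(np.argmax(c)) - (len(x) - 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 7) + 0.1 * rng.standard_normal(500)  # delayed, noisy copy of x

lag = cross_correlation_lag(x, y)
```

With a clean delayed copy the peak of the correlation sits at the true delay, here 7 samples.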
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for suitable kernel parameters, the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
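The Gaussian kernel at the heart of KDA is easy to sketch. The snippet below (plain NumPy, not the authors' parameter-selection procedure) shows how the scale parameter σ controls how quickly similarity decays with distance, which is why its choice matters so much:

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Clamp tiny negative values caused by floating-point round-off.
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
K_narrow = gaussian_gram(X, sigma=0.5)   # small sigma: similarity decays fast
K_wide = gaussian_gram(X, sigma=5.0)     # large sigma: everything looks similar
```

Too small a σ makes every sample look unlike every other; too large a σ washes out class structure — the selection criterion in the paper navigates between these extremes.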
Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates
Hanft, J.M.; Jones, R.J.
1986-01-01
This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.
Fluidization calculation on nuclear fuel kernel coating
Sukarsono; Wardaya; Indra-Suryawan
1996-01-01
The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder for fluidization was 2 cm, and at the upper part of the cylinder it was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities for various kernel diameters, and the porosity and bed height for various stream gas velocities, were calculated. The calculation was done with a BASIC program.
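A minimal sketch of the kind of fluidization calculation described can be written with the standard Wen-Yu correlation for the minimum fluidization velocity. The paper's own BASIC program and coefficients are not given, so the correlation choice and all numbers below are illustrative assumptions:

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g, g=9.81):
    """Minimum fluidization velocity via the Wen-Yu (1966) correlation.

    Ar    = d^3 rho_g (rho_p - rho_g) g / mu^2     (Archimedes number)
    Re_mf = sqrt(33.7^2 + 0.0408 Ar) - 33.7        (particle Reynolds number)
    u_mf  = Re_mf mu / (rho_g d)
    """
    ar = d_p ** 3 * rho_g * (rho_p - rho_g) * g / mu_g ** 2
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7
    return re_mf * mu_g / (rho_g * d_p)

# Illustrative values only: dense ~500 and ~1000 micron fuel-kernel-like
# particles fluidized in an argon-like gas.
u1 = u_mf_wen_yu(d_p=500e-6, rho_p=10_000.0, rho_g=1.6, mu_g=2.2e-5)
u2 = u_mf_wen_yu(d_p=1000e-6, rho_p=10_000.0, rho_g=1.6, mu_g=2.2e-5)
```

As expected physically, larger kernels require a higher gas velocity to fluidize, which is why the paper computes velocities for a range of kernel diameters.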
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors; it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
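The empirical kernel mapping (EKM) underlying this work sends each training sample to an explicit finite-dimensional vector whose dot products reproduce the kernel values. A minimal sketch in plain NumPy, without the paper's Gauss-elimination reduction step:

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-10):
    """Map training samples to vectors r_i with <r_i, r_j> = K[i, j].

    Uses the eigendecomposition K = U diag(w) U^T; mapped samples are the
    rows of Phi = K U diag(w^{-1/2}), restricted to positive eigenvalues.
    """
    w, U = np.linalg.eigh(K)
    keep = w > tol                      # drop (numerically) zero eigenvalues
    return K @ (U[:, keep] * w[keep] ** -0.5)

# A small positive semi-definite Gram matrix built from explicit features.
X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]])
K = X @ X.T
Phi = empirical_kernel_map(K)

# Dot products in the mapped space reproduce the original kernel values,
# which is the defining property of the empirical kernel mapping.
assert np.allclose(Phi @ Phi.T, K)
```

Because the mapped vectors are explicit, ordinary linear algebra (including the elimination-based reduction the paper proposes) can then be applied to them directly.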
The Influence of Oxidation on the Quality of U3O8 Kernels
Damunir; Sukarsono; Indra Suryawan
2002-01-01
The influence of oxidation on the quality of U3O8 kernels has been studied. The investigation examined the influence of the oxidation time and temperature of uranyl-4(ammonia)-2(polyvinyl alcohol) gel on the surface area, pore radius, pore volume, porosity and diameter of the U3O8 kernels. Spheres of uranyl-4(ammonia)-2(polyvinyl alcohol) containing 150 g U/l were oxidized at 200-800°C for 2-24 hours to form U3O8 kernels. The quality of the U3O8 kernels was then characterized by their physical properties: the surface area and pore radius were measured with a surface-area meter using N2 gas as adsorbent, the pore volume and porosity with a pycnometer using bidistilled water as solvent, and the diameter with an optical microscope. The experimental results showed that the oxidation time and temperature of the uranyl-4(ammonia)-2(polyvinyl alcohol) grains influence the specific surface area, pore radius, specific pore volume, porosity and diameter of the U3O8 kernels. The best results occurred at an oxidation temperature of 600-800°C and an oxidation time of 2-5 hours. The resulting U3O8 kernel quality was: specific surface area 10.84-5.99 m²/g, specific pore volume 10.35×10⁻²-3.23×10⁻² cc/g, pore radius 21.05-24.62 Angstrom, diameter 1264-1456 μm and porosity 49.49-21.36 vol%, with a cumulative analysis error of 8.55 vol%. (author)
Comparative Analysis of Kernel Methods for Statistical Shape Learning
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Variable kernel density estimation in high-dimensional feature spaces
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Influence of differently processed mango seed kernel meal on ...
Influence of differently processed mango seed kernel meal on performance response of west African ... and TD (consisted of spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...
On methods to increase the security of the Linux kernel
Matvejchikov, I.V.
2014-01-01
Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described.
Linear and kernel methods for multi- and hypervariate change detection
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
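The kernel trick described here can be sketched compactly: kernel PCA works entirely from the centred Gram matrix and never forms the nonlinear mapping explicitly. A minimal NumPy sketch of generic kernel PCA, not the authors' IR-MAD/MAF change-detection pipeline:

```python
import numpy as np

def kernel_pca_scores(K, n_components=2):
    """Project samples onto leading kernel principal components.

    K is the Gram matrix k(x_i, x_j); the nonlinear mapping itself is
    never needed — everything is expressed through inner products.
    """
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    Kc = J @ K @ J                            # centre data in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]  # largest eigenvalues first
    # Component scores = eigenvectors scaled by sqrt(eigenvalue).
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2.0)  # Gaussian kernel

scores = kernel_pca_scores(K, n_components=2)
```

Replacing the Gaussian kernel with any other positive semi-definite kernel function changes the implicit feature space without changing a line of the analysis, which is exactly the point of kernel substitution.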
Kernel methods in orthogonalization of multi- and hypervariate data
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
Green's Kernels and meso-scale approximations in perforated domains
Maz'ya, Vladimir; Nieves, Michael
2013-01-01
There are a wide range of applications in physics and structural mechanics involving domains with singular perturbations of the boundary. Examples include perforated domains and bodies with defects of different types. The accurate direct numerical treatment of such problems remains a challenge. Asymptotic approximations offer an alternative, efficient solution. Green’s function is considered here as the main object of study rather than a tool for generating solutions of specific boundary value problems. The uniformity of the asymptotic approximations is the principal point of attention. We also show substantial links between Green’s functions and solutions of boundary value problems for meso-scale structures. Such systems involve a large number of small inclusions, so that a small parameter, the relative size of an inclusion, may compete with a large parameter, represented as an overall number of inclusions. The main focus of the present text is on two topics: (a) asymptotics of Green’s kernels in domai...
Overlay control methodology comparison: field-by-field and high-order methods
Huang, Chun-Yen; Chiu, Chui-Fu; Wu, Wen-Bin; Shih, Chiang-Lin; Huang, Chin-Chou Kevin; Huang, Healthy; Choi, DongSub; Pierson, Bill; Robinson, John C.
2012-03-01
Overlay control in advanced integrated circuit (IC) manufacturing is becoming one of the leading lithographic challenges in the 3x and 2x nm process nodes. Production overlay control can no longer meet the stringent emerging requirements based on linear composite wafer and field models with sampling of 10 to 20 fields and 4 to 5 sites per field, which was the industry standard for many years. Methods that have emerged include overlay metrology in many or all fields, including the high order field model method called high order control (HOC), and field-by-field control (FxFc) methods, also called correction per exposure. The HOC and FxFc methods were initially introduced as relatively infrequent scanner qualification activities meant to supplement linear production schemes. More recently, however, it has become clear that production control also requires intense sampling and similar high order and FxFc methods. The added control benefits of high order and FxFc overlay methods need to be balanced against the increased metrology requirements, however, without putting material at risk. Of critical importance is the proper control of edge fields, which requires intensive sampling in order to minimize signatures. In this study we compare various methods of overlay control, including the performance levels that can be achieved.
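The linear composite wafer model referred to here fits a handful of correctables (translation, scale, rotation) to measured overlay errors by least squares. A generic sketch with hypothetical coefficient values, not the authors' HOC or FxFc models:

```python
import numpy as np

def fit_linear_overlay(x, y, dx, dy):
    """Fit dx = Tx + Sx*x - R*y and dy = Ty + Sy*y + R*x by least squares.

    A simple 5-parameter wafer model (translations, scales, one rotation);
    production models typically carry more terms.
    """
    n = len(x)
    A = np.zeros((2 * n, 5))            # columns: Tx, Ty, Sx, Sy, R
    A[:n, 0] = 1.0; A[:n, 2] = x; A[:n, 4] = -y
    A[n:, 1] = 1.0; A[n:, 3] = y; A[n:, 4] = x
    b = np.concatenate([dx, dy])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params                       # [Tx, Ty, Sx, Sy, R]

rng = np.random.default_rng(2)
x = rng.uniform(-150, 150, 40)          # hypothetical wafer coordinates in mm
y = rng.uniform(-150, 150, 40)
# Hypothetical "true" correctables used to synthesize noise-free overlay data.
dx = 2.0 + 3e-3 * x - 5e-4 * y
dy = -1.0 + 1e-3 * y + 5e-4 * x

params = fit_linear_overlay(x, y, dx, dy)
```

High order control replaces the linear columns of A with higher-degree polynomial terms, and field-by-field control fits such a model per exposure; both multiply the metrology sampling needed, which is the tradeoff the study examines.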
Developing Student-Centered Learning Model to Improve High Order Mathematical Thinking Ability
Saragih, Sahat; Napitupulu, Elvis
2015-01-01
The purpose of this research was to develop a student-centered learning model aiming to improve the high order mathematical thinking ability of junior high school students, based on Curriculum 2013, in North Sumatera, Indonesia. The special purpose of this research was to analyze and to formulate the purpose of mathematics lessons in high order…
Etches, Adam; Madsen, Christian Bruun; Madsen, Lars Bojer
2010-01-01
A recent paper reported elliptically polarized high-order harmonics from aligned N2 using a linearly polarized driving field [X. Zhou et al., Phys. Rev. Lett. 102, 073902 (2009)]. This observation cannot be explained in the standard treatment of the Lewenstein model and has been ascribed to many...
Zhang, Kemei; Zhao, Cong-Ran; Xie, Xue-Jun
2015-12-01
This paper considers the problem of output feedback stabilisation for stochastic high-order feedforward nonlinear systems with time-varying delay. By using the homogeneous domination theory and resolving several troublesome obstacles in the design and analysis, an output feedback controller is constructed that renders the closed-loop system globally asymptotically stable in probability.
Highly ordered three-dimensional macroporous carbon spheres for determination of heavy metal ions
Zhang, Yuxiao; Zhang, Jianming [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Liu, Yang, E-mail: yangl@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Huang, Hui [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China); Kang, Zhenhui, E-mail: zhkang@suda.edu.cn [Institute of Functional Nano and Soft Materials (FUNSOM) and Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123 (China)
2012-04-15
Highlights: ► Highly ordered three-dimensional macroporous carbon spheres (MPCSs) were prepared. ► MPCS was covalently modified by cysteine (MPCS-CO-Cys). ► MPCS-CO-Cys was used for the first time in electrochemical detection of heavy metal ions. ► Heavy metal ions such as Pb²⁺ and Cd²⁺ can be simultaneously determined. -- Abstract: An effective voltammetric method for the detection of trace heavy metal ions using chemically modified highly ordered three-dimensional macroporous carbon sphere electrode surfaces is described. The highly ordered three-dimensional macroporous carbon spheres were prepared by carbonization of glucose in a silica crystal bead template, followed by removal of the template. They were then covalently modified by cysteine, an amino acid with high affinity towards some heavy metals. The materials were characterized by physical adsorption of nitrogen, scanning electron microscopy and transmission electron microscopy, while Fourier-transform infrared spectroscopy was used to characterize the functional groups on the surface of the carbon spheres. High sensitivity was exhibited when this material was used in the electrochemical detection (square wave anodic stripping voltammetry) of heavy metal ions, owing to the porous structure. The potential application for simultaneous detection of heavy metal ions was also investigated.
2007-12-06
High order well-balanced schemes to a class of hyperbolic systems with source terms, Boletin de la Sociedad Espanola de Matematica Aplicada, v34 (2006), pp. 69-80.
Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model
Mo, Qianxing; Liang, Faming
2010-01-01
approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic
Dynamic analysis of high-order Cohen-Grossberg neural networks with time delay
Chen Zhang; Zhao Donghua; Ruan Jiong
2007-01-01
In this paper, a class of high-order Cohen-Grossberg neural networks with time delay is studied. Several sufficient conditions for global asymptotic stability and global exponential stability are obtained using Lyapunov functions and the LMI method. Finally, two examples are given to illustrate the effectiveness of our method.
M. Denche; A. L. Marhoune
2003-01-01
In this paper, we study a mixed problem with integral boundary conditions for a high order partial differential equation of mixed type. We prove the existence and uniqueness of the solution. The proof is based on energy inequality, and on the density of the range of the operator generated by the considered problem.
A hierarchy of high-order theories for modes in an elastic layer
Sorokin, Sergey V.; Chapman, C. John
2015-01-01
A hierarchy of high-order theories for symmetric and skew-symmetric modes in an infinitely long elastic layer of constant thickness is derived. For each member of the hierarchy, boundary conditions for layers of finite length are formulated. The forcing problems at several approximation...
High order curvilinear finite elements for elastic–plastic Lagrangian dynamics
Dobrev, Veselin A.; Kolev, Tzanio V.; Rieben, Robert N.
2014-01-01
This paper presents a high-order finite element method for calculating elastic–plastic flow on moving curvilinear meshes and is an extension of our general high-order curvilinear finite element approach for solving the Euler equations of gas dynamics in a Lagrangian frame [1,2]. In order to handle the transition to plastic flow, we formulate the stress–strain relation in rate (or incremental) form and augment our semi-discrete equations for Lagrangian hydrodynamics with an additional evolution equation for the deviatoric stress which is valid for arbitrary order spatial discretizations of the kinematic and thermodynamic variables. The semi-discrete equation for the deviatoric stress rate is developed for 2D planar, 2D axisymmetric and full 3D geometries. For each case, the strain rate is approximated via a collocation method at zone quadrature points, while the deviatoric stress is approximated using an L² projection onto the thermodynamic basis. We apply high order, energy conserving, explicit time stepping methods to the semi-discrete equations to develop the fully discrete method. We conclude with numerical results from an extensive series of verification tests that demonstrate several practical advantages of using high-order finite elements for elastic–plastic flow.
High-order harmonics from bow wave caustics driven by a high-intensity laser
Pirozhkov, A.S.; Kando, M.; Esirkepov, T.Zh.
2012-01-01
We propose a new mechanism of high-order harmonic generation during the interaction of a high-intensity laser pulse with underdense plasma. A tightly focused laser pulse creates a cavity in the plasma, pushing electrons aside and exciting the wake wave and the bow wave. At the joint of the cavity wall and the bow wave boundary, an annular spike of electron density is formed. This spike surrounds the cavity and moves together with the laser pulse. Collective motion of electrons in the spike driven by the laser field generates high-order harmonics. The strong localization of the electron spike, its robustness to oscillations imposed by the laser field and, consequently, its ability to produce high-order harmonics are explained by catastrophe theory. The proposed mechanism explains the experimental observations of high-order harmonics with the 9 TW J-KAREN laser (JAEA, Japan) and the 120 TW Astra Gemini laser (CLF RAL, UK) [A. S. Pirozhkov, et al., arXiv:1004.4514 (2010); A. S. Pirozhkov et al, AIP Proceedings, this volume]. The theory is corroborated by high-resolution two- and three-dimensional particle-in-cell simulations.
Exact Sampling and Decoding in High-Order Hidden Markov Models
Carter, S.; Dymetman, M.; Bouchard, G.
2012-01-01
We present a method for exact optimization and sampling from high order Hidden Markov Models (HMMs), which are generally handled by approximation techniques. Motivated by adaptive rejection sampling and heuristic search, we propose a strategy based on sequentially refining a lower-order language
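A standard way to make high-order HMM decoding exact is to expand the model into a first-order one over state tuples and run Viterbi there; the state space grows exponentially in the order, which is why the authors pursue smarter search strategies. A minimal second-order sketch (generic state-pair expansion, not the paper's rejection-sampling method):

```python
import itertools

def viterbi_second_order(obs, states, log_init, log_trans2, log_emit):
    """Exact MAP decoding for a second-order HMM via state-pair expansion.

    log_init[(a, b)]      : log-score of starting in states (a, b)
    log_trans2[(a, b, c)] : log P(s_t = c | s_{t-2} = a, s_{t-1} = b)
    log_emit[(s, o)]      : log P(o | s)
    """
    pairs = list(itertools.product(states, repeat=2))
    # delta[(a, b)] = best log-score of a path whose last two states are (a, b).
    delta = {p: log_init[p] + log_emit[(p[0], obs[0])] + log_emit[(p[1], obs[1])]
             for p in pairs}
    back = []
    for t in range(2, len(obs)):
        new_delta, bp = {}, {}
        for b, c in pairs:
            best_a, best = max(
                ((a, delta[(a, b)] + log_trans2[(a, b, c)]) for a in states),
                key=lambda kv: kv[1])
            new_delta[(b, c)] = best + log_emit[(c, obs[t])]
            bp[(b, c)] = best_a
        delta, back = new_delta, back + [bp]
    (a, b), _ = max(delta.items(), key=lambda kv: kv[1])
    path = [a, b]
    for bp in reversed(back):
        path.insert(0, bp[(path[0], path[1])])
    return path

# Toy example with unnormalized log-scores: transitions and initial scores are
# uniform, emissions strongly prefer s == o, so the MAP path follows the data.
states = [0, 1]
obs = [0, 1, 0]
log_emit = {(s, o): (0.0 if s == o else -10.0) for s in states for o in states}
log_init = {p: 0.0 for p in itertools.product(states, repeat=2)}
log_trans2 = {q: 0.0 for q in itertools.product(states, repeat=3)}

path = viterbi_second_order(obs, states, log_init, log_trans2, log_emit)
```

For an order-k model over S states the expanded chain has S^k tuple-states, so this exact approach quickly becomes expensive — the motivation for the refinement strategy the paper proposes.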
High-Order Approximation of Chromatographic Models using a Nodal Discontinuous Galerkin Approach
Meyer, Kristian; Huusom, Jakob Kjøbsted; Abildskov, Jens
2018-01-01
by Javeed et al. (2011a,b, 2013) with an efficient quadrature-free implementation. The framework is used to simulate linear and non-linear multicomponent chromatographic systems. The results confirm arbitrary high-order accuracy and demonstrate the potential for accuracy and speed-up gains obtainable...
High Order Sliding Mode Control of Doubly-fed Induction Generator under Unbalanced Grid Faults
Zhu, Rongwu; Chen, Zhe; Wu, Xiaojie
2013-01-01
This paper deals with a doubly-fed induction generator-based (DFIG) wind turbine system under grid fault conditions, such as unbalanced grid voltage and three-phase grid faults, using high order sliding mode control (SMC). A second order sliding mode controller, which is robust with respect...
J.F. Schouten revisited : pitch of complex tones having many high-order harmonics
Smurzynski, J.; Houtsma, A.J.M.
1988-01-01
Four experiments are reported which deal with pitch perception of harmonic complex tones containing many high-order, aurally unresolvable partials. Melodic-interval identification performance in the case of sounds with increasing harmonic order remains significantly above chance level, even if the
Etches, Adam; Madsen, Christian Bruun; Madsen, Lars Bojer
A correction term is introduced in the stationary-point analysis on high-order harmonic generation (HHG) from aligned molecules. Arising from a multi-centre expansion of the electron wave function, this term brings our numerical calculations of the Lewenstein model into qualitative agreement...
High-order finite difference solution for 3D nonlinear wave-structure interaction
Ducrozet, Guillaume; Bingham, Harry B.; Engsig-Karup, Allan Peter
2010-01-01
This contribution presents our recent progress on developing an efficient fully-nonlinear potential flow model for simulating 3D wave-wave and wave-structure interaction over arbitrary depths (i.e. in coastal and offshore environment). The model is based on a high-order finite difference scheme O...
Comparison of high order algorithms in Aerosol and Aghora for compressible flows
Mbengoue D. A.
2013-12-01
Full Text Available This article summarizes the work done within the Colargol project during CEMRACS 2012. The aim of this project is to compare the implementations of high order finite element methods for compressible flows that have been developed at ONERA and at INRIA for about one year, within the Aghora and Aerosol libraries.
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
Picot, Adeline; Barreau, Christian; Pinson-Gadais, Laëtitia; Piraux, François; Caron, Daniel; Lannou, Christian; Richard-Forget, Florence
2011-01-01
The fungal pathogen Fusarium verticillioides infects maize ears and produces fumonisins, known for their adverse effects on human and animal health. Basic questions remain unanswered regarding the kernel stage(s) associated with fumonisin biosynthesis and the kernel components involved in fumonisin regulation during F. verticillioides-maize interaction under field conditions. In this 2-year field study, the time course of F. verticillioides growth and fumonisin accumulation in developing maize kernels, along with the variations in kernel pH and amylopectin content, were monitored using relevant and accurate analytical tools. In all experiments, the most significant increase in fumonisin accumulation or in fumonisin productivity (i.e., fumonisin production per unit of fungus) was shown to occur within a very short period of time, between 22/32 and 42 days after inoculation and corresponding to the dent stage. This stage was also characterized by acidification in the kernel pH and a maximum level of amylopectin content. Our data clearly support published results based on in vitro experiments suggesting that the physiological stages of the maize kernel play a major role in regulating fumonisin production. Here we have validated this result for in planta and field conditions, and we demonstrate that under such conditions the dent stage is the most conducive for fumonisin accumulation. PMID:21984235
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents opportunities for improving the quality of RTM images.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
Relationship between attenuation coefficients and dose-spread kernels
Boyer, A.L.
1988-01-01
Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods.
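As a toy illustration of the convolution relation and the energy-conservation check mentioned above, one can convolve a primary fluence map with a normalized kernel. The Gaussian shape below is an assumption standing in for a real Monte Carlo dose-spread kernel:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size=21, sigma=3.0):
    """Toy stand-in for a dose-spread kernel, normalized to unit energy."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # unit sum: all released energy is deposited somewhere

def dose_from_fluence(fluence, kernel):
    """Dose distribution = primary fluence convolved with the dose-spread kernel."""
    return fftconvolve(fluence, kernel, mode="full")

# Uniform 'beam' of primary fluence on a 64 x 64 grid
fluence = np.zeros((64, 64))
fluence[16:48, 16:48] = 1.0
dose = dose_from_fluence(fluence, gaussian_kernel())
```

Because the kernel sums to one, the total dose must equal the total primary fluence in the `full` convolution, which is exactly the kind of conservation check the abstract describes.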
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric
2010-01-01
Babcock & Wilcox (B&W) has been producing high-quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14%-enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6%-enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural-uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Additional modifications have been studied toward the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.
Consistent Estimation of Pricing Kernels from Noisy Price Data
Vladislav Kargin
2003-01-01
If pricing kernels are assumed non-negative, then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
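A minimal sketch of the non-negativity-constrained least-squares estimator this abstract refers to, on a made-up finite state space. The payoff matrix, state probabilities and noise level are illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_states, n_assets = 5, 40
probs = np.full(n_states, 1.0 / n_states)        # physical state probabilities
m_true = np.array([1.4, 1.2, 1.0, 0.8, 0.6])     # true (non-negative) pricing kernel

# Asset i is priced as  price_i = sum_s payoffs[i, s] * probs[s] * m[s]
payoffs = rng.uniform(0.0, 2.0, size=(n_assets, n_states))
noisy_prices = payoffs @ (probs * m_true) + rng.normal(0.0, 0.01, size=n_assets)

# Constrained least squares:  min ||A m - p||  subject to  m >= 0
A = payoffs * probs              # column s scaled by probs[s] (broadcasting)
m_hat, _ = nnls(A, noisy_prices)
```

With many assets relative to states and small price noise, the non-negative least-squares estimate stays close to the true kernel, in line with the consistency claim.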
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high-order non-dissipative base scheme, followed by an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. Therefore, the idea of designing a well-balanced filter scheme is straightforward: choose a well-balanced base scheme together with a well-balanced filter (both of high order). A typical class of such schemes, shown in this paper, consists of high-order central difference or predictor-corrector (PC) schemes with a high-order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of filter methods with those of well-balanced schemes: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
High-order FDTD methods via derivative matching for Maxwell's equations with material interfaces
Zhao Shan; Wei, G.W.
2004-01-01
This paper introduces a series of novel hierarchical implicit derivative matching methods to restore the accuracy of high-order finite-difference time-domain (FDTD) schemes of computational electromagnetics (CEM) with material interfaces in one (1D) and two spatial dimensions (2D). By making use of fictitious points, systematic approaches are proposed to locally enforce the physical jump conditions at material interfaces in a preprocessing stage, to arbitrarily high orders of accuracy in principle. Although often limited by numerical instability, orders up to 16 and 12 are achieved, respectively, in 1D and 2D. Detailed stability analyses are presented for the present approach to examine the upper limit in constructing embedded FDTD methods. As natural generalizations of the high-order FDTD schemes, the proposed derivative matching methods automatically reduce to the standard FDTD schemes when the material interfaces are absent. An interesting feature of the present approach is that it encompasses a variety of schemes of different orders in a single code. Another feature of the present approach is that it can be robustly implemented with other high accuracy time-domain approaches, such as the multiresolution time-domain method and the local spectral time-domain method, to cope with material interfaces. Numerical experiments on both 1D and 2D problems are carried out to test the convergence, examine the stability, assess the efficiency, and explore the limitations of the proposed methods. It is found that, operating at their best capacity, the proposed high-order schemes can be over 2000 times more efficient than their fourth-order versions in 2D. In conclusion, the present work indicates that the proposed hierarchical derivative matching methods might lead to practical high-order schemes for the numerical solution of time-domain Maxwell's equations with material interfaces.
Quantum-orbit theory of high-order atomic processes in strong fields
Milosevic, D.B.
2005-01-01
Full text: Atoms subjected to strong laser fields can emit electrons and photons of very high energies. These processes find a highly intuitive and also quantitative explanation in terms of Feynman's path integral and the concept of quantum orbits. The quantum-orbit formalism is particularly useful for high-order atomic processes in strong laser fields. For such multi-step processes there is an intermediate step during which the electron is approximately under the influence of the laser field only and can absorb energy from the field. This leads to the appearance of plateau structures in the emitted electron or photon spectra. Typical examples of such processes are high-order harmonic generation (HHG) and high-order above-threshold ionization (HATI). These structures were also observed in high-order above-threshold detachment, laser-assisted x-ray-atom scattering, laser-assisted electron-ion recombination, and electron-atom scattering. We will present the high-order strong-field approximation (SFA) and show how the quantum-orbit formalism follows from it. This will be done for the various above-mentioned processes. For HHG, a classification of quantum orbits will be given [10] and generalized to the presence of a static field. The low-energy part of the HHG spectra and the enhancement of HHG near the channel closings can be explained by taking into account a large number of quantum orbits. For HATI we will concentrate on the case of a few-cycle laser pulse. The influence of the carrier-envelope relative phase on the HATI spectrum can easily be explained in terms of quantum orbits. The SFA and the quantum-orbit results will be compared with the results obtained by Dieter Bauer using ab initio solutions of the time-dependent Schroedinger equation. It will be shown that the Coulomb effects are important for low-energy electron spectra. Refs. 11 (author)
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.
2009-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.
2011-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
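The core of the approach, representing each symbol as a 2D kernel density and comparing densities with the Kullback-Leibler divergence, can be sketched in a few lines. The toy point clouds stand in for real symbol strokes, and the grid-based KL estimate is one simple discretization choice:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_on_grid(points, grid_coords):
    """Normalized 2D kernel density estimate of a point set on a fixed grid."""
    p = gaussian_kde(points.T)(grid_coords)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence KL(p || q)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# Two toy 'symbols' as 2D point clouds
rng = np.random.default_rng(0)
symbol_a = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
symbol_b = rng.normal([1.5, 1.5], 0.5, size=(200, 2))

xx, yy = np.meshgrid(np.linspace(-3, 4, 50), np.linspace(-3, 4, 50))
grid = np.vstack([xx.ravel(), yy.ravel()])
pa = kde_on_grid(symbol_a, grid)
pb = kde_on_grid(symbol_b, grid)
```

A symbol's density matches itself far better than a different symbol's density, which is the basis of the similarity assessment.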
Flexible Scheduling in Multimedia Kernels: An Overview
Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.
1999-01-01
Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment, where we can make a considerable profit by a better and more
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
A synthesis of empirical plant dispersal kernels
Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.
2017-01-01
Vol. 105, No. 1 (2017), pp. 6-19. ISSN 0022-0477. Institutional support: RVO:67985939. Keywords: dispersal kernel * dispersal mode * probability density function. Subject RIV: EH - Ecology, Behaviour. OECD field: Ecology. Impact factor: 5.813, year: 2016
Analytic continuation of weighted Bergman kernels
Engliš, Miroslav
2010-01-01
Vol. 94, No. 6 (2010), pp. 622-650. ISSN 0021-7824. R&D Projects: GA AV ČR IAA100190802. Keywords: Bergman kernel * analytic continuation * Toeplitz operator. Subject RIV: BA - General Mathematics. Impact factor: 1.450, year: 2010. http://www.sciencedirect.com/science/article/pii/S0021782410000942
On convergence of kernel learning estimators
Norkin, V.I.; Keyzer, M.A.
2009-01-01
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability
Analytic properties of the Virasoro modular kernel
Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)
2017-06-15
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)
Kernel based subspace projection of hyperspectral images
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear function approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
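A heavily simplified sketch of kernel temporal-difference learning, in the spirit of the abstract: the value function is kept as a kernel expansion and every TD error appends a weighted kernel center. This is a KTD(0)-style update, not the authors' full KTD(λ) decoder; the kernel width, step size and the toy chain task are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, gamma=6.0):
    return np.exp(-gamma * (a - b) ** 2)

class KernelTD:
    """Kernel TD(0): functional-gradient TD learning in an RKHS.

    V(s) = sum_i coeffs[i] * k(centers[i], s); each TD error adds a
    new center at the visited state with coefficient eta * delta.
    """
    def __init__(self, eta=0.1, discount=0.9, kwidth=6.0):
        self.eta, self.g, self.kwidth = eta, discount, kwidth
        self.centers, self.coeffs = [], []

    def value(self, s):
        if not self.centers:
            return 0.0
        c = np.asarray(self.centers)
        return float(rbf(c, s, self.kwidth) @ np.asarray(self.coeffs))

    def update(self, s, r, s_next, terminal):
        target = r + (0.0 if terminal else self.g * self.value(s_next))
        delta = target - self.value(s)        # TD error
        self.centers.append(s)                # functional-gradient step:
        self.coeffs.append(self.eta * delta)  # add a kernel centered at s

# Toy chain MRP: 0 -> 1 -> 2 -> 3 -> terminal, reward 1 on the last step
agent = KernelTD()
for _ in range(400):
    for s in range(4):
        agent.update(float(s), 1.0 if s == 3 else 0.0,
                     float(s + 1), terminal=(s == 3))
v = [agent.value(float(s)) for s in range(4)]
```

With a narrow kernel the expansion behaves almost tabularly, so the learned values approach the discounted returns of the chain.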
Scattering kernels and cross sections working group
Russell, G.; MacFarlane, B.; Brun, T.
1998-01-01
Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics
Enhanced gluten properties in soft kernel durum wheat
Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...
Predictive Model Equations for Palm Kernel (Elaeis guineensis J ...
Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...
Stable Kernel Representations as Nonlinear Left Coprime Factorizations
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
... Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...
21 CFR 176.350 - Tamarind seed kernel powder.
2010-04-01
... Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...
Heat kernel analysis for Bessel operators on symmetric cones
Möllers, Jan
2014-01-01
... The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...
A Fast and Simple Graph Kernel for RDF
de Vries, G.K.D.; de Rooij, S.
2013-01-01
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
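The path-counting idea behind this kernel can be sketched on a toy edge-labelled graph: extract the label paths reachable from each instance node up to a fixed depth, then take the inner product of the two path-count vectors. This is a simplified illustration of a counting-paths kernel, not the paper's exact construction, and all node and predicate names below are made up:

```python
from collections import Counter

def extract_paths(graph, node, depth):
    """All label paths of length <= depth starting from `node` in a graph
    given as {subject: [(predicate, object), ...]}."""
    paths = Counter()
    frontier = [((), node)]
    for _ in range(depth):
        nxt = []
        for path, n in frontier:
            for pred, obj in graph.get(n, []):
                p = path + (pred, obj)
                paths[p] += 1
                nxt.append((p, obj))
        frontier = nxt
    return paths

def path_kernel(graph, a, b, depth=2):
    """Kernel value = inner product of the two instances' path-count vectors."""
    pa, pb = extract_paths(graph, a, depth), extract_paths(graph, b, depth)
    return sum(count * pb[p] for p, count in pa.items())

# Tiny RDF-like graph (all names are hypothetical)
g = {
    "alice": [("knows", "bob"), ("worksAt", "acme")],
    "bob":   [("worksAt", "acme")],
    "carol": [("livesIn", "paris")],
}
```

Instances sharing paths (here, `alice` and `bob` both work at `acme`) score higher than instances with no paths in common, and the kernel is symmetric by construction.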
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
... Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...
Single pass kernel k-means clustering method
paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
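The kernel k-means iteration referenced in this snippet can be sketched as follows. This is a plain Lloyd-style version operating on a precomputed kernel matrix, not the single-pass variant the paper proposes; the RBF kernel, toy blobs and seeding rule are illustrative assumptions:

```python
import numpy as np

def kernel_kmeans(K, k, init_labels, n_iter=20):
    """Lloyd-style kernel k-means on a precomputed kernel matrix K.

    Squared feature-space distance from point i to the centroid of cluster c:
        K[i, i] - 2 * mean_{j in c} K[i, j] + mean_{j, l in c} K[j, l]
    """
    n = K.shape[0]
    labels = np.asarray(init_labels).copy()
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if idx.size == 0:              # keep empty clusters out of the running
                dist[:, c] = np.inf
                continue
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                          # assignments stable: converged
        labels = new_labels
    return labels

# Two well-separated toy blobs and an RBF kernel
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.2, size=(15, 2)),
               rng.normal([5.0, 5.0], 0.2, size=(15, 2))])
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
init = np.where(K[:, 0] > K[:, -1], 0, 1)  # seed clusters from the first and last point
labels = kernel_kmeans(K, 2, init)
```

Because centroids live only implicitly in feature space, all computations go through the kernel matrix; this is what allows nonlinearly separable clusters to be found.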
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen
2015-04-01
This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive
Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)
2014-12-01
Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, those are the largest-scale CPU–GPU collaborative simulations
Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua
2014-01-01
Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when CPUs and accelerators must collaborate to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, those are the largest-scale CPU–GPU collaborative simulations
Quantum Key Distribution with High Order Fibonacci-like Orbital Angular Momentum States
Pan, Ziwen; Cai, Jiarui; Wang, Chuan
2017-08-01
The coding space in quantum communication could be expanded to high-dimensional space by using orbital angular momentum (OAM) states of photons, as both the capacity of the channel and security are enhanced. Here we present a novel approach to realize high-capacity quantum key distribution (QKD) by exploiting OAM states. The innovation of the proposed approach relies on a unique type of entangled-photon source which produces entangled photons with OAM randomly distributed among high order Fibonacci-like numbers, and a new physical mechanism for efficiently sharing keys. This combination of entanglement with the mathematical properties of high order Fibonacci sequences provides the QKD protocol immunity to photon-number-splitting attacks and allows secure generation of long keys from few photons. Unlike other protocols, reference frame alignment and active modulation of production and detection bases are unnecessary.
High order aberrations calculation of a hexapole corrector using a differential algebra method
Kang, Yongfeng, E-mail: yfkang@mail.xjtu.edu.cn [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Liu, Xing [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Zhao, Jingyi, E-mail: jingyi.zhao@foxmail.com [School of Science, Chang'an University, Xi'an 710064 (China); Tang, Tiantong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)
2017-02-21
The differential algebraic (DA) method has proved to be an unusual and effective tool in numerical analysis. Based on nonstandard analysis, it conveniently implements differentiation up to arbitrarily high order. In this paper, the DA method has been employed to compute the high order aberrations, up to fifth order, of a practical hexapole corrector including round lenses and hexapole lenses. A program has been developed and tested as well. The electromagnetic field at an arbitrary point is obtained from local analytic expressions, and the field potentials are then transformed into a form that can be operated on in the DA calculation. The geometric and chromatic aberrations up to fifth order of a practical hexapole corrector system are calculated by the developed program.
High-order dispersion control of 10-petawatt Ti:sapphire laser facility.
Li, Shuai; Wang, Cheng; Liu, Yanqi; Xu, Yi; Li, Yanyan; Liu, Xingyan; Gan, Zebiao; Yu, Lianghong; Liang, Xiaoyan; Leng, Yuxin; Li, Ruxin
2017-07-24
A grism pair is utilized to control the high-order dispersion of the Shanghai Superintense Ultrafast Lasers Facility, which is a large-scale project aimed at delivering 10-PW laser pulses. We briefly present the characteristics of the laser system and calculate the cumulative B-integral, which determines the nonlinear phase shift influence on material dispersion. Three parameters are selected, grism separation, angle of incidence and slant distance of grating compressor, to determine their optimal values through an iterative searching procedure. Both the numerical and experimental results confirm that the spectral phase distortion is controlled, and the recompressed pulse with a duration of 24 fs is obtained in the single-shot mode. The distributions and stabilities of the pulse duration at different positions of the recompressed beam are also investigated. This approach offers a new feasible solution for the high-order dispersion compensation of femtosecond petawatt laser systems.
A multiresolution method for solving the Poisson equation using high order regularization
Hejlesen, Mads Mølholm; Walther, Jens Honore
2016-01-01
We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates...
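As a rough illustration of the FFT-based free-space convolution underlying the solver described above (a 1D analogue with the unregularized kernel, not the authors' high-order regularized 2D/3D method), consider:

```python
import numpy as np

# 1D sketch: free-space solutions of phi'' = f are a convolution with the
# Green's function G(x) = |x| / 2. Zero-padding to a doubled grid makes the
# FFT convolution linear rather than circular, which is what yields exact
# free-space (instead of periodic) boundary conditions.
n = 256
h = 10.0 / n
x = (np.arange(n) - n // 2) * h
f = np.exp(-x**2)                        # smooth, rapidly decaying source

m = 2 * n                                # doubled grid for linear convolution
k = np.arange(m)
offsets = np.where(k < n, k, k - m) * h  # signed grid offsets, circulant order
G = 0.5 * np.abs(offsets)                # 1D free-space Green's function

F = np.zeros(m)
F[:n] = f
phi = h * np.real(np.fft.ifft(np.fft.fft(G) * np.fft.fft(F)))[:n]

# Discrete check: the centered second difference of phi recovers f exactly at
# interior points, because the second difference of |x|/2 is a discrete delta
# of weight 1/h.
d2 = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2
err = np.max(np.abs(d2 - f[1:-1]))
```

The doubled-grid trick is the standard Hockney-Eastwood domain doubling; the paper's contribution (regularizing G to raise the convergence order, and matching regularizations across refinement patches) is not reproduced here.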
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
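The Fourier (von Neumann) stability analysis mentioned in the abstract can be illustrated on the simplest diffusion discretization; the snippet below analyzes forward Euler with central differences, a generic stand-in rather than one of the paper's reconstruction or DG schemes:

```python
import numpy as np

# Von Neumann stability check for u_t = u_xx with forward Euler in time and
# second-order central differences in space. Substituting a Fourier mode
# u_j^n = g^n * exp(i*j*theta) gives the growth factor below; the scheme is
# stable when |g(theta)| <= 1 for all wavenumbers theta.
def growth_factor(theta, lam):
    """g(theta) for time-step ratio lam = dt / dx**2."""
    return 1.0 - 4.0 * lam * np.sin(theta / 2.0)**2

theta = np.linspace(0.0, np.pi, 201)
stable = np.max(np.abs(growth_factor(theta, 0.5)))    # lam = 1/2: marginal
unstable = np.max(np.abs(growth_factor(theta, 0.6)))  # lam > 1/2: blows up
```

The same substitution applied to a DG or reconstruction scheme yields a matrix-valued amplification factor per cell, whose spectral radius plays the role of |g|.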
Quantum-path control in high-order harmonic generation at high photon energies
Zhang Xiaoshi; Lytle, Amy L; Cohen, Oren; Murnane, Margaret M; Kapteyn, Henry C
2008-01-01
We show through experiment and calculations how all-optical quasi-phase-matching of high-order harmonic generation can be used to selectively enhance emission from distinct quantum trajectories at high photon energies. Electrons rescattered in a strong field can traverse short and long quantum trajectories that exhibit differing coherence lengths as a result of variations in intensity of the driving laser along the direction of propagation. By varying the separation of the pulses in a counterpropagating pulse train, we selectively enhance either the long or the short quantum trajectory, and observe distinct spectral signatures in each case. This demonstrates a new type of coupling between the coherence of high-order harmonic beams and the attosecond time-scale quantum dynamics inherent in the process.
Optimal Design of High-Order Passive-Damped Filters for Grid-Connected Applications
Beres, Remus Narcis; Wang, Xiongfei; Blaabjerg, Frede
2016-01-01
Harmonic stability problems caused by the resonance of high-order filters in power electronic systems are ever increasing. The use of passive damping provides a robust solution to address these issues, but at the price of reduced efficiency due to the presence of additional passive components. Hence, a new method is proposed in this paper to optimally design the passive damping circuit for LCL filters and LCL filters with multi-tuned LC traps. In short, the optimization problem reduces to the proper choice of the multi-split capacitors or inductors in the high-order filter. Compared to existing … filter resonance. The passive filters are designed, built and validated both analytically and experimentally for verification.
Pricing Exotic Options under a High-Order Markovian Regime Switching Model
Wai-Ki Ching
2007-10-01
We consider the pricing of exotic options when the price dynamics of the underlying risky asset are governed by a discrete-time Markovian regime-switching process driven by an observable, high-order Markov model (HOMM). We assume that the market interest rate, the drift, and the volatility of the underlying risky asset's return switch over time according to the states of the HOMM, which are interpreted as the states of an economy. We will then employ the well-known tool in actuarial science, namely, the Esscher transform to determine an equivalent martingale measure for option valuation. Moreover, we will also investigate the impact of the high-order effect of the states of the economy on the prices of some path-dependent exotic options, such as Asian options, lookback options, and barrier options.
Multiple-state Feshbach resonances mediated by high-order couplings
Hemming, Christopher J.; Krems, Roman V.
2008-01-01
We present a study of multistate Feshbach resonances mediated by high-order couplings. Our analysis focuses on a system with one open scattering state and multiple bound states. The scattering state is coupled to one off-resonant bound state and multiple Feshbach resonances are induced by a sequence of indirect couplings between the closed channels. We derive a general recursive expression that can be used to fit the experimental data on multistate Feshbach resonances involving one continuum state and several bound states and present numerical solutions for several model systems. Our results elucidate general features of multistate Feshbach resonances induced by high-order couplings and suggest mechanisms for controlling collisions of ultracold atoms and molecules with external fields.
Decomposition of conditional probability for high-order symbolic Markov chains
Melnik, S. S.; Usatenko, O. V.
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
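A minimal sketch of fitting a high-order symbolic Markov chain by counting (plain m-gram estimation rather than the paper's memory-function decomposition; the correlated test sequence below is invented for illustration):

```python
import numpy as np
from collections import defaultdict

def fit_high_order_chain(seq, order):
    """Estimate P(next symbol | last `order` symbols) by m-gram counting."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        counts[ctx][seq[i]] += 1
    probs = {}
    for ctx, c in counts.items():
        total = sum(c.values())
        probs[ctx] = {s: cnt / total for s, cnt in c.items()}
    return probs

rng = np.random.default_rng(0)
# Correlated binary sequence: the next symbol repeats the symbol two steps
# back with probability 0.8 -- a simple order-2 memory.
seq = [0, 1]
for _ in range(20000):
    seq.append(seq[-2] if rng.random() < 0.8 else 1 - seq[-2])

probs = fit_high_order_chain(seq, order=2)
p_repeat = probs[(0, 1)].get(0, 0.0)   # estimate of P(x_t = x_{t-2})
```

The paper's decomposition goes further: instead of storing a full conditional table, the conditional probability is expanded in multilinear memory-function monomials, which stay tractable as the chain order grows.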
Efficient and tunable high-order harmonic light sources for photoelectron spectroscopy at surfaces
Chiang, Cheng-Tien; Huth, Michael; Trützschler, Andreas; Schumann, Frank O.; Kirschner, Jürgen; Widdra, Wolf
2015-01-01
Highlights: • An overview of photoelectron spectroscopy using high-order harmonics is presented. • Photoemission spectra on Ag(0 0 1) using megahertz harmonics are shown. • A gas recycling system for harmonic generation is presented. • Non-stop operation of megahertz harmonics up to 76 h is demonstrated. • The bandwidth and pulse duration of the harmonics are discussed. - Abstract: With the recent progress in high-order harmonic generation (HHG) using femtosecond lasers, laboratory photoelectron spectroscopy with an ultrafast, widely tunable vacuum-ultraviolet light source has become available. Despite the well-established technique of HHG-based photoemission experiments at kilohertz repetition rates, the efficiency of these setups can be intrinsically limited by the space-charge effects. Here we present recent developments of compact HHG light sources for photoelectron spectroscopy at high repetition rates up to megahertz, and examples for angle-resolved photoemission experiments are demonstrated.
Level set methods for detonation shock dynamics using high-order finite elements
Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-05-26
Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
European Workshop on High Order Nonlinear Numerical Schemes for Evolutionary PDEs
Beaugendre, Héloïse; Congedo, Pietro; Dobrzynski, Cécile; Perrier, Vincent; Ricchiuto, Mario
2014-01-01
This book collects papers presented during the European Workshop on High Order Nonlinear Numerical Methods for Evolutionary PDEs (HONOM 2013) that was held at INRIA Bordeaux Sud-Ouest, Talence, France in March, 2013. The central topic is high order methods for compressible fluid dynamics. In the workshop, and in this proceedings, greater emphasis is placed on the numerical than the theoretical aspects of this scientific field. The range of topics is broad, extending through algorithm design, accuracy, large scale computing, complex geometries, discontinuous Galerkin, finite element methods, Lagrangian hydrodynamics, finite difference methods and applications and uncertainty quantification. These techniques find practical applications in such fields as fluid mechanics, magnetohydrodynamics, nonlinear solid mechanics, and others for which genuinely nonlinear methods are needed.
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational costs. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method in simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
Frequency dependence of quantum path interference in non-collinear high-order harmonic generation
Zhong Shi-Yang; He Xin-Kui; Teng Hao; Ye Peng; Wang Li-Feng; He Peng; Wei Zhi-Yi
2016-01-01
High-order harmonic generation (HHG) driven by two non-collinear beams including a fundamental and its weak second harmonic is numerically studied. The interference of harmonics from adjacent electron quantum paths is found to be dependent on the relative delay of the driving pulse, and the dependences are different for different harmonic orders. This frequency dependence of the interference is attributed to the spatial frequency chirp in the HHG beam resulting from the harmonic dipole phase, which in turn provides a potential way to gain an insight into the generation of high-order harmonics. As an example, the intensity dependent dipole phase coefficient α is retrieved from the interference fringe.
Intense multimicrojoule high-order harmonics generated from neutral atoms of In2O3 nanoparticles
Elouga Bom, L. B.; Abdul-Hadi, J.; Vidal, F.; Ozaki, T.; Ganeev, R. A.
2009-01-01
We studied high-order harmonic generation from plasma that contains an abundance of indium oxide nanoparticles. We found that harmonics from nanoparticle-containing plasma are considerably more intense than from plasma produced on the In2O3 bulk target, with high-order harmonic energy ranging from 6 μJ (for the ninth harmonic) to 1 μJ (for the 17th harmonic) in the former case. The harmonic cutoff from nanoparticles was at the 21st order, which is lower than that observed using the indium oxide solid target. By comparing the harmonic spectra obtained from solid and nanoparticle indium oxide targets, we concluded that intense harmonics in the latter case are dominantly generated from neutral atoms of the In2O3 nanoparticles.
Dependence of high order harmonics intensity on laser focal spot position in preformed plasma plumes
Singhal, H.; Ganeev, R.; Naik, P. A.; Arora, V.; Chakravarty, U.; Gupta, P. D.
2008-01-01
The dependence of the high-order harmonic intensity on the laser focal spot position in laser-produced plasma plumes is experimentally studied. High-order harmonics up to the 59th order (λ∼13.5 nm) were generated by focusing 48 fs laser pulses from a Ti:sapphire laser system in a silver plasma plume produced using 300 ps uncompressed laser radiation as the prepulse. The intensity of the harmonics nearly vanished when the best focus was located at the plume center, whereas it peaked on either side with unequal intensity. The focal spot position corresponding to the peak harmonic intensity moved away from the plume center for higher harmonic orders. The results are explained in terms of the variation of the phase mismatch between the driving laser beam and the harmonic radiation produced, the relativistic drift of electrons, and the defocusing effect due to the radial ionization gradient in the plasma for different focal spot positions.
Development of a high-order finite volume method with multiblock partition techniques
E. M. Lemos
2012-03-01
This work deals with a new numerical methodology to solve the Navier-Stokes equations, based on a finite volume method applied to structured meshes with co-located grids. High-order schemes used to approximate advective, diffusive and non-linear terms, connected with multiblock partition techniques, are the main contributions of this paper. The combination of these two techniques resulted in a computer code that achieves high accuracy due to the high-order schemes and great flexibility to generate locally refined meshes based on the multiblock approach. This computer code has been able to obtain results with accuracy higher than or equal to that of results obtained using classical procedures, with considerably less computational effort.
Enhancement of high-order harmonic generation in the presence of noise
Yavuz, I; Altun, Z [Department of Physics, Marmara University, 34722 Ziverbey, Istanbul (Turkey); Topcu, T, E-mail: ilhan.yavuz@marmara.edu.tr [Department of Physics, Auburn University, AL 36849-5311 (United States)
2011-07-14
We report on our simulations of the generation of high-order harmonics from atoms driven by an intense femtosecond laser field in the presence of noise. We numerically solve the non-perturbative stochastic time-dependent Schroedinger equation and observe how varying noise levels affect the frequency components of the high harmonic spectrum. Our calculations show that when an optimum amount of noise is present in the driving laser field, roughly a factor of 45 net enhancement can be achieved in the high-order harmonic yield, especially around the cut-off region. We observe that, for relatively weak noise, the enhancement mechanism is sensitive to the carrier-envelope phase. We also investigate the possibility of generating ultra-short intense attosecond pulses by combining the laser field and noise, and observe that a roughly four orders of magnitude enhanced isolated attosecond burst can be generated.
Highly ordered uniform single-crystal Bi nanowires: fabrication and characterization
Bisrat, Y; Luo, Z P; Davis, D; Lagoudas, D
2007-01-01
A mechanical pressure injection technique has been used to fabricate uniform bismuth (Bi) nanowires in the pores of an anodic aluminum oxide (AAO) template. The AAO template was prepared from general purity aluminum by a two-step anodization followed by heat treatment to achieve highly ordered nanochannels. The nanowires were then fabricated by an injection technique whereby the molten Bi was injected into the AAO template using a hydraulic pressure method. The Bi nanowires prepared by this method were found to be dense and continuous with uniform diameter throughout the length. Electron diffraction experiments using the transmission electron microscope on cross-sectional and free-standing longitudinal Bi nanowires showed that the majority of the individual nanowires were single crystalline, with preferred orientation of growth along the [011] zone axis of the pseudo-cubic structure. The work presented here provides an inexpensive and effective way of fabricating highly ordered single-crystalline Bi nanowires, with uniform size distributions.
Determining the minimum required uranium carbide content for HTGR UCO fuel kernels
McMurray, Jacob W.; Lindemer, Terrence B.; Brown, Nicholas R.; Reif, Tyler J.; Morris, Robert N.; Hunn, John D.
2017-01-01
Highlights: • The minimum required uranium carbide content for HTGR UCO fuel kernels is calculated. • More nuclear and chemical factors have been included for more useful predictions. • The effect of transmutation products, like Pu and Np, on the oxygen distribution is included for the first time. - Abstract: Three important failure mechanisms that must be controlled in high-temperature gas-cooled reactor (HTGR) fuel for certain higher burnup applications are SiC layer rupture, SiC corrosion by CO, and coating compromise from kernel migration. All are related to high CO pressures stemming from O release when uranium present as UO2 fissions and the O is not subsequently bound by other elements. In the HTGR kernel design, CO buildup from excess O is controlled by the inclusion of additional uranium, apart from UO2, in the form of a carbide, UCx, and this fuel form is designated UCO. Here, general oxygen balance formulas were developed for calculating the minimum UCx content to ensure negligible CO formation for 15.5% enriched UCO taken to 16.1% actinide burnup. Required input data were obtained from CALPHAD (CALculation of PHAse Diagrams) chemical thermodynamic models and the Serpent 2 reactor physics and depletion analysis tool. The results are intended to be more accurate than previous estimates by including more nuclear and chemical factors, in particular the effect of transmuted Pu and Np oxides on the oxygen distribution as the fuel kernel composition evolves with burnup.
Music recommendation according to human motion based on kernel CCA-based relationship
Ohkushi, Hiroyuki; Ogawa, Takahiro; Haseyama, Miki
2011-12-01
In this article, a method for recommendation of music pieces according to human motions based on their kernel canonical correlation analysis (CCA)-based relationship is proposed. In order to perform the recommendation between different types of multimedia data, i.e., recommendation of music pieces from human motions, the proposed method tries to estimate their relationship. Specifically, the correlation based on kernel CCA is calculated as the relationship in our method. Since human motions and music pieces have various time lengths, it is necessary to calculate the correlation between time series having different lengths. Therefore, new kernel functions for human motions and music pieces, which can provide similarities between data that have different time lengths, are introduced into the calculation of the kernel CCA-based correlation. This approach effectively provides a solution to the conventional problem of not being able to calculate the correlation from multimedia data that have various time lengths. Therefore, the proposed method can perform accurate recommendation of best matched music pieces according to a target human motion from the obtained correlation. Experimental results are shown to verify the performance of the proposed method.
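The kernel CCA correlation at the heart of the method can be sketched with a standard regularized formulation (an RBF kernel stands in for the paper's special time-length-invariant kernels for motions and music; all names and parameters below are illustrative):

```python
import numpy as np

# Regularized kernel CCA: the largest canonical correlation between two
# sample-aligned views, computed from centered Gram matrices. This is the
# textbook formulation, not the paper's motion/music kernels.
def rbf_kernel(a, b, gamma=0.5):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * d2)

def center(K):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_top_correlation(X, Y, gamma=0.5, reg=0.01):
    """Largest canonical correlation between views X and Y (rows aligned)."""
    n = X.shape[0]
    Kx = center(rbf_kernel(X, X, gamma))
    Ky = center(rbf_kernel(Y, Y, gamma))
    # Eigenproblem (Kx + kI)^-1 Ky (Ky + kI)^-1 Kx a = rho^2 a
    Rx = np.linalg.solve(Kx + reg * n * np.eye(n), Ky)
    Ry = np.linalg.solve(Ky + reg * n * np.eye(n), Kx)
    rho2 = np.max(np.linalg.eigvals(Rx @ Ry).real)
    return float(np.sqrt(max(rho2, 0.0)))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
corr_same = kcca_top_correlation(X, X.copy())   # identical views correlate strongly
```

The regularization term reg * n * I is essential: without it, kernel CCA can report spurious perfect correlation for any pair of views.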
GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.
Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin
2017-07-01
Volume reconstruction plays an important role in improving the volumetric image quality of freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor hole-filling fails to reconstruct volumes with high image quality. By contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality for speckle reduction and detail preservation can be obtained with a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and a volume with 50 million voxels in our experiment can be reconstructed within 10 seconds.
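The kernel regression reconstruction can be sketched as a zeroth-order (Nadaraya-Watson) estimate on a CPU; this toy 2D version only illustrates the weighting idea, not the paper's GPU implementation or higher-order regression, and all names and parameters are illustrative:

```python
import numpy as np

# Fill a regular grid from scattered samples by a Gaussian-weighted average
# of samples within a small window around each sample point.
def kernel_fill(coords, values, shape, bandwidth=1.0, window=2):
    """Nadaraya-Watson estimate on a 2D grid from scattered (row, col) samples."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    for (r, c), v in zip(coords, values):
        r0, r1 = max(0, int(r) - window), min(shape[0], int(r) + window + 1)
        c0, c1 = max(0, int(c) - window), min(shape[1], int(c) + window + 1)
        rr, cc = np.mgrid[r0:r1, c0:c1]
        w = np.exp(-((rr - r)**2 + (cc - c)**2) / (2.0 * bandwidth**2))
        num[r0:r1, c0:c1] += w * v
        den[r0:r1, c0:c1] += w
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 32, size=(400, 2))
vals = np.full(400, 5.0)                 # constant field: easy to verify
grid = kernel_fill(pts, vals, (32, 32))  # covered voxels should all equal 5
```

Voxels reached by no sample stay at zero, which is exactly the "hole" case the paper's method addresses with larger kernel windows.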
Kernel based orthogonalization for change detection in hyperspectral images
Nielsen, Allan Aasbjerg
… function, and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via … In the analysis, all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and to solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and, unlike kernel PCA, kernel MNF …
A laser optical method for detecting corn kernel defects
Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.
1984-01-01
An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low-power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy, while surface-split kernels were detected with about 80% accuracy.
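The reflectance-threshold classification described above can be sketched as follows, with all reflectance values invented for illustration (the abstract reports only an approximate 40% reflectance difference between good and defective surface regions):

```python
import numpy as np

# Hypothetical reflectance scans (arbitrary units): defective regions of a
# kernel surface reflect roughly 40% less than sound ones, so a simple
# threshold on the scan flags defective kernels.
rng = np.random.default_rng(1)
good = rng.normal(1.00, 0.03, size=(20, 50))   # 20 kernels, 50 scan points each
defective = good.copy()
defective[:, 20:30] *= 0.6                     # a ~40% reflectance drop

def is_defective(scan, threshold=0.8):
    """Flag a kernel if any scan point falls below the reflectance threshold."""
    return bool(np.any(scan < threshold))

labels_good = [is_defective(s) for s in good]
labels_bad = [is_defective(s) for s in defective]
```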
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws
Mohammed D. ABDULMALIK
2008-06-01
Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious software, leading to compromise of the kernel.
Difference between standard and quasi-conformal BFKL kernels
Fadin, V.S.; Fiore, R.; Papa, A.
2012-01-01
As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.
Siswoyo, Siswoyo; Sunaryo, Sunaryo
2017-01-01
This study investigated the implementation of High Order Thinking Skills in high school physics teaching, focusing on an analysis of the questions developed by teachers in Jakarta. Data were obtained during the training of physics teachers in Jakarta. The teachers attended training on how to develop physics instruction that fosters higher-order thinking skills. Then the teachers were asked to develop physics test problems as an instrument for measuring physics learning at school. Pro...
Further results on global state feedback stabilization of nonlinear high-order feedforward systems.
Xie, Xue-Jun; Zhang, Xing-Hui
2014-03-01
In this paper, by introducing a combined method of sign function, homogeneous domination and adding a power integrator, and by overcoming several troublesome obstacles in the design and analysis, the problem of state feedback control is solved for a class of nonlinear high-order feedforward systems whose nonlinearity order is relaxed to an interval rather than a fixed point.
Wilcox, Lucas C.; Stadler, Georg; Burstedde, Carsten; Ghattas, Omar
2010-01-01
We introduce a high-order discontinuous Galerkin (dG) scheme for the numerical solution of three-dimensional (3D) wave propagation problems in coupled elastic-acoustic media. A velocity-strain formulation is used, which allows for the solution of the acoustic and elastic wave equations within the same unified framework. Careful attention is directed at the derivation of a numerical flux that preserves high-order accuracy in the presence of material discontinuities, including elastic-acoustic interfaces. Explicit expressions for the 3D upwind numerical flux, derived as an exact solution for the relevant Riemann problem, are provided. The method supports h-non-conforming meshes, which are particularly effective at allowing local adaptation of the mesh size to resolve strong contrasts in the local wavelength, as well as dynamic adaptivity to track solution features. The use of high-order elements controls numerical dispersion, enabling propagation over many wave periods. We prove consistency and stability of the proposed dG scheme. To study the numerical accuracy and convergence of the proposed method, we compare against analytical solutions for wave propagation problems with interfaces, including Rayleigh, Lamb, Scholte, and Stoneley waves as well as plane waves impinging on an elastic-acoustic interface. Spectral rates of convergence are demonstrated for these problems, which include a non-conforming mesh case. Finally, we present scalability results for a parallel implementation of the proposed high-order dG scheme for large-scale seismic wave propagation in a simplified earth model, demonstrating high parallel efficiency for strong scaling to the full size of the Jaguar Cray XT5 supercomputer.
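The "numerical flux derived as an exact solution of the Riemann problem" idea can be illustrated in 1D linear acoustics (a sketch only; the paper derives the full 3D elastic-acoustic upwind flux):

```python
# For p_t + K u_x = 0, u_t + p_x / rho = 0 with impedance Z = sqrt(K * rho),
# the exact Riemann solution at an interface between (possibly different)
# media preserves the outgoing characteristic invariants: p + Z*u carried
# from the left state, p - Z*u carried from the right state.
def star_state(pL, uL, ZL, pR, uR, ZR):
    """Exact interface (star) state of the linear acoustics Riemann problem."""
    p = (ZR * pL + ZL * pR + ZL * ZR * (uL - uR)) / (ZL + ZR)
    u = (ZL * uL + ZR * uR + (pL - pR)) / (ZL + ZR)
    return p, u

# Discontinuous media, e.g. a 1D analogue of an elastic-acoustic interface.
p_star, u_star = star_state(pL=2.0, uL=0.3, ZL=1.5, pR=1.0, uR=-0.2, ZR=4.0)
```

Evaluating the physical flux at this star state gives an upwind numerical flux that remains high-order accurate across the material discontinuity, which is the 1D analogue of the property the paper establishes in 3D.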
Fast magnetic energy dissipation in relativistic plasma induced by high order laser modes
Gu, Yanjun; Yu, Q.; Klimo, Ondřej; Esirkepov, T.Z.; Bulanov, S.V.; Weber, Stefan A.; Korn, Georg
2016-01-01
Vol. 4, Jun (2016), 1-5, article no. e19. ISSN 2095-4719. R&D Projects: GA MŠk EF15_008/0000162. Grant - others: ELI Beamlines (XE) CZ.02.1.01/0.0/0.0/15_008/0000162. Institutional support: RVO:68378271. Keywords: high order laser mode * laser-plasma interaction * magnetic annihilation. Subject RIV: BL - Plasma and Gas Discharge Physics
On global exponential stability of high-order neural networks with time-varying delays
Zhang Baoyong; Xu Shengyuan; Li Yongmin; Chu Yuming
2007-01-01
This Letter investigates the problem of stability analysis for a class of high-order neural networks with time-varying delays. The delays are bounded but not necessarily differentiable. Based on the Lyapunov stability theory together with the linear matrix inequality (LMI) approach and the use of Halanay inequality, sufficient conditions guaranteeing the global exponential stability of the equilibrium point of the considered neural networks are presented. Two numerical examples are provided to demonstrate the effectiveness of the proposed stability criteria
Mamou, M.; Xu, H.; Khalid, M.
2004-01-01
The present paper contains a comprehensive literature survey on helicopter flow analyses and describes some true unsteady flows past helicopter rotors obtained using low- and high-order CFD models. The low-order model is based on a panel method coupled with a viscous boundary-layer approach and a compressibility correction. The USAERO software is used for the computations. The high-order model is based on the Euler and Navier-Stokes equations. For the high-order models, a true unsteady scheme, as implemented in the CFD-FASTRAN code using the Euler equations, is considered for flows past a hovering rotor. On the other hand, a quasi-steady approach, using the WIND code with the Navier-Stokes equations and the SST turbulence model, is used to assess the validity of the approach for the simulation of flows past a helicopter in forward flight conditions. When using the high-order models, a Chimera grid technique is used to describe the blade motions within the parent stationary grid. Comparisons with experimental data are performed, and the true unsteady simulations provide reasonable agreement with the available experimental data. The panel method and the quasi-steady approach are found to overestimate the loads on the helicopter rotors. The USAERO panel code is found to produce more thrust owing to some error sources in the computations when a wake-surface collision occurs, as the blades interact with their own wakes. The automatic cutting of the wake sheets, as they approach the model surface, does not work properly at every time step. (author)
A high-order q-difference equation for q-Hahn multiple orthogonal polynomials
Arvesú, J.; Esposito, Chiara
2012-01-01
A high-order linear q-difference equation with polynomial coefficients having q-Hahn multiple orthogonal polynomials as eigenfunctions is given. The order of the equation coincides with the number of orthogonality conditions that these polynomials satisfy. Some limiting situations are studied. Indeed, the difference equation for Hahn multiple orthogonal polynomials given in Lee [J. Approx. Theory (2007), doi:10.1016/j.jat.2007.06.002] is obtained as a limiting case.
Entropy Viscosity Method for High-Order Approximations of Conservation Laws
Guermond, J. L.; Pasquetti, R.
2010-09-17
A stabilization technique for conservation laws is presented. It introduces into the governing equations a nonlinear dissipation term that is a function of the residual of the associated entropy equation and is bounded from above by a first-order viscous term. Different two-dimensional test cases are simulated - a 2D Burgers problem, the "KPP rotating wave" and the Euler system - using high-order methods: spectral elements or Fourier expansions. Details on the tuning of the parameters controlling the entropy viscosity are given. © 2011 Springer.
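The capped entropy-viscosity construction described above can be sketched in a few lines. The following is a minimal illustration for the 1D Burgers equation on a periodic grid, with entropy pair E = u²/2, F = u³/3; the constants `c_e`, `c_max` and the normalization are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def entropy_viscosity(u_new, u_old, dt, dx, c_e=1.0, c_max=0.5):
    """Entropy viscosity for 1D Burgers, capped by a first-order term.

    Entropy pair: E = u^2/2, F = u^3/3, so smooth solutions satisfy
    E_t + F_x = 0; the residual of this equation drives the viscosity.
    """
    E_new, E_old = 0.5 * u_new**2, 0.5 * u_old**2
    residual = np.abs((E_new - E_old) / dt + np.gradient(u_new**3 / 3.0, dx))
    norm = np.max(np.abs(E_new - E_new.mean())) + 1e-14   # normalization (an assumption)
    nu_entropy = c_e * dx**2 * residual / norm            # small where the solution is smooth
    nu_first_order = c_max * dx * np.abs(u_new)           # first-order upper bound
    return np.minimum(nu_entropy, nu_first_order)
```

The key point is the final `minimum`: where the entropy residual is small (smooth regions) the added viscosity vanishes at second order, while near shocks it saturates at the first-order level.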
Research on Appraisal System of Procurator Performance by Using High-Order CFA Model
Yong-mao Huang
2014-01-01
Full Text Available The prosecutor is the main body of procuratorial organs. The performance appraisal system plays an important role in promoting the work efficiency of procurator. In this paper, we establish the performance appraisal system of procurators by high-order confirmatory factor analysis method and evaluate procurators’ performance by fuzzy comprehensive evaluation method based on the 360 degrees. The results have some help to performance management of procuratorial organs.
High order P-G finite elements for convection-dominated problems
Carmo, E.D. do; Galeao, A.C.
1989-06-01
From the error analysis presented in this paper it is shown that the CCAU method derived by Dutra do Carmo and Galeao [3] preserves the same order of approximation obtained with SUPG (cf. Brooks and Hughes [2]) when regular advection-diffusion solutions are considered, and improves the accuracy of the approximate boundary-layer solution when high-order interpolating polynomials are used near sharp layers.
High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids
2015-06-22
… for efficient CFD calculations in high-order methods [3], because grid adaptation almost necessarily introduces irregularity in the grid. In fact … problems.
References:
1. P.A. Gnoffo. Multi-dimensional, inviscid flux reconstruction for simulation of hypersonic heating on tetrahedral grids. In Proc. of …
2. K. Kitamura, E. Shima, Y. Nakamura, and P.L. Roe. Evaluation of Euler fluxes for hypersonic heating computations. AIAA J., 48(4):763-776, 2010.
3. Z.J. Wang, K…
Detecting high-order interactions of single nucleotide polymorphisms using genetic programming.
Nunkesser, Robin; Bernholt, Thorsten; Schwender, Holger; Ickstadt, Katja; Wegener, Ingo
2007-12-15
Not individual single nucleotide polymorphisms (SNPs), but high-order interactions of SNPs are assumed to be responsible for complex diseases such as cancer. Therefore, one of the major goals of genetic association studies concerned with such genotype data is the identification of these high-order interactions. This search is additionally impeded by the fact that these interactions often are explanatory only for a relatively small subgroup of patients. Most of the feature selection methods proposed in the literature, unfortunately, fail at this task, since they can either only identify individual variables or interactions of a low order, or try to find rules that are explanatory for a high percentage of the observations. In this article, we present a procedure based on genetic programming and multi-valued logic that enables the identification of high-order interactions of categorical variables such as SNPs. This method, called GPAS, can not only be used for feature selection, but can also be employed for discrimination. In an application to the genotype data from the GENICA study, an association study concerned with sporadic breast cancer, GPAS is able to identify high-order interactions of SNPs leading to a considerably increased breast cancer risk for different subsets of patients that are not found by other feature selection methods. As an application to a subset of the HapMap data shows, GPAS is not restricted to association studies comprising several tens of SNPs, but can also be employed to analyze whole-genome data. Software can be downloaded from http://ls2-www.cs.uni-dortmund.de/~nunkesser/#Software
Tsunami generation, propagation, and run-up with a high-order Boussinesq model
Fuhrman, David R.; Madsen, Per A.
2009-01-01
In this work we extend a high-order Boussinesq-type (finite difference) model, capable of simulating waves out to wavenumber times depth kh …, to landslide-induced tsunamis. The extension is straightforward, requiring only …. The Boussinesq-type model is then used to simulate numerous tsunami-type events generated from submerged landslides, in both one and two horizontal dimensions. The results again compare well against previous experiments and/or numerical simulations. The new extension complements recently developed run-…
Modulated phase matching and high-order harmonic enhancement mediated by the carrier-envelope phase
Faccio, Daniele; Serrat, Carles; Cela, Jose M.; Farres, Albert; Di Trapani, Paolo; Biegert, Jens
2010-01-01
The process of high-order harmonic generation in gases is numerically investigated in the presence of a few-cycle pulsed-Bessel-beam pump, featuring a periodic modulation in the peak intensity due to large carrier-envelope-phase mismatch. A two-decade enhancement in the conversion efficiency is observed and interpreted as the consequence of a mechanism known as a nonlinearly induced modulation in the phase mismatch.
Calculation of neutron flux and reactivity by perturbation theory at high order
Silva, W.L.P. da; Silva, F.C. da; Thome Filho, Z.D.
1982-01-01
A time-independent high-order perturbation theory is studied, applied to the calculation of integral parameters of a nuclear reactor. A perturbative formulation, based on the flux-difference technique, which gives directly the reactivity and neutron flux up to the approximation order required, is presented. As an application of the method, global perturbations represented by fuel temperature variations are used. Tests were done to verify the relevance of the approximation order for several intensities of the perturbations considered. (E.G.)
High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow
Savel'ev, A. D.
2018-02-01
On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 103 to 107. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.
Global stability of stochastic high-order neural networks with discrete and distributed delays
Wang Zidong; Fang Jianan; Liu Xiaohui
2008-01-01
High-order neural networks can be considered as an expansion of Hopfield neural networks, and have stronger approximation properties, faster convergence rates, greater storage capacity, and higher fault tolerance than lower-order neural networks. In this paper, the global asymptotic stability analysis problem is considered for a class of stochastic high-order neural networks with discrete and distributed time-delays. Based on a Lyapunov-Krasovskii functional and the stochastic stability analysis theory, several sufficient conditions are derived, which guarantee the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the stochastic high-order delayed neural networks under consideration are globally asymptotically stable in the mean square if two linear matrix inequalities (LMIs) are feasible, where the feasibility of the LMIs can be readily checked by the Matlab LMI toolbox. It is also shown that the main results in this paper cover some recently published works. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
Jiang, Zhen-Hua; Yan, Chao; Yu, Jian
2013-08-01
Two types of implicit algorithms have been improved for high-order discontinuous Galerkin (DG) methods to solve compressible Navier-Stokes (NS) equations on triangular grids. A block lower-upper symmetric Gauss-Seidel (BLU-SGS) approach is implemented as a nonlinear iterative scheme. And a modified LU-SGS (LLU-SGS) approach is suggested to reduce the memory requirements while retaining the good convergence performance of the original LU-SGS approach. Both implicit schemes have the significant advantage that only the diagonal block matrix is stored. The resulting implicit high-order DG methods are applied, in combination with Hermite weighted essentially non-oscillatory (HWENO) limiters, to solve viscous flow problems. Numerical results demonstrate that the present implicit methods achieve significant efficiency improvements over their explicit counterparts, and that for viscous flows with shocks the HWENO limiters can be used to achieve the desired essentially non-oscillatory shock transition and the designed high-order accuracy simultaneously.
A high-order mode extended interaction klystron at 0.34 THz
Wang, Dongyang; Wang, Guangqiang; Wang, Jianguo; Li, Shuang; Zeng, Peng; Teng, Yan
2017-02-01
We propose the concept of a high-order mode extended interaction klystron (EIK) for the terahertz band. Compared to a conventional fundamental-mode EIK, it operates at the TM31-2π mode, and its remarkable advantage is a large structure combined with good performance. The proposed EIK consists of five identical cavities with five gaps in each cavity. A method to suppress mode competition and self-oscillation in the high-order-mode cavity is discussed. Particle-in-cell simulation demonstrates that the EIK indeed operates at the TM31-2π mode without self-oscillation while other modes are well suppressed. Driven by an electron beam with a voltage of 15 kV and a current of 0.3 A, a saturation gain of 43 dB and an output power of 60 W are achieved at the center frequency of 342.4 GHz. An EIK operating at a high-order mode thus seems a promising approach to generating high-power terahertz waves.
Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling
Ioana Cornel
2005-01-01
Full Text Available The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by the error-propagation effect, which drastically limits the order of the polynomial approximation. This phenomenon acts especially when high-order polynomial modeling is needed: representation of digital modulation signals or acoustic transient signals. This effect is caused by the technique used for polynomial order reduction, common to existing approaches: signal multiplication with the complex conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations, one for each polynomial order. We prove that this method considerably reduces the effect of error propagation. Namely, with this order-reduction method, the estimation error at a given order depends only on the performance of the estimation method.
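As a concrete illustration of the basic single-lag HAF idea that the multi-lag and unitary-transformation variants refine, the sketch below estimates the leading coefficient of a noise-free quadratic-phase signal; the function name and parameter choices are hypothetical:

```python
import numpy as np

def haf_estimate_a2(signal, tau):
    """Estimate a2 for s[n] = exp(j * a2 * n^2) via a second-order HAF.

    The lag product s[n] * conj(s[n - tau]) has phase a2*(2*n*tau - tau^2),
    i.e. it is a pure sinusoid of frequency 2*a2*tau, located by an FFT peak.
    """
    lag_product = signal[tau:] * np.conj(signal[:-tau])
    nfft = 1 << 18                          # zero-pad for a fine frequency grid
    k = np.argmax(np.abs(np.fft.fft(lag_product, nfft)))
    omega = 2 * np.pi * k / nfft
    if omega > np.pi:                       # map to (-pi, pi]
        omega -= 2 * np.pi
    return omega / (2 * tau)

# demo: recover the phase coefficient of a synthetic chirp
n = np.arange(512)
a2_true = 1.7e-4
a2_hat = haf_estimate_a2(np.exp(1j * a2_true * n**2), tau=64)
```

The error-propagation problem the abstract describes appears as soon as this estimate is used to demodulate the signal before estimating the next-lower coefficient: any error in `a2_hat` contaminates every subsequent stage.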
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
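The empirical order-of-convergence measurement mentioned above can be illustrated on a uniform periodic grid (rather than a refined SAMR hierarchy): halve the mesh spacing and the ratio of successive errors gives the observed order. The fourth-order stencil and function names here are illustrative, not the paper's discretization:

```python
import numpy as np

def deriv4(f_vals, h):
    """Fourth-order central difference for f' on a periodic grid:
    f'(x_i) ≈ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h)."""
    return (-np.roll(f_vals, -2) + 8 * np.roll(f_vals, -1)
            - 8 * np.roll(f_vals, 1) + np.roll(f_vals, 2)) / (12 * h)

def observed_orders(n_coarse=32):
    """Empirical convergence orders from successive mesh halvings:
    err ~ C h^p, so p ≈ log2(err(h) / err(h/2))."""
    errs = []
    for n in (n_coarse, 2 * n_coarse, 4 * n_coarse):
        x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
        h = x[1] - x[0]
        errs.append(np.max(np.abs(deriv4(np.sin(x), h) - np.cos(x))))
    return [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

On a refined mesh the same measurement exposes the interpolation-order requirement the paper derives: if the coarse-fine interpolation is of too low an order, the observed `p` drops below the stencil's nominal order.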
photon-plasma: A modern high-order particle-in-cell code
Haugbølle, Troels; Frederiksen, Jacob Trier; Nordlund, Åke
2013-01-01
We present the photon-plasma code, a modern high-order charge-conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high-order implicit field solver and a novel high-order charge-conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostics tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks of particle heating, cold-beam instability, and two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks.
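For orientation, the sketch below shows the lowest-order (cloud-in-cell) member of the particle-to-grid deposition family that such codes generalize to high order; it is not the paper's high-order charge-conserving scheme, and all names are illustrative:

```python
import numpy as np

def deposit_charge_cic(positions, weights, n_cells, dx):
    """First-order (cloud-in-cell) charge deposition on a periodic 1D grid.

    Each particle's charge is shared linearly between its two nearest
    grid points, so the total deposited charge is conserved exactly.
    """
    rho = np.zeros(n_cells)
    xi = positions / dx               # position in cell units
    i0 = np.floor(xi).astype(int)     # left grid point
    frac = xi - i0                    # fractional distance to it
    np.add.at(rho, i0 % n_cells, weights * (1.0 - frac))
    np.add.at(rho, (i0 + 1) % n_cells, weights * frac)
    return rho / dx                   # charge density
```

Higher-order members of this family replace the linear weights with wider spline shape functions, which is what reduces grid noise and aliasing in codes like the one described above.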
Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.
Huang, Chengdai; Cao, Jinde
2018-02-01
The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately uncover the dynamical properties of neural networks, but results for fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay. A first attempt is made to tackle the stability and bifurcation of high-order fractional BAM neural networks with time delay in the leakage terms. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for such a system without leakage delay are acquired. Comparative analysis reveals that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which cannot be overlooked. Numerical examples are exhibited to attest the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Balsara, Dinshaw S.; Dumbser, Michael
2015-10-01
Several advances have been reported in the recent literature on divergence-free finite volume schemes for Magnetohydrodynamics (MHD). Almost all of these advances are restricted to structured meshes. To retain full geometric versatility, however, it is also very important to make analogous advances in divergence-free schemes for MHD on unstructured meshes. Such schemes utilize a staggered Yee-type mesh, where all hydrodynamic quantities (mass, momentum and energy density) are cell-centered, while the magnetic fields are face-centered and the electric fields, which are so useful for the time update of the magnetic field, are centered at the edges. Three important advances are brought together in this paper in order to make it possible to have high order accurate finite volume schemes for the MHD equations on unstructured meshes. First, it is shown that a divergence-free WENO reconstruction of the magnetic field can be developed for unstructured meshes in two and three space dimensions using a classical cell-centered WENO algorithm, without the need to do a WENO reconstruction for the magnetic field on the faces. This is achieved via a novel constrained L2-projection operator that is used in each time step as a postprocessor of the cell-centered WENO reconstruction so that the magnetic field becomes locally and globally divergence free. Second, it is shown that recently-developed genuinely multidimensional Riemann solvers (called MuSIC Riemann solvers) can be used on unstructured meshes to obtain a multidimensionally upwinded representation of the electric field at each edge. Third, the above two innovations work well together with a high order accurate one-step ADER time stepping strategy, which requires the divergence-free nonlinear WENO reconstruction procedure to be carried out only once per time step. The resulting divergence-free ADER-WENO schemes with MuSIC Riemann solvers give us an efficient and easily-implemented strategy for divergence-free MHD on
Analytic scattering kernels for neutron thermalization studies
Sears, V.F.
1990-01-01
Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
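A minimal sketch of the quantization idea described above, assuming a Gaussian kernel and Euclidean quantization in the input space (function name, defaults, and merge rule are written from the abstract's description, not the authors' code):

```python
import numpy as np

def qklms(X, y, step=0.2, sigma=0.5, eps=0.3):
    """Quantized kernel LMS sketch: if a new input lies within
    quantization size `eps` of an existing center, its error update is
    merged into that center's coefficient instead of growing the
    dictionary (this is the 'redundant data' reuse from the abstract)."""
    centers, alphas = [], []

    def predict(x):
        if not centers:
            return 0.0
        d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
        return float(np.dot(alphas, np.exp(-d2 / (2 * sigma**2))))

    for x, target in zip(X, y):
        err = target - predict(x)
        if centers:
            d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
            j = int(np.argmin(d2))
            if np.sqrt(d2[j]) <= eps:
                alphas[j] += step * err    # merge into the closest center
                continue
        centers.append(np.array(x, dtype=float))
        alphas.append(step * err)          # standard KLMS growth step
    return centers, alphas, predict
```

Because a new center is added only when it is more than `eps` from all existing ones, the dictionary size is bounded by a covering of the input region, independent of the number of samples.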
Kernel-based tests for joint independence
Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard
2018-01-01
We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test …
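The dHSIC statistic and the permutation test can be sketched as follows, assuming Gaussian kernels and the standard biased (V-statistic) estimator; names and defaults are illustrative:

```python
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gaussian-kernel Gram matrix for one variable's samples."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def dhsic(samples, sigma=1.0):
    """Biased empirical dHSIC for a list of d equal-length sample arrays:
    mean of the elementwise Gram product, plus the product of Gram means,
    minus twice the mean of the row-mean products."""
    grams = [gaussian_gram(s, sigma) for s in samples]
    term1 = np.mean(np.prod(grams, axis=0))
    term2 = np.prod([g.mean() for g in grams])
    term3 = 2.0 * np.mean(np.prod([g.mean(axis=1) for g in grams], axis=0))
    return term1 + term2 - term3

def dhsic_permutation_test(samples, n_perm=200, sigma=1.0, seed=0):
    """Permutation test: permuting every variable except the first
    independently simulates the null of joint independence."""
    rng = np.random.default_rng(seed)
    stat = dhsic(samples, sigma)
    null = [dhsic([samples[0]] + [rng.permutation(s) for s in samples[1:]], sigma)
            for _ in range(n_perm)]
    return stat, (1 + sum(v >= stat for v in null)) / (1 + n_perm)
```

For d = 2 this reduces to the familiar biased HSIC estimator; the three-term structure is the direct generalization of the squared-distance formula between the joint embedding and the product of marginal embeddings.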
Wilson Dslash Kernel From Lattice QCD Optimization
Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on regular Xeon architectures as well.
A Kernel for Protein Secondary Structure Prediction
Guermeur , Yann; Lifchitz , Alain; Vert , Régis
2004-01-01
Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in…
Scalar contribution to the BFKL kernel
Gerasimov, R. E.; Fadin, V. S.
2010-01-01
The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is identified; it has a common nature for particles of any spin. Understanding this reason permits obtaining the total contribution without the complicated calculations that are necessary for finding the separate pieces.
Weighted Bergman Kernels for Logarithmic Weights
Engliš, Miroslav
2010-01-01
Vol. 6, No. 3 (2010), pp. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords: Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/
Heat kernels and zeta functions on fractals
Dunne, Gerald V
2012-01-01
On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)
Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G
2012-10-01
Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process of low-order neural representation of sensory stimuli becoming integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information and integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.
A comparison of high-order polynomial and wave-based methods for Helmholtz problems
Lieu, Alice; Gabard, Gwénaël; Bériot, Hadrien
2016-09-01
The application of computational modelling to wave propagation problems is hindered by the dispersion error introduced by the discretisation. Two common strategies to address this issue are to use high-order polynomial shape functions (e.g. hp-FEM), or to use physics-based, or Trefftz, methods where the shape functions are local solutions of the problem (typically plane waves). Both strategies have been actively developed over the past decades and both have demonstrated their benefits compared to conventional finite-element methods, but they have yet to be compared. In this paper a high-order polynomial method (p-FEM with Lobatto polynomials) and the wave-based discontinuous Galerkin method are compared for two-dimensional Helmholtz problems. A number of different benchmark problems are used to perform a detailed and systematic assessment of the relative merits of these two methods in terms of interpolation properties, performance and conditioning. It is generally assumed that a wave-based method naturally provides better accuracy compared to polynomial methods since the plane waves or Bessel functions used in these methods are exact solutions of the Helmholtz equation. Results indicate that this expectation does not necessarily translate into a clear benefit, and that the differences in performance, accuracy and conditioning are more nuanced than generally assumed. The high-order polynomial method can in fact deliver comparable, and in some cases superior, performance compared to the wave-based DGM. In addition to benchmarking the intrinsic computational performance of these methods, a number of practical issues associated with realistic applications are also discussed.
Large-eddy simulation in a mixing tee junction: High-order turbulent statistics analysis
Howard, Richard J.A.; Serre, Eric
2015-01-01
Highlights: • Mixing and thermal fluctuations in a junction are studied using large eddy simulation. • Adiabatic and conducting steel wall boundaries are tested. • Wall thermal fluctuations are not the same between the flow and the solid. • Solid thermal fluctuations cannot be predicted from the fluid thermal fluctuations. • High-order turbulent statistics show that the turbulent transport term is important. - Abstract: This study analyses the mixing and thermal fluctuations induced in a mixing tee junction with circular cross-sections when cold water flowing in a pipe is joined by hot water from a branch pipe. This configuration is representative of industrial piping systems in which temperature fluctuations in the fluid may cause thermal fatigue damage on the walls. Implicit large-eddy simulations (LES) are performed for equal inflow rates corresponding to a bulk Reynolds number Re = 39,080. Two different thermal boundary conditions are studied for the pipe walls: an insulating adiabatic boundary and a conducting steel wall boundary. The predicted flow structures show a satisfactory agreement with the literature. The velocity and thermal fields (including high-order statistics) are not affected by the heat transfer with the steel walls. However, predicted thermal fluctuations at the boundary are not the same between the flow and the solid, showing that solid thermal fluctuations cannot be predicted from knowledge of the fluid thermal fluctuations alone. The analysis of high-order turbulent statistics provides a better understanding of the turbulence features. In particular, the budgets of the turbulent kinetic energy and temperature variance allow a comparative analysis of dissipation, production and transport terms. It is found that the turbulent transport term is an important term that acts to balance the production. We therefore use a priori tests to evaluate three different models for the triple correlation
Spectrally accurate contour dynamics
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam
2018-05-21
Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results
Effects of high-order deformation on high-K isomers in superheavy nuclei
Liu, H. L.; Bertulani, C. A.; Xu, F. R.; Walker, P. M.
2011-01-01
Using, for the first time, configuration-constrained potential-energy-surface calculations with the inclusion of β₆ deformation, we find remarkable effects of the high-order deformation on the high-K isomers in ²⁵⁴No, the focus of recent spectroscopy experiments on superheavy nuclei. For shapes with multipolarity six, the isomers are more tightly bound and, microscopically, have enhanced deformed shell gaps at N=152 and Z=100. The inclusion of β₆ deformation significantly improves the description of the very heavy high-K isomers.
Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing
Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon
2016-01-01
In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitations of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction...... problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high-accuracy solution both objectively, in terms of prediction gain, as well as with perceptually relevant measures, when...... evaluated in a speech reconstruction application....
Synchronization of a coupled Hodgkin-Huxley neurons via high order sliding-mode feedback
Aguilar-Lopez, R. [Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana, Av. San Pablo No. 180, Reynosa-Tamaulipas, 02200 Azcapotzalco, Mexico, D.F. (Mexico)], E-mail: raguilar@correo.azc.uam.mx; Martinez-Guerra, R. [Departamento de Control Automatico, CINVESTAV-IPN, Apartado Postal 14-740, Mexico, D.F. C.P. 07360 (Mexico)], E-mail: rguerra@ctrl.cinvestav.mx
2008-07-15
This work deals with the synchronization of two coupled Hodgkin-Huxley (H-H) neurons, where the master neuron possesses inner noise and the slave neuron is considered in a resting state (without inner noise) and an exciting state (with inner noise). The synchronization procedure is done via a feedback control, considering a class of high-order sliding-mode controllers which provide chattering reduction and finite-time synchronization convergence, with a satisfactory performance. Theoretical analysis is done in order to show the closed-loop stability of the proposed controller and the calculated finite time for convergence. The main results are illustrated via numerical experiments.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations. The latter equation is solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet and other methods.
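The pipeline the abstract describes (Legendre truncated series, Gauss-Legendre quadrature, collocation, Gauss elimination) can be sketched on a toy Fredholm integro-differential equation. The kernel, right-hand side and exact solution below are illustrative choices for the sketch, not examples from the paper:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy problem:  u'(x) = f(x) + int_{-1}^{1} K(x,t) u(t) dt,   u(-1) = 1,
# with K(x,t) = x*t and f(x) = 2x, whose exact solution is u(x) = x^2.
N = 8                                   # truncation order of the Legendre series
tq, wq = L.leggauss(16)                 # quadrature nodes and weights

K = lambda x, t: x * t                  # with this K the integral term vanishes
f = lambda x: 2.0 * x                   # consistent with u(x) = x^2

xc, _ = L.leggauss(N)                   # interior collocation points
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i, x in enumerate(xc):
    for n in range(N + 1):
        c = np.zeros(N + 1); c[n] = 1.0
        dPn = L.legval(x, L.legder(c))                   # P_n'(x)
        integ = np.sum(wq * K(x, tq) * L.legval(tq, c))  # int K(x,t) P_n(t) dt
        A[i, n] = dPn - integ
    b[i] = f(x)
for n in range(N + 1):                  # boundary-condition row: u(-1) = 1
    c = np.zeros(N + 1); c[n] = 1.0
    A[N, n] = L.legval(-1.0, c)
b[N] = 1.0

coef = np.linalg.solve(A, b)            # the Gauss elimination step
u = lambda x: L.legval(x, coef)
print(abs(u(0.5) - 0.25))               # residual against the exact solution
```

Since the exact solution is a polynomial inside the truncated basis, the collocation system reproduces it to round-off.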
A high-order solver for aerodynamic flow simulations and comparison of different numerical schemes
Mikhaylov, Sergey; Morozov, Alexander; Podaruev, Vladimir; Troshin, Alexey
2017-11-01
An implementation of a high-order accurate Discontinuous Galerkin method is presented. Reconstruction is done for the conservative variables. Gradients are calculated using the BR2 method. Coordinate transformations are done by serendipity elements. In computations with schemes of order higher than 2, the curvature of the mesh lines is taken into account. A comparison with finite volume methods is performed, including a WENO method with linear weights and a single quadrature point on a cell side. The results of the following classical tests are presented: subsonic flow around a circular cylinder in an ideal gas, convection of a two-dimensional isentropic vortex, and decay of the Taylor-Green vortex.
Orbiting binary black hole evolutions with a multipatch high order finite-difference approach
Pazos, Enrique; Tiglio, Manuel; Duez, Matthew D.; Kidder, Lawrence E.; Teukolsky, Saul A.
2009-01-01
We present numerical simulations of orbiting black holes for around 12 cycles, using a high order multipatch approach. Unlike some other approaches, the computational speed scales almost perfectly for thousands of processors. Multipatch methods are an alternative to adaptive mesh refinement, with benefits of simplicity and better scaling for improving the resolution in the wave zone. The results presented here pave the way for multipatch evolutions of black hole-neutron star and neutron star-neutron star binaries, where high resolution grids are needed to resolve details of the matter flow.
Application of organic compounds for high-order harmonic generation of ultrashort pulses
Ganeev, R. A.
2016-02-01
The studies of the high-order nonlinear optical properties of a few organic compounds (polyvinyl alcohol, polyethylene, sugar, coffee, and leaf) are reported. Harmonic generation in the laser-produced plasmas containing the molecules and large particles of the above materials is demonstrated. These studies showed that the harmonic distributions and harmonic cutoffs from organic compound plasmas were similar to those from graphite ablation. The characteristic feature of the observed harmonic spectra was the presence of blue-sided lobes near the lower-order harmonics.
Guiding of low-energy electrons by highly ordered Al2 O3 nanocapillaries
Milosavljević, A.R.; Víkor, G.; Pešić, Z.D.
2007-01-01
We report an experimental study of guided transmission of low-energy (200-350 eV) electrons through highly ordered Al2 O3 nanocapillaries with large aspect ratio (140 nm diameter and 15 μm length). The nanochannel array was prepared using self-ordering phenomena during a two-step anodization...... process of a high-purity aluminum foil. The experimental results clearly show the existence of the guiding effect, as found for highly charged ions. The guiding of the electron beam was observed for tilt angles up to 12°. As seen for highly charged ions, the guiding efficiency increases with decreasing...
HIERtalker: A default hierarchy of high order neural networks that learns to read English aloud
An, Z.G.; Mniszewski, S.M.; Lee, Y.C.; Papcun, G.; Doolen, G.D.
1988-01-01
A new learning algorithm based on a default hierarchy of high order neural networks has been developed that is able to generalize as well as handle exceptions. It learns the ''building blocks'' or clusters of symbols in a stream that appear repeatedly and convey certain messages. The default hierarchy prevents a combinatoric explosion of rules. A simulator of such a hierarchy, HIERtalker, has been applied to the conversion of English words to phonemes. Achieved accuracy is 99% for trained words and ranges from 76% to 96% for sets of new words. 8 refs., 4 figs., 1 tab.
FEA identification of high order generalized equivalent circuits for MF high voltage transformers
Candolfi, Sylvain; Cros, Jérôme; Aguglia, Davide
2015-01-01
This paper presents a specific methodology to derive high order generalized equivalent circuits from electromagnetic finite element analysis for high voltage medium frequency and pulse transformers by splitting the main windings in an arbitrary number of elementary windings. With this modeling approach, the dynamic model of the transformer over a large bandwidth is improved and the order of the generalized equivalent circuit can be adapted to a specified bandwidth. This efficient tool can be used by the designer to quantify the influence of the local structure of transformers on their dynamic behavior. The influence of different topologies and winding configurations is investigated. Several application examples and an experimental validation are also presented.
Orbital angular momentum of a high-order Bessel light beam
Volke-Sepulveda, K; Garces-Chavez, V; Chavez-Cerda, S; Arlt, J; Dholakia, K
2002-01-01
The orbital angular momentum density of Bessel beams is calculated explicitly within a rigorous vectorial treatment. This allows us to investigate some aspects that have not been analysed previously, such as the angular momentum content of azimuthally and radially polarized beams. Furthermore, we demonstrate experimentally the mechanical transfer of orbital angular momentum to trapped particles in optical tweezers using a high-order Bessel beam. We set transparent particles of known dimensions into rotation, where the sense of rotation can be reversed by changing the sign of the singularity. Quantitative results are obtained for rotation rates. This paper's animations are available from the Multimedia Enhancements page
High-order FDTD methods for transverse electromagnetic systems in dispersive inhomogeneous media.
Zhao, Shan
2011-08-15
This Letter introduces a novel finite-difference time-domain (FDTD) formulation for solving transverse electromagnetic systems in dispersive media. Based on the auxiliary differential equation approach, the Debye dispersion model is coupled with Maxwell's equations to derive a supplementary ordinary differential equation for describing the regularity changes in electromagnetic fields at the dispersive interface. The resulting time-dependent jump conditions are rigorously enforced in the FDTD discretization by means of the matched interface and boundary scheme. High-order convergences are numerically achieved for the first time in the literature in the FDTD simulations of dispersive inhomogeneous media. © 2011 Optical Society of America
High Order Finite Element Method for the Lambda modes problem on hexagonal geometry
Gonzalez-Pintor, S.; Ginestar, D.; Verdu, G.
2009-01-01
A High Order Finite Element Method to approximate the Lambda modes problem for reactors with hexagonal geometry has been developed. This method is based on the expansion of the neutron flux in terms of the modified Dubiner's polynomials on a triangular mesh. This mesh is fixed, and the accuracy of the method is improved by increasing the degree of the polynomial expansions without the necessity of remeshing. The performance of the method has been tested by obtaining the dominant Lambda modes of different 2D reactor benchmark problems.
Hejlesen, Mads Mølholm
ring dynamics is presented based on the alignment of the vorticity vector with the principal axis of the strain rate tensor. A novel iterative implementation of the Brinkman penalisation method is introduced for the enforcement of a fluid-solid interface in re-meshed vortex methods. The iterative scheme...... is included to explicitly fulfil the kinematic constraints of the flow field. The high-order, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence and eventual destruction of a single vortex ring. From the simulation data, a novel analysis on the vortex...
Fabrication of Highly Ordered Anodic Aluminium Oxide Templates on Silicon Substrates
2007-01-01
followed by the first anodisation step at 40 V in a 0.3 M oxalic acid at 10 °C for several hours. After chemically removing the anodised Al in the...M phosphoric acid or by dry-etching using chlorine-based gases. For a second method of forming a highly ordered nanopore array in a thin Al film on...together, we apply a wet-etching process, using a mixture of 6% H3PO4 and 1.8% CrO3 with a dispersant of polymethacrylic acid or Gum Arabic, which we developed
High-order Boussinesq-type modelling of nonlinear wave phenomena in deep and shallow water
Madsen, Per A.; Fuhrman, David R.
2010-01-01
In this work, we start with a review of the development of Boussinesq theory for water waves covering the period from 1872 to date. Previous reviews have been given by Dingemans [1], Kirby [2,3], and Madsen & Schäffer [4]. Next, we present our most recent high-order Boussinesq-type formulation valid for f...... from an undular sea bed; (8) Run-up of non-breaking solitary waves on a beach; and (9) Tsunami generation from submerged landslides....
Temporally coherent x-ray laser with the high order harmonic light
Hasegawa, Noboru; Kawachi, Tetsuya; Kishimoto, Maki; Sukegawa, Kouta; Tanaka, Momoko; Ochi, Yoshihiro; Nishikino, Masaharu; Kawazome, Hayato; Nagashima, Keisuke
2005-01-01
We obtained a neon-like manganese x-ray laser by injecting high-order harmonic light as a seed x-ray at a wavelength of 26.9 nm, for the purpose of generating a temporally coherent x-ray laser. The x-ray amplifier, which has a quite narrow spectral width, selected and amplified the temporally coherent mode of the harmonic light. The mode-selected harmonic light was a nearly transform-limited pulse, and the x-ray laser obtained with this seed is expected to be nearly temporally coherent. (author)
Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José
2013-11-18
We present a high-order UWB pulse generator based on a microwave photonic filter which provides a set of positive and negative samples by slicing an incoherent optical source and using the phase inversion in a Mach-Zehnder modulator. The simple scalability and high reconfigurability of the system permit better compliance with the FCC requirements. Moreover, the proposed scheme permits an easy adaptation to pulse amplitude modulation, bi-phase modulation, pulse shape modulation and pulse position modulation. The flexibility of the scheme in being adaptable to multilevel modulation formats makes it possible to increase the transmission bit rate by using hybrid modulation formats.
Computation of nonlinear water waves with a high-order Boussinesq model
Fuhrman, David R.; Madsen, Per A.; Bingham, Harry
2005-01-01
Computational highlights from a recently developed high-order Boussinesq model are shown. The model is capable of treating fully nonlinear waves (up to the breaking point) out to dimensionless depths of (wavenumber times depth) kh \approx 25. Cases considered include the study of short......-crested waves in shallow/deep water, resulting in hexagonal/rectangular surface patterns; crescent waves, resulting from unstable perturbations of plane progressive waves; and highly-nonlinear wave-structure interactions. The emphasis is on physically demanding problems, and in each case qualitative and (when...
Finite-time output feedback stabilization of high-order uncertain nonlinear systems
Jiang, Meng-Meng; Xie, Xue-Jun; Zhang, Kemei
2018-06-01
This paper studies the problem of finite-time output feedback stabilization for a class of high-order nonlinear systems with an unknown output function and control coefficients. Under the weaker assumption that the output function is only continuous, by using the homogeneous domination method together with the adding-a-power-integrator method and by introducing a new analysis method, the maximal open sector Ω of the output function is given. As long as the output function belongs to any closed sector included in Ω, an output feedback controller can be developed to guarantee global finite-time stability of the closed-loop system.
Tunneling-induced shift of the cutoff law for high-order above-threshold ionization
Lai, X. Y.; Quan, W.; Liu, X.
2011-01-01
We investigate the cutoff law for high-order above-threshold ionization (HATI) within a semiclassical framework. By explicitly adopting the tunneling effect and considering the initial position shift of the tunneled electron from the origin in the model, the cutoff energy position in the HATI spectrum exhibits a well-defined upshift from the simple-man model prediction. The comparison between numerical results from our improved semiclassical model and the quantum-orbit theory shows good agreement for small values of the Keldysh parameter γ, implying the important role of the inherent quantum tunneling effect in HATI dynamics.
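The simple-man baseline against which the tunneling-induced upshift is measured can be reproduced numerically. The following sketch (assumed units with E0 = ω = 1, so Up = 1/4) scans birth phases of an electron born at rest at the origin and recovers the classical backscattering cutoff near 10 Up:

```python
import numpy as np

# Simple-man model: electron born at rest at phase phi0 in E(t) = cos(t),
# returning to its birth point and backscattering by 180 degrees acquires
# drift momentum 2*sin(t_r) - sin(phi0).
Up = 0.25

def drift_energy(phi0, nstep=20000):
    t = np.linspace(phi0, phi0 + 4 * np.pi, nstep)
    v = -(np.sin(t) - np.sin(phi0))                # velocity, a(t) = -cos(t)
    x = np.concatenate(([0.0],
        np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    ret = np.where(x[2:] * x[1] < 0.0)[0]          # first recrossing of x = 0
    if len(ret) == 0:
        return 0.0                                 # trajectory never returns
    tr = t[ret[0] + 2]                             # return time
    p_drift = 2.0 * np.sin(tr) - np.sin(phi0)      # momentum after backscatter
    return 0.5 * p_drift ** 2

Emax = max(drift_energy(p) for p in np.linspace(0.0, 2 * np.pi, 600))
print(Emax / Up)                                   # close to the classical 10 Up
```

The tunneling-induced position shift studied in the paper would move this cutoff upward; the sketch only reproduces the zero-shift reference value.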
Enhancement of high-order harmonics in a plasma waveguide formed in clustered Ar gas.
Geng, Xiaotao; Zhong, Shiyang; Chen, Guanglong; Ling, Weijun; He, Xinkui; Wei, Zhiyi; Kim, Dong Eon
2018-02-05
Generation of high-order harmonics (HHs) is intensified by using a plasma waveguide created by a laser in a clustered gas jet. The formation of a plasma waveguide and the guiding of a laser beam are also demonstrated. Compared to the case without a waveguide, harmonics were strengthened up to nine times, and blue-shifted. Numerical simulation by solving the time-dependent Schrödinger equation in the strong-field approximation agreed well with the experimental results. This result reveals that the strengthening is the result of improved phase matching and that the blue shift is a result of a change in the fundamental laser frequency due to self-phase modulation (SPM).
Propagation effects in the generation process of high-order vortex harmonics.
Zhang, Chaojin; Wu, Erheng; Gu, Mingliang; Liu, Chengpu
2017-09-04
We numerically study the propagation of a Laguerre-Gaussian beam through polar molecular media via the exact solution of the full-wave Maxwell-Bloch equations, where the rotating-wave and slowly-varying-envelope approximations are not invoked. It is found that, beyond the coexistence of odd-order and even-order vortex harmonics due to the inversion asymmetry of the system, the light propagation effect results in the intensity enhancement of high-order vortex harmonics. Moreover, orbital angular momentum successfully transfers from the fundamental laser driver to the vortex harmonics, whose topological charge number is directly proportional to their order.
Application of the Arbitrarily High Order Method to Coupled Electron Photon Transport
Duo, Jose Ignacio
2004-01-01
This work is about the application of the Arbitrary High Order Nodal Method to coupled electron-photon transport. A Discrete Ordinates code was enhanced and validated, which permitted evaluating the advantages of using a variable spatial development order per particle. The results obtained using variable spatial development and adaptive mesh refinement following an a posteriori error estimator are encouraging. Photon spectra for a clinical accelerator target, and dose and charge deposition profiles, are simulated in one-dimensional problems using cross sections generated with the CEPXS code. Our results are in good agreement with the ONELD and MCNP codes.
Neyra, E.; Videla, F.; Ciappina, M. F.; Pérez-Hernández, J. A.; Roso, L.; Lewenstein, M.; Torchia, G. A.
2018-03-01
We study high-order harmonic generation (HHG) in model atoms driven by plasmonic-enhanced fields. These fields result from the illumination of plasmonic nanostructures by few-cycle laser pulses. We demonstrate that the spatial inhomogeneous character of the laser electric field, in a form of Gaussian-shaped functions, leads to an unexpected relationship between the HHG cutoff and the laser wavelength. Precise description of the spatial form of the plasmonic-enhanced field allows us to predict this relationship. We combine the numerical solutions of the time-dependent Schrödinger equation (TDSE) with the plasmonic-enhanced electric fields obtained from 3D finite element simulations. We additionally employ classical simulations to supplement the TDSE outcomes and characterize the extended HHG spectra by means of their associated electron trajectories. A proper definition of the spatially inhomogeneous laser electric field is instrumental to accurately describe the underlying physics of HHG driven by plasmonic-enhanced fields. This characterization opens up new perspectives for HHG control with various experimental nano-setups.
Lebon, G S Bruno; Tzanakis, I; Djambazov, G; Pericleous, K; Eskin, D G
2017-07-01
To address difficulties in treating large volumes of liquid metal with ultrasound, a fundamental study of acoustic cavitation in liquid aluminium, expressed in an experimentally validated numerical model, is presented in this paper. To improve the understanding of the cavitation process, a non-linear acoustic model is validated against reference water pressure measurements from acoustic waves produced by an immersed horn. A high-order method is used to discretize the wave equation in both space and time. These discretized equations are coupled to the Rayleigh-Plesset equation using two different time scales to couple the bubble and flow scales, resulting in a stable, fast, and reasonably accurate method for the prediction of acoustic pressures in cavitating liquids. This method is then applied to the context of treatment of liquid aluminium, where it predicts that the most intense cavitation activity is localised below the vibrating horn and estimates the acoustic decay below the sonotrode with reasonable qualitative agreement with experimental data. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
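The bubble-scale ingredient of such a coupled model can be illustrated in isolation: a sketch integrating the Rayleigh-Plesset equation for one bubble under a sinusoidal acoustic drive. All parameters are illustrative water-like values; the paper's two-time-scale coupling to the high-order wave solver is not reproduced here:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 998.0, 1.0e-3, 0.0725    # density, viscosity, surface tension
p0, kappa = 101325.0, 1.4                 # ambient pressure, polytropic index
R0 = 5e-6                                 # equilibrium bubble radius [m]
pa, f = 50e3, 30e3                        # drive amplitude [Pa], frequency [Hz]

def rp(t, y):
    # Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_wall - p_inf)/rho
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_wall = p_gas - 2 * sigma / R - 4 * mu * Rdot / R
    p_inf = p0 + pa * np.sin(2 * np.pi * f * t)
    Rddot = ((p_wall - p_inf) / rho - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp, (0.0, 5.0 / f), [R0, 0.0], rtol=1e-8, atol=1e-12,
                max_step=1.0 / (200.0 * f))
R = sol.y[0]
print(R.min() > 0.0 and np.isfinite(R).all())   # bounded radial oscillation
```

Driven well below its Minnaert resonance, this bubble responds quasi-statically and oscillates gently; stronger drives or near-resonant forcing would require the stiff-solver care discussed in the cavitation literature.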
Delorme, Yann; Hassan, Syed Harris; Socha, Jake; Vlachos, Pavlos; Frankel, Steven
2014-11-01
Chrysopelea paradisi are snakes that are able to glide over long distances by morphing the cross section of their bodies from circular to a triangular airfoil and undulating through the air. Snake glide is characterized by a relatively low Reynolds number and a high angle of attack, as well as three-dimensional and unsteady flow. Here we study the 3D dynamics of the flow using an in-house high-order large eddy simulation code. The code features a novel multi-block immersed boundary method to accurately and efficiently represent the complex snake geometry. We investigate the steady-state three-dimensionality of the flow, especially the wake flow induced by the presence of the snake's body, as well as the vortex-body interaction thought to be responsible for part of the lift enhancement. Numerical predictions of global lift and drag will be compared to experimental measurements, as well as the lift distribution along the body of the snake due to cross-sectional variations. Comparisons with previously published 2D results are made to highlight the importance of three-dimensional effects. Additional efforts are made to quantify properties of the vortex shedding, and Dynamic Mode Decomposition (DMD) is used to analyse the main modes responsible for the lift and drag forces.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
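The core observation behind NPT — that explicit RKHS coordinates can be read off the kernel matrix, after which any linear algorithm matches its kernel counterpart — can be sketched in batch form. The RBF kernel and random data are illustrative; the incremental update of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))        # 30 samples in R^4

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Km = rbf_kernel(X, X)                   # symmetric PSD Gram matrix
w, V = np.linalg.eigh(Km)
w = np.clip(w, 0.0, None)               # guard against tiny negative round-off
Phi = V * np.sqrt(w)                    # row i = explicit coordinates of sample i

# Inner products of the explicit coordinates reproduce the kernel, so e.g.
# PCA run on Phi coincides with (uncentered) kernel PCA on Km.
print(np.allclose(Phi @ Phi.T, Km))
```

In the incremental setting the difficulty the abstract points to is that `Km`, and hence `Phi`, grows with every new sample; the paper's INPT keeps old coordinates fixed while appending new ones.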
Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-01-10
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
A more accurate half-discrete Hardy-Hilbert-type inequality with the logarithmic function.
Wang, Aizhen; Yang, Bicheng
2017-01-01
By means of the weight functions, the technique of real analysis and Hermite-Hadamard's inequality, a more accurate half-discrete Hardy-Hilbert-type inequality related to the kernel of logarithmic function and a best possible constant factor is given. Moreover, the equivalent forms, the operator expressions, the reverses and some particular cases are also considered.
A more accurate half-discrete Hardy-Hilbert-type inequality with the logarithmic function
Aizhen Wang
2017-06-01
Full Text Available By means of weight functions, techniques of real analysis and the Hermite-Hadamard inequality, a more accurate half-discrete Hardy-Hilbert-type inequality related to a kernel with the logarithmic function, together with a best possible constant factor, is given. Moreover, the equivalent forms, the operator expressions, the reverses and some particular cases are also considered.
Kernel based subspace projection of near infrared hyperspectral images of maize kernels
Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben
2009-01-01
In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability in the data. Therefore we propose to use kernel versions of these methods. The kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.
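A minimal kernel PCA projection of the kind compared above can be sketched as follows (a generic illustration with a Gaussian kernel and synthetic data, not the authors' implementation):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.5):
    """Project data onto the leading kernel principal components."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Centre the kernel matrix in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so each projected axis has unit norm
    alphas = vecs / np.sqrt(np.clip(vals, 1e-12, None))
    return Kc @ alphas                          # scores: (n, n_components)

X = np.random.RandomState(0).randn(50, 4)       # invented stand-in for pixel spectra
scores = kernel_pca(X, n_components=2)
print(scores.shape)                             # (50, 2)
```

The kernel MAF transform used in the paper follows the same pattern but maximizes spatial autocorrelation rather than variance in the induced feature space.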
Gao, Xiang; Zhang, Xiaohong; Song, Jinlin; Xu, Xiao; Xu, Anxiu; Wang, Mengke; Xie, Bingwu; Huang, Enyi; Deng, Feng; Wei, Shicheng
2015-01-01
The construction of functional biomimetic scaffolds that recapitulate the topographical and biochemical features of bone tissue extracellular matrix is now of topical interest in bone tissue engineering. In this study, a novel surface-functionalized electrospun polycaprolactone (PCL) nanofiber scaffold with highly ordered structure was developed to simulate the critical features of native bone tissue via a single step of catechol chemistry. Specifically, under slightly alkaline aqueous solution, polydopamine (pDA) was coated on the surface of aligned PCL nanofibers after electrospinning, followed by covalent immobilization of bone morphogenetic protein-7-derived peptides onto the pDA-coated nanofiber surface. Contact angle measurement, Raman spectroscopy, and X-ray photoelectron spectroscopy confirmed the presence of pDA and peptides on the PCL nanofiber surface. Our results demonstrated that surface modification with osteoinductive peptides could improve cytocompatibility of nanofibers in terms of cell adhesion, spreading, and proliferation. Most importantly, Alizarin Red S staining, quantitative real-time polymerase chain reaction, immunostaining, and Western blot revealed that human mesenchymal stem cells cultured on aligned nanofibers with osteoinductive peptides exhibited enhanced osteogenic differentiation potential compared with cells on randomly oriented nanofibers. Furthermore, the aligned nanofibers with osteoinductive peptides could direct osteogenic differentiation of human mesenchymal stem cells even in the absence of osteoinductive factors, suggesting superior osteogenic efficacy of a biomimetic design that combines the advantages of osteoinductive peptide signals and highly ordered nanofibers on cell fate decision. The presented peptide-decorated bone-mimic nanofiber scaffolds hold a promising potential in the context of bone tissue engineering.
Shah, Syed Awais Wahab
2017-11-24
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the designed algorithms using Givens rotations give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference and noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
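The Givens and hyperbolic rotations and the pre-whitening step are standard building blocks; the sketch below illustrates their defining properties on synthetic data (generic illustration, not the authors' algorithms):

```python
import numpy as np

def givens(theta):
    # Unitary plane rotation: preserves the identity metric
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def hyperbolic(phi):
    # J-unitary rotation: preserves the metric J = diag(1, -1)
    c, s = np.cosh(phi), np.sinh(phi)
    return np.array([[c, s], [s, c]])

def prewhiten(Y):
    """Zero-mean and whiten received samples Y (channels x samples)."""
    Y = Y - Y.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Y @ Y.T / Y.shape[1])
    return (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ Y

J = np.diag([1.0, -1.0])
G, H = givens(0.3), hyperbolic(0.2)
print(np.allclose(G @ G.T, np.eye(2)))              # Givens preserves I
print(np.allclose(H @ J @ H.T, J))                  # hyperbolic preserves J

rng = np.random.RandomState(1)
Z = prewhiten(rng.randn(2, 2) @ rng.randn(2, 1000))  # 2x2 instantaneous mixture
print(np.allclose(Z @ Z.T / Z.shape[1], np.eye(2)))  # unit sample covariance
```

Because whitening from few samples is imperfect in practice, hyperbolic rotations give the separation stage a non-unitary degree of freedom to correct it, which is the motivation stated above.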
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose the dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
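The idea of modeling a pixel quantity as a high-order polynomial of time and extrapolating to an intermediate instant can be illustrated with a toy least-squares fit (assumed quadratic model and invented sample values, not the paper's estimator):

```python
import numpy as np

# Times of four known frames and a noiseless quadratic intensity signal
# I(t) = 10 + 2 t + 0.5 t^2 (invented for illustration).
t = np.array([0.0, 1.0, 2.0, 3.0])
intensity = 10.0 + 2.0 * t + 0.5 * t ** 2

# Estimate the polynomial coefficients (intensity value, rate, acceleration)
# by least squares, then evaluate the model at the up-converted frame time.
coeffs = np.polyfit(t, intensity, deg=2)   # recovers ~[0.5, 2.0, 10.0]
t_new = 3.5                                # instant of the interpolated frame
pred = np.polyval(coeffs, t_new)
print(round(pred, 3))                      # 23.125
```

HOMDF additionally fuses such per-pixel estimates with a prior propagated along the motion trajectory, which this sketch omits.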
Electrocatalytic oxidation of alcohols on single gold particles in highly ordered SiO2 cavities
Li, Na; Zhou, Qun; Tian, Shu; Zhao, Hong; Li, Xiaowei; Adkins, Jason; Gu, Zhuomin; Zhao, Lili; Zheng, Junwei
2013-01-01
In the present work, we report a new and simple approach for preparing a highly ordered Au (1 1 1) nanoparticle (NP) array in SiO2 cavities on indium-doped tin oxide (ITO) electrodes. We fabricated a SiO2 cavity array on the surface of an ITO electrode using a highly ordered self-assembly of polystyrene spheres as a template. Gold NPs were electrodeposited at the bottom of the SiO2 cavities, and single gold NPs dominated by (1 1 1) facets were generated in each cavity by annealing the electrode at a high temperature. Such (1 1 1) facets were the predominant trait of the single gold particles, which exhibited considerable electrocatalytic activity toward oxidation of methanol, ethanol, and glycerol. This has been attributed to the formation of incipient hydrous oxides at unusually low potential on the specific (1 1 1) facet of the gold particles. Moreover, each SiO2 cavity possibly behaves as an independent electrochemical cell in which the methanol molecules are trapped; this produces an environment advantageous to catalyzing electrooxidation. The oxidation of methanol on the electrodes is a mixed control mechanism (both by diffusion and electrode kinetics). This strategy both provides an approach to study electrochemical reactions on a single particle in a microenvironment and may supply a way to construct alcohol sensors.
Moosavifard, Seyyed E; El-Kady, Maher F; Rahmanifar, Mohammad S; Kaner, Richard B; Mousavi, Mir F
2015-03-04
The increasing demand for energy has triggered tremendous research efforts for the development of lightweight and durable energy storage devices. Herein, we report a simple, yet effective, strategy for high-performance supercapacitors by building three-dimensional pseudocapacitive CuO frameworks with highly ordered and interconnected bimodal nanopores, nanosized walls (∼4 nm) and a large specific surface area of 149 m² g⁻¹. This interesting electrode structure plays a key role in providing facilitated ion transport, short ion and electron diffusion pathways and more active sites for electrochemical reactions. This electrode demonstrates excellent electrochemical performance with a specific capacitance of 431 F g⁻¹ (1.51 F cm⁻²) at 3.5 mA cm⁻² and retains over 70% of this capacitance when operated at an ultrafast rate of 70 mA cm⁻². When this highly ordered CuO electrode is assembled in an asymmetric cell with an activated carbon electrode, the as-fabricated device demonstrates remarkable performance with an energy density of 19.7 W h kg⁻¹, power density of 7 kW kg⁻¹, and excellent cycle life. This work presents a new platform for high-performance asymmetric supercapacitors for the next generation of portable electronics and electric vehicles.
Electrochemical synthesis of highly ordered polypyrrole on copper modified aluminium substrates
Siddaramanna, Ashoka; Saleema, N.; Sarkar, D.K.
2014-01-01
Fabrication of highly ordered conducting polymers on metal surfaces has received significant interest owing to their potential applications in organic electronic devices. In this context, we have developed a simple method for the synthesis of highly ordered polypyrrole (PPy) on copper modified aluminium surfaces via an electrochemical polymerization process. A series of characteristic peaks of PPy observed in the infrared spectra of these surfaces confirms the formation of PPy. The X-ray diffraction (XRD) pattern of PPy deposited on copper modified aluminium surfaces also confirmed the deposition of PPy, as a sharp and intense peak at a 2θ angle of 23° attributable to PPy is observed, while this peak is absent for PPy deposited on as-received aluminium surfaces. An atomic model of the PPy/Cu interface has been presented, based on the copper–copper inter-atomic distance in the (1 0 0) plane and the inter-monomer distance of PPy, to describe the ordering of PPy on Cu modified Al surfaces.
Effective high-order solver with thermally perfect gas model for hypersonic heating prediction
Jiang, Zhenhua; Yan, Chao; Yu, Jian; Qu, Feng; Ma, Libin
2016-01-01
Highlights: • Design proper numerical flux for thermally perfect gas. • Line-implicit LUSGS enhances efficiency without extra memory consumption. • Develop unified framework for both second-order MUSCL and fifth-order WENO. • The designed gas model can be applied to a much wider temperature range. - Abstract: An effective high-order solver based on the model of thermally perfect gas has been developed for hypersonic heat transfer computation. The technique of polynomial curve fits coupled to the thermodynamics equation is suggested to establish the current model, and particular attention has been paid to the design of a proper numerical flux for thermally perfect gas. We present procedures that unify the fifth-order WENO (Weighted Essentially Non-Oscillatory) scheme in the existing second-order finite volume framework and a line-implicit method that improves the computational efficiency without increasing memory consumption. A variety of hypersonic viscous flows are performed to examine the capability of the resulting high-order thermally perfect gas solver. Numerical results demonstrate its superior performance compared to the low-order calorically perfect gas method and indicate its potential application to hypersonic heating predictions for real-life problems.
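The polynomial curve fit for a thermally perfect gas can be sketched as below; the coefficients are invented for demonstration (a real solver fits them to thermodynamic tables over the target temperature range), and only the temperature dependence of cp is the point:

```python
import numpy as np

R = 287.0  # J/(kg K), specific gas constant for air

# Illustrative curve fit cp(T)/R = a0 + a1*T + a2*T^2 + ... (made-up values)
a = np.array([3.5, 1.0e-4, 2.0e-8, 0.0, 0.0])

def cp(T):
    """Specific heat at constant pressure for a thermally perfect gas."""
    return R * (a @ T ** np.arange(a.size))

for T in (300.0, 1500.0, 3000.0):
    print(f"T = {T:6.0f} K   cp = {cp(T):8.2f} J/(kg K)")
# cp grows with temperature; a calorically perfect gas would keep cp constant
```

Enthalpy follows by analytic integration of the same polynomial, which is what couples the fit to the thermodynamics equation mentioned above.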
Reliability-based design optimization via high order response surface method
Li, Hong Shuang
2013-01-01
To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first-order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
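The supporting-point step can be illustrated for a single standard-normal variable: Gauss-Hermite quadrature nodes serve both as sampling points for fitting a polynomial response surface and as weights for moment estimates (a generic sketch with an invented toy response, not the paper's HORSM):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Gauss-Hermite nodes/weights for the weight function exp(-x^2);
# rescale them to a standard normal random variable U ~ N(0, 1).
x, w = hermgauss(5)
nodes = np.sqrt(2.0) * x
weights = w / np.sqrt(np.pi)        # weights now sum to 1

def g(u):
    # Invented "expensive" model response evaluated at supporting points
    return 1.0 + 2.0 * u + 0.3 * u ** 3

# Fit a cubic response surface through the supporting points, and estimate
# the mean response by quadrature (exact for polynomials up to degree 9).
coeffs = np.polyfit(nodes, g(nodes), deg=3)
mean = weights @ g(nodes)
print(round(float(mean), 6))        # E[1 + 2U + 0.3U^3] = 1 for U ~ N(0,1)
```

In HORSM this is done per random variable to detect the required order, with cross terms between significant variables added afterwards.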
Shah, Syed Awais Wahab; Abed-Meraim, Karim; Al-Naffouri, Tareq Y.
2017-01-01
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate or high-order QAM constellations. Four new iterative batch BSS algorithms are presented dealing with the multimodulus (MM) and alphabet matched (AM) criteria. For the optimization of these cost functions, iterative methods of Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is noticed that the designed algorithms using Givens rotations give satisfactory performance only for a large number of samples. However, for a small number of samples, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM. The other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference and noise ratio, symbol error rate and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
High-Intensity High-order Harmonics Generated from Low-Density Plasma
Ozaki, T.; Bom, L. B. Elouga; Abdul-Hadi, J.; Ganeev, R. A.; Haessler, S.; Salieres, P.
2009-01-01
We study the generation of high-order harmonics from weakly ionized plasma, using the 10 TW, 10 Hz laser of the Advanced Laser Light Source (ALLS). We perform detailed studies on the enhancement of a single order of the high-order harmonic spectrum generated in plasma using the fundamental and second harmonic of the ALLS beam line. We observe quasi-monochromatic harmonics for various targets, including Mn, Cr, Sn, and In. We identify most of the ionic/neutral transitions responsible for the enhancement, which all have strong oscillator strengths. We demonstrate intensity enhancements of the 13th, 17th, 29th, and 33rd harmonics from these targets using the 800 nm pump laser and varying its chirp. We also characterized the attosecond nature of such plasma harmonics, measuring attosecond pulse trains with 360 as duration for chromium plasma, using the technique of "Reconstruction of Attosecond Beating by Interference of Two-photon Transitions" (RABBIT). These results show that plasma harmonics are an intense source of ultrashort coherent soft x-rays.
High-order harmonic generation in a laser plasma: a review of recent achievements
Ganeev, R A
2007-01-01
A review of studies of high-order harmonic generation in plasma plumes is presented. The generation of high-order harmonics (up to the 101st order, λ = 7.9 nm) of Ti:sapphire laser radiation during the propagation of short laser pulses through a low-excited, low-ionized plasma produced on the surfaces of different targets is analysed. The observation of considerable resonance-induced enhancement of a single harmonic (λ = 61.2 nm) at the plateau region with 10⁻⁴ conversion efficiency in the case of an In plume offers the expectation that analogous processes can be realized in other plasma samples in the shorter wavelength range. Recent achievements of single-harmonic enhancement at mid- and end-plateau regions are discussed. Various methods for the optimization of harmonic generation are analysed, such as the application of the second harmonic of the driving radiation and the application of prepulses of different durations. The enhancement of harmonic generation efficiency during the propagation of femtosecond pulses through a nanoparticle-containing plasma is discussed. (topical review)
High-order moments of spin-orbit energy in a multielectron configuration
Na, Xieyu; Poirier, M.
2016-07-01
In order to analyze the energy-level distribution in complex ions such as those found in warm dense plasmas, this paper provides values for high-order moments of the spin-orbit energy in a multielectron configuration. Using second-quantization results and standard angular algebra or fully analytical expressions, explicit values are given for moments up to 10th order for the spin-orbit energy. Two analytical methods are proposed, using the uncoupled or coupled orbital and spin angular momenta. The case of multiple open subshells is considered with the help of cumulants. The proposed expressions for spin-orbit energy moments are compared to numerical computations from Cowan's code and agree with them. The convergence of the Gram-Charlier expansion involving these spin-orbit moments is analyzed. While a spectrum with infinitely thin components cannot be adequately represented by such an expansion, a suitable convolution procedure ensures the convergence of the Gram-Charlier series provided high-order terms are accounted for. A corrected analytical formula for the third-order moment involving both spin-orbit and electron-electron interactions turns out to be in fair agreement with Cowan's numerical computations.
A study and simulation of the impact of high-order aberrations to overlay error distribution
Sun, G.; Wang, F.; Zhou, C.
2011-03-01
With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on single machine overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence from lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of every lens distortion that impacts SMO was monitored over several days and matched with the results of measurements.
High-order-harmonic generation from H2+ molecular ions near plasmon-enhanced laser fields
Yavuz, I.; Tikman, Y.; Altun, Z.
2015-08-01
Simulations of plasmon-enhanced high-order-harmonic generation are performed for a H2+ molecular cation near metallic nanostructures. We employ the numerical solution of the time-dependent Schrödinger equation in reduced coordinates. We assume that the main axis of H2+ is aligned perfectly with the polarization direction of the plasmon-enhanced field. We perform systematic calculations on plasmon-enhanced harmonic generation based on an infinite-mass approximation, i.e., freezing nuclear vibrations. Our simulations show that molecular high-order-harmonic generation from plasmon-enhanced laser fields is possible. We observe the dispersion of a plateau of harmonics when the laser field is plasmon enhanced. We find that the maximum kinetic energy of the returning electron follows 4 U_p. We also find that when nuclear vibrations are enabled, the efficiency of the harmonics is greatly enhanced relative to that of static nuclei. However, the maximum kinetic energy 4 U_p is largely maintained.
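For reference, the ponderomotive energy U_p invoked in the 4 U_p scaling is the cycle-averaged quiver energy of a free electron in the laser field, and the classical cutoff law for a spatially homogeneous field reads (standard background, not a result of the abstract above):

```latex
U_p = \frac{e^{2} E_0^{2}}{4 m_e \omega^{2}}, \qquad
\hbar\omega_{\mathrm{cutoff}} \simeq I_p + 3.17\,U_p .
```

In spatially inhomogeneous plasmon-enhanced near-fields the maximum return energy can exceed the homogeneous 3.17 U_p value, consistent with the 4 U_p behavior reported above.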
Haorui Liu
2016-01-01
Full Text Available In car control systems, it is hard to measure some key vehicle states directly and accurately when running on the road, and the cost of measurement is high as well. To address these problems, a vehicle state estimation method based on kernel principal component analysis and an improved Elman neural network is proposed. Combined with a nonlinear vehicle model of three degrees of freedom (3 DOF; longitudinal, lateral, and yaw motion), this paper applies the method to soft sensing of the vehicle states. The simulation results of the double lane change tested by Matlab/SIMULINK cosimulation prove the KPCA-IENN algorithm (kernel principal component analysis and improved Elman neural network) to be quick and precise when tracking the vehicle states within the nonlinear area. This method can meet the software performance requirements of vehicle state estimation in precision, tracking speed, noise suppression, and other aspects.
Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan
2007-11-01
The Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical properties of natural images, so its detection performance is not satisfactory. PCA has been extended into kernel PCA in order to capture higher-order statistics. However, thus far kernel FKT (KFKT) has not been explicitly proposed, nor has its detection performance been studied. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Results of experiments show that KFKT outperforms FKT and that the proposed framework is able to automatically detect and track infrared point targets.
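The Kalman prediction stage of such a tracking framework can be sketched with a minimal constant-velocity filter for a target coordinate (a generic textbook sketch with invented measurements, not the authors' tracker):

```python
import numpy as np

# Constant-velocity model: state = [position, velocity], measure position only
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition over one frame
H = np.array([[1.0, 0.0]])               # measurement matrix
Q = 0.01 * np.eye(2)                     # process noise covariance
R_m = np.array([[0.25]])                 # measurement noise covariance

x = np.array([0.0, 1.0])                 # initial state estimate
P = np.eye(2)                            # initial state covariance
for z in [1.1, 1.9, 3.2, 3.9]:           # invented noisy position measurements
    x, P = F @ x, F @ P @ F.T + Q        # predict
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R_m                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # update state
    P = (np.eye(2) - K @ H) @ P          # update covariance
print(np.round(x, 2))                    # estimated [position, velocity]
```

In the paper's framework the predicted position restricts where the KFKT detector searches in the next frame.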
Factorization and the synthesis of optimal feedback kernels for differential-delay systems
Milman, Mark M.; Scheid, Robert E.
1987-01-01
A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.
Dispersal Kernel Determines Symmetry of Spread and Geographical Range for an Insect
Holland, J.D.
2009-01-01
The distance from a source patch that dispersing insects reach depends on the number of dispersers, or random draws from a probability density function called a dispersal kernel, and the shape of that kernel. This can cause asymmetrical dispersal between habitat patches that produce different numbers of dispersers. Spatial distributions based on these dynamics can explain several ecological patterns including megapopulations and geographic range boundaries. I hypothesized that a locally extirpated longhorned beetle, the sugar maple borer, has a new geographical range shaped primarily by probabilistic dispersal distances. I used occurrence data from Ontario, Canada to construct a model of the geographical range in Indiana, USA based on maximum dispersal distance scaled by habitat area. This model predicted the new range boundary within 500 m very accurately. This beetle may be an ideal organism for exploring spatial dynamics driven by dispersal.
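The core idea, that the farthest distance reached grows with the number of draws from the kernel, can be sketched as follows (toy exponential kernel and invented parameters, not the fitted kernel of the study):

```python
import numpy as np

rng = np.random.default_rng(42)
mean_distance = 2.0                      # km, assumed kernel scale parameter

def max_reach(n_dispersers):
    """Farthest distance attained by n independent draws from the kernel."""
    return rng.exponential(mean_distance, size=n_dispersers).max()

# A larger patch emits more dispersers, so its expected maximum reach is
# farther even though each individual draws from the same kernel.
small_patch = max_reach(10)
large_patch = max_reach(10_000)
print(small_patch, large_patch)
```

This is why dispersal between patches of unequal size is asymmetrical even under a symmetric kernel.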
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric and positive, always provides 1.0 for self-similarity, and can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
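The LZW code word idea can be sketched as below. This toy version initializes the dictionary from the observed characters and scores two sequences by their shared code words; the normalization is invented for illustration, so the published LZW-Kernel's exact formula may differ:

```python
def lzw_codewords(s):
    """Set of phrases emitted while LZW-compressing string s
    (dictionary seeded with the characters observed in s)."""
    dictionary = {c for c in s}
    w, words = "", set()
    for c in s:
        if w + c in dictionary:
            w += c                      # extend the current phrase
        else:
            dictionary.add(w + c)       # learn a new code word
            words.add(w)                # emit the phrase seen so far
            w = c
    if w:
        words.add(w)
    return words

def lzw_kernel(a, b):
    """Toy normalised similarity from shared code words (illustrative)."""
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / (len(A) * len(B)) ** 0.5

print(lzw_kernel("MKVLAAGG", "MKVLAAGG"))        # identical sequences -> 1.0
print(lzw_kernel("MKVLAAGG", "GGTTCCAA"))        # dissimilar -> below 1.0
```

Because compression is a single pass over each sequence, the cost is linear in sequence length, which is the source of the speed advantage reported above.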
Kernel based eigenvalue-decomposition methods for analysing ham
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
We investigated the applicability of kernel based versions of transformation methods such as PCA, MAF and MNF for analysing ham. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most useful factor of PCA and kernel based PCA, respectively, is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation, a simple thresholding ...
Classification of maize kernels using NIR hyperspectral imaging
Williams, Paul; Kucheryavskiy, Sergey V.
2016-01-01
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels ... (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale.
Ideal gas scattering kernel for energy dependent cross-sections
Rothenstein, W.; Dagan, R.
1998-01-01
A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.
Pump-probe study of atoms and small molecules with laser driven high order harmonics
Cao, Wei
A commercially available modern laser can emit over 10^15 photons within a time window of a few tens of femtoseconds (10^-15 s), which can be focused into a spot size of about 10 μm, resulting in a peak intensity above 10^14 W/cm^2. This paves the way for table-top strong field physics studies such as above threshold ionization (ATI), non-sequential double ionization (NSDI), high order harmonic generation (HHG), etc. Among these strong laser-matter interactions, high order harmonic generation, which combines many photons of the fundamental laser field into a single photon, offers a unique way to generate light sources in the vacuum ultraviolet (VUV) or extreme ultraviolet (EUV) region. High order harmonic photons are emitted within a short time window from a few tens of femtoseconds down to a few hundreds of attoseconds (10^-18 s). This highly coherent nature of HHG allows it to be synchronized with an infrared (IR) laser pulse, and the pump-probe technique can be adopted to study ultrafast dynamic processes in a quantum system. The major work of this thesis is to develop a table-top VUV (EUV) light source based on HHG, and use it to study dynamic processes in atoms and small molecules with the VUV(EUV)-pump IR-probe method. A Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS) apparatus is used for momentum imaging of the interaction products. Two types of high harmonic pump pulses are generated and applied for pump-probe studies. The first one consists of several harmonics forming a short attosecond pulse train (APT) in the EUV regime (around 40 eV). We demonstrate that (1) the auto-ionization process triggered by the EUV in the carbon-monoxide and oxygen molecular cations can be modified by scanning the EUV-IR delay, and (2) the phase information of quantum trajectories in bifurcated high harmonics can be extracted by performing an EUV-IR cross-correlation experiment, thus disclosing the macroscopic quantum control in HHG. The second type of high harmonic source