WorldWideScience

Sample records for dose kernel integration

  1. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
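
    The spectrum-averaging step described here amounts to K_beta(r) = ∫ N(E) K_E(r) dE with a normalized beta spectrum N(E). The Python sketch below evaluates that integral on a discrete energy grid; the kernel shape, range scaling and spectrum form are toy stand-ins for illustration, not the paper's fitted models.

```python
import numpy as np

def beta_spectrum_kernel(r, energies, spectrum, mono_kernel):
    """Spectrum-weighted dose point kernel K_beta(r) = int N(E) K(r, E) dE.

    energies    : grid of initial electron energies E (MeV)
    spectrum    : beta spectrum weights N(E), need not be normalized
    mono_kernel : callable K(r, E), the monoenergetic dose point kernel
                  (e.g. an analytic fit to Monte Carlo data)
    """
    weights = spectrum / np.trapz(spectrum, energies)         # normalize N(E)
    values = np.array([mono_kernel(r, E) for E in energies])  # K(r, E) rows
    return np.trapz(weights[:, None] * values, energies, axis=0)

def toy_kernel(r, E):
    """Illustrative kernel only: exponential fall-off over a range that
    grows linearly with energy. Not a fitted physical model."""
    r0 = 0.5 * E                                   # hypothetical range (cm)
    return np.exp(-r / r0) / (4.0 * np.pi * r**2 * r0)

r = np.linspace(0.01, 1.0, 100)        # radial distances (cm)
E = np.linspace(0.01, 1.71, 50)        # up to the 32P endpoint energy (MeV)
N = E * (1.71 - E)**2                  # crude stand-in for an allowed shape
K_beta = beta_spectrum_kernel(r, E, N, toy_kernel)
```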

  2. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
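
    A direct way to see the contrast between the two kernels is to solve the equation numerically. The sketch below discretizes $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with the trapezoidal rule and steps forward in $t$; the choice $a(t)=e^{-t}$ is just an arbitrary example of a function in $L^2[0,\infty)$, not one taken from the paper.

```python
import numpy as np

def solve_volterra(a, C, t):
    """Solve x(t) = a(t) - int_0^t C(t, s) x(s) ds on a uniform grid by
    the trapezoidal rule, stepping forward in t."""
    h = t[1] - t[0]
    x = np.empty(t.size)
    x[0] = a(t[0])
    for i in range(1, t.size):
        s = t[:i + 1]
        w = np.full(i + 1, h)
        w[0] = w[-1] = h / 2.0                  # trapezoid weights
        k = C(t[i], s)
        known = np.dot(w[:-1] * k[:-1], x[:i])  # all terms except x[i]
        x[i] = (a(t[i]) - known) / (1.0 + w[-1] * k[-1])
    return x

# The two contrasting kernels from the paper:
C_star = lambda t, s: np.log(np.e + (t - s))    # weight grows with t - s
D_star = lambda t, s: 1.0 / (1.0 + (t - s))     # weight decays with t - s

t = np.linspace(0.0, 20.0, 2001)
a = lambda u: np.exp(-u)                        # an example a in L^2[0, inf)
x_C = solve_volterra(a, C_star, t)
x_D = solve_volterra(a, D_star, t)
```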

  3. GRIM: Leveraging GPUs for kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  4. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
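
    One such check is that the energy summed over the whole kernel must equal the energy released per primary interaction. A minimal sketch of that bookkeeping, assuming a kernel tabulated as dose per interaction on a uniform grid; the array contents and E_released below are placeholders, not data from the paper.

```python
import numpy as np

def kernel_energy(kernel, voxel_cm, rho=1.0):
    """Total energy deposited by a dose-spread kernel.

    kernel   : 3D array, absorbed dose per primary interaction (MeV/g)
    voxel_cm : voxel edge lengths (dx, dy, dz) in cm
    rho      : density of the medium (g/cm^3)
    """
    voxel_mass = rho * np.prod(voxel_cm)            # g per voxel
    return kernel.sum() * voxel_mass                # MeV per interaction

# The check: deposited energy should match the energy released per primary
# interaction. Both the kernel array and E_released are placeholders here.
E_released = 2.0                                    # MeV, assumed
K = np.random.rand(64, 64, 64) * 1e-6               # stand-in MC kernel
ratio = kernel_energy(K, (0.2, 0.2, 0.2)) / E_released
print(f"energy conservation ratio: {ratio:.4f} (should be close to 1)")
```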

  5. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry: A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  6. Commutators of Integral Operators with Variable Kernels on Hardy ...

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 115, Issue 4, November 2005, pp. 399-410. Pu Zhang, Kai Zhao. Keywords: singular and fractional integrals; variable kernel; commutator; Hardy space.

  7. Considerations on absorbed dose estimates based on different β-dose point kernels in internal dosimetry

    International Nuclear Information System (INIS)

    Uchida, Isao; Yamada, Yasuhiko; Yamashita, Takashi; Okigaki, Shigeyasu; Oyamada, Hiyoshimaru; Ito, Akira.

    1995-01-01

    In radiotherapy with radiopharmaceuticals, more accurate estimates of the three-dimensional (3-D) distribution of absorbed dose are important in specifying the activity to be administered to patients to deliver a prescribed absorbed dose to target volumes without exceeding the toxicity limit of normal tissues in the body. A calculation algorithm for this purpose has already been developed by the authors. An accurate 3-D distribution of absorbed dose based on the algorithm is given by convolution of the 3-D dose matrix for a unit cubic voxel containing unit cumulated activity, which is obtained by transforming a dose point kernel into a 3-D cubic dose matrix, with the 3-D cumulated activity distribution given at the same voxel size. However, the beta-dose point kernels affecting accurate estimates of the 3-D absorbed dose distribution have differed among investigators. The purpose of this study is to elucidate how different beta-dose point kernels in water influence the estimates of the absorbed dose distribution obtained with the authors' dose point kernel convolution method. Computer simulations were performed using the MIRD thyroid and lung phantoms under the assumption of a uniform activity distribution of 32P. Using beta-dose point kernels derived from Monte Carlo simulations (EGS-4 or ACCEPT computer code), the differences among the point kernels produced only small differences in the mean and maximum absorbed dose estimates for the MIRD phantoms used. In the estimates of mean and maximum absorbed doses calculated using different cubic voxel sizes (4×4×4 mm and 8×8×8 mm) for the MIRD thyroid phantom, the maximum absorbed doses for the 4×4×4 mm voxels were estimated to be approximately 7% greater than those for the 8×8×8 mm voxels. This held for every beta-dose point kernel used in this study. On the other hand, the percentage difference of the mean absorbed doses between the two voxel sizes for each beta-dose point kernel was less than approximately 0.6%. (author)
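
    The convolution step described above can be written in a few lines with an FFT-based 3D convolution. In the sketch below the activity map and kernel matrix are synthetic placeholders; the real DPK matrix would be tabulated for 32P at the chosen voxel size.

```python
import numpy as np
from scipy.signal import fftconvolve

def absorbed_dose(activity, dpk_matrix):
    """3D absorbed dose as the convolution of a cumulated-activity map
    with a voxelized dose point kernel of matching voxel size.

    activity   : 3D cumulated activity per voxel (Bq s)
    dpk_matrix : 3D dose matrix for a unit cubic voxel containing unit
                 cumulated activity (Gy per Bq s)
    """
    return fftconvolve(activity, dpk_matrix, mode="same")   # Gy

# Synthetic example: a uniform sphere of activity and a toy kernel that
# falls off with distance from the source voxel (not a real 32P kernel).
shape = (64, 64, 64)
ii, jj, kk = np.indices(shape)
activity = np.where((ii - 32)**2 + (jj - 32)**2 + (kk - 32)**2 < 100,
                    1.0e6, 0.0)                             # Bq s
di, dj, dk = np.indices((21, 21, 21)) - 10
dist = np.sqrt(di**2 + dj**2 + dk**2) + 0.5                 # voxel units
dpk = 1.0e-10 / dist**2                                     # Gy per Bq s
dose = absorbed_dose(activity, dpk)
```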

  8. Fractional quantum integral operator with general kernels and applications

    Science.gov (United States)

    Babakhani, Azizollah; Neamaty, Abdolali; Yadollahzadeh, Milad; Agahi, Hamzeh

    In this paper, we first introduce the concept of the fractional quantum integral with general kernels, which generalizes several types of fractional integrals known from the literature. Then we give more general versions of some integral inequalities for this operator, thus generalizing some previous results obtained by many researchers [2, 8, 25, 29, 30, 36].

  9. Investigation of tilted dose kernels for portal dose prediction in a-Si electronic portal imagers

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2006-01-01

    The effect of beam divergence on dose calculation via Monte Carlo generated dose kernels was investigated in an amorphous silicon electronic portal imaging device (EPID). The flat-panel detector was simulated in EGSnrc with an additional 3.0 cm water buildup. The model included details of the detector's imaging cassette and the front cover upstream of it. To approximate the effect of the EPID's rear housing, a 2.1 cm air gap and 1.0 cm water slab were introduced into the simulation as equivalent backscatter material. Dose kernels were generated with an incident pencil beam of monoenergetic photons of energy 0.1, 2, 6, and 18 MeV. The orientation of the incident pencil beam was varied from 0 deg. to 14 deg. in 2 deg. increments. Dose was scored in the phosphor layer of the detector in both cylindrical (at 0 deg.) and Cartesian (at 0 deg. to 14 deg.) geometries. To reduce statistical fluctuations in the Cartesian geometry simulations at large radial distances from the incident pencil beam, the voxels were first averaged bilaterally about the pencil beam and then combined into concentric square rings of voxels. Profiles of the EPID dose kernels displayed increasing asymmetry with increasing angle and energy. A comparison of the superposition (tilted kernels) and convolution (parallel kernels) dose calculation methods via the χ-comparison test (a derivative of the γ-evaluation) in worst-case-scenario geometries demonstrated agreement between the two methods within 0.0784 cm (one pixel width) distance-to-agreement and up to a 1.8% dose difference. More clinically typical field sizes and source-to-detector distances were also tested, yielding at most a 1.0% dose difference and the same distance-to-agreement. Therefore, the assumption of parallel dose kernels has less than a 1.8% dosimetric effect in extreme cases and less than a 1.0% dosimetric effect in most clinically relevant situations, and should be suitable for most clinical dosimetric applications.

  10. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has made it possible to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating the need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows the integration of multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas were analysed using kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
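
    The proposed methods are implemented in the R package mixKernel; the Python sketch below only illustrates the underlying idea of a consensus meta-kernel followed by kernel PCA. Its function names, kernel choices and weights are illustrative, not the package's API.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian kernel matrix for one dataset (samples in rows)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def center(K):
    """Double-center a kernel matrix, as required for kernel PCA."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def consensus_meta_kernel(kernels, weights=None):
    """Convex combination of per-dataset kernels into one meta-kernel."""
    if weights is None:
        weights = [1.0 / len(kernels)] * len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))

# Two hypothetical omics tables measured on the same 50 samples:
rng = np.random.default_rng(0)
X1 = rng.normal(size=(50, 200))                   # e.g. metagenomics
X2 = rng.normal(size=(50, 30))                    # e.g. environmental data
K = consensus_meta_kernel([rbf_kernel(X1, 0.01), rbf_kernel(X2, 0.1)])

# Kernel PCA on the meta-kernel: top eigenvectors of the centered matrix.
vals, vecs = np.linalg.eigh(center(K))
scores = vecs[:, ::-1][:, :2] * np.sqrt(np.abs(vals[::-1][:2]))
```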

  11. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
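
    The biexponential parametrization mentioned above can be reproduced with a standard least-squares fit. A sketch with invented sample values and starting parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(r, a1, b1, a2, b2):
    """Biexponential parametrization of a scatter dose point kernel."""
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

# Hypothetical tabulated kernel values (radius in cm, arbitrary units):
r = np.linspace(0.5, 20.0, 40)
k_true = 0.8 * np.exp(-0.35 * r) + 0.2 * np.exp(-0.08 * r)
noise = 1.0 + 0.02 * np.random.default_rng(1).normal(size=r.size)
k_tab = k_true * noise

# Fit; the parametrized form can then feed a collapsed cone implementation.
params, _ = curve_fit(biexponential, r, k_tab, p0=(1.0, 0.3, 0.1, 0.1))
print(dict(zip("a1 b1 a2 b2".split(), params)))
```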

  12. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitting radionuclides considered in this simulation were Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison was performed for the dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: To validate the simulation, the 90Y DPK in water was compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; among the other soft tissues the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE dose point kernel results were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  13. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    Directory of Open Access Journals (Sweden)

    Lakshminarayanan Thilagam

    2010-01-01

    Brachytherapy treatment planning systems (TPS) are necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point taking care of only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There are thus some degrees of uncertainty in dose rate estimations under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point-kernel code package developed to perform independent dose rate calculations taking into account the effects of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point-kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are observed to be within ±5%.

  14. Calculation of dose point kernels for five radionuclides used in radio-immunotherapy

    International Nuclear Information System (INIS)

    Okigaki, S.; Ito, A.; Uchida, I.; Tomaru, T.

    1994-01-01

    With the recent interest in radioimmunotherapy, attention has been given to the calculation of dose distributions from beta rays and monoenergetic electrons in tissue. The dose distribution around a point source of a beta-ray-emitting radioisotope is referred to as a beta dose point kernel. Beta dose point kernels for five radionuclides appropriate for radioimmunotherapy, namely 131I, 186Re, 32P, 188Re, and 90Y, are calculated by the Monte Carlo method using the EGS4 code system. The present results were compared with published experimental data and other calculations. The accuracy and precision of the beta dose point kernels are discussed. (author)

  15. Integration of the supersonic kernel function

    CSIR Research Space (South Africa)

    Van Zyl, LH

    1994-11-01

    The article discusses ways in which the integrals resulting from a zero-order discontinuous pressure distribution can be arranged in such a way that they can be solved by either normal quadrature or curve fitting followed by analytical integration...

  16. Dose calculation methods in photon beam therapy using energy deposition kernels

    International Nuclear Information System (INIS)

    Ahnesjoe, A.

    1991-01-01

    The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions by a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods, one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel based dose calculations. (au)

  17. Characterization of dilation analytic integral kernels

    Energy Technology Data Exchange (ETDEWEB)

    Vici, A D [Rome Univ. (Italy). Ist. di Matematica]

    1979-11-01

    The author characterises integral operators belonging to B(L²(R³)) which are dilation analytic in the Cartesian product of two sectors S_a ⊂ C as analytic functions from S_a × S_a into B(L²(Ω)), the space of bounded operators on square-integrable functions on the unit sphere Ω, which satisfy certain norm estimates uniformly on every subsector.

  18. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening.

  19. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  1. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  2. Application of Electron Dose Kernels to account for heterogeneities in voxelized phantoms

    International Nuclear Information System (INIS)

    Al-Basheer, A. K.; Sjoden, G. E.; Ghita, M.; Bolch, W.

    2009-01-01

    In this paper, we present work on the application of the Electron Dose Kernel discrete ordinates method (EDK-SN) to compute doses and account for material heterogeneities using high energy external photon beam irradiations in voxelized human phantoms. EDKs are pre-computed using photon pencil 'beamlets' that lead to dose delivery in tissue using highly converged Monte Carlo. Coupling the EDKs to accumulate dose scaled by integral photon fluences computed using SN methods in dose driving voxels (DDVs) allows for the full charged particle physics computed dose to be accumulated throughout the voxelized phantom, and is the basis of the EDK-SN method, which is fully parallelized. For material heterogeneities, a density scaling correction factor is required to yield good agreement. In a fully voxelized phantom, all doses were in agreement with those determined by independent Monte Carlo computations. We are continuing to expand upon the development of this robust approach for rapid and accurate determination of whole body and out-of-field organ doses due to high energy x-ray beams. (authors)

  3. A point-kernel shielding code for calculations of neutron and secondary gamma-ray 1cm dose equivalents: PKN

    International Nuclear Information System (INIS)

    Kotegawa, Hiroshi; Tanaka, Shun-ichi

    1991-09-01

    A point-kernel integral technique code, PKN, and the related data library have been developed to calculate neutron and secondary gamma-ray dose equivalents in water, concrete and iron shields for neutron sources in 3-dimensional geometry. Comparison of the calculational results of the present code with those of the 1-dimensional transport code ANISN-JR and the 2-dimensional transport code DOT4.2 showed sufficient accuracy, and the usefulness of the PKN code has been confirmed. (author)
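
    The core of a point-kernel integral technique of this kind is a sum of attenuated inverse-square contributions over discretized source points, with a buildup factor correcting for scattered radiation. A simplified single-medium sketch follows; the attenuation coefficient, source strengths and linear buildup form are assumptions for illustration, and PKN itself additionally handles secondary gamma rays and dose-equivalent conversion.

```python
import numpy as np

def point_kernel_flux(sources, strengths, mu, detector, buildup):
    """Point-kernel integration: attenuated inverse-square contributions
    summed over discretized source points (single shield material).
    Multiply by a flux-to-dose-equivalent factor to obtain dose rates.

    sources   : (N, 3) source point coordinates (cm)
    strengths : (N,) emission rates per point (photons/s)
    mu        : linear attenuation coefficient along each ray (1/cm)
    detector  : (3,) detector position (cm)
    buildup   : callable B(mu*r), the buildup factor for scattered photons
    """
    r = np.linalg.norm(sources - detector, axis=1)
    return np.sum(strengths * buildup(mu * r) * np.exp(-mu * r)
                  / (4.0 * np.pi * r**2))

# Hypothetical example: a 1 m line source viewed through a water-like medium;
# mu and the linear buildup form are assumed for illustration only.
z = np.linspace(0.0, 100.0, 1000)
pts = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
phi = point_kernel_flux(pts, np.full(z.size, 1.0e6), 0.07,
                        np.array([50.0, 40.0, 50.0]),
                        lambda mur: 1.0 + mur)
```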

  4. Recursive integral equations with positive kernel for lattice calculations

    International Nuclear Information System (INIS)

    Illuminati, F.; Isopi, M.

    1990-11-01

    A Kirkwood-Salsburg integral equation with positive-definite kernel for the states of lattice models of statistical mechanics and quantum field theory is derived. The equation is defined in the thermodynamic limit, and its iterative solution is convergent. Moreover, positivity leads to an exact a priori bound on the iteration. The equation's relevance as a reliable algorithm for lattice calculations is therefore suggested, and it is illustrated with a simple application. It should provide a viable alternative to Monte Carlo methods for models of statistical mechanics and lattice gauge theories. 10 refs

  5. Observing integrals of heat kernels from a distance

    DEFF Research Database (Denmark)

    Heat kernels have integrals such as Brownian motion mean exit time, potential capacity, and torsional rigidity. We show how to obtain bounds on these values, essentially by observing their behaviour in terms of the distance function from a point and then comparing with corresponding values in tailor-made warped product spaces. The results will be illustrated by applications to the so-called 'type' problem: how to decide if a given manifold or surface is transient (hyperbolic) or recurrent (parabolic). Specific examples of minimal surfaces and constant pressure dry foams will be shown and discussed as test cases. The talk is based on joint work with Vicente Palmer.

  6. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom, with both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible difference (0.1%) for the dose calculation in the fine resolution (0.5 mm voxels). But differences become significant when the voxel size increases. As for the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. As for the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm
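
    The three kernel tabulations differ only in how many times the differential kernel is cumulatively integrated: CK(r) is the integral of DK from 0 to r, and CCK(r) is the integral of CK. The sketch below builds both tables for a toy kernel and shows how a cumulative table yields the exact average kernel value across a coarse voxel, which is what suppresses the voxel-size effect; the kernel values and grid are placeholders.

```python
import numpy as np

# Differential kernel DK tabulated on a fine radial grid (toy values).
r = np.linspace(0.0, 10.0, 10001)          # cm
dk = np.exp(-0.5 * r)                      # stand-in kernel values

# CK(r) = integral of DK from 0 to r; CCK(r) = integral of CK from 0 to r.
h = r[1] - r[0]
ck = np.concatenate([[0.0], np.cumsum(h * (dk[1:] + dk[:-1]) / 2.0)])
cck = np.concatenate([[0.0], np.cumsum(h * (ck[1:] + ck[:-1]) / 2.0)])

def mean_dk(r0, r1):
    """Exact average of DK over a coarse voxel [r0, r1], read off the
    cumulative table: (CK(r1) - CK(r0)) / (r1 - r0). Averaging CK via
    CCK in the same way is what the CCK algorithm exploits."""
    i0, i1 = np.searchsorted(r, [r0, r1])
    return (ck[i1] - ck[i0]) / (r1 - r0)

print(mean_dk(1.0, 1.8))   # e.g. one 8 mm voxel starting at depth 1 cm
```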

  7. Analysis and regularization of the thin-wire integral equation with reduced kernel

    NARCIS (Netherlands)

    Beurden, van M.C.; Tijhuis, A.G.

    2007-01-01

    For the straight wire, modeled as a hollow tube, we establish a conditional equivalence relation between the integral equations with exact and reduced kernel. This relation allows us to examine the existence and uniqueness conditions for the integral equation with reduced kernel, based on a local

  8. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    Science.gov (United States)

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose of this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations was important for the treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam, consisting of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference

  9. Validation of a dose-point kernel convolution technique for internal dosimetry

    International Nuclear Information System (INIS)

    Giap, H.B.; Macey, D.J.; Bayouth, J.E.; Boyer, A.L.

    1995-01-01

    The objective of this study was to validate a dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide 131I. A dose-point kernel for the penetrating radiations was calculated by a Monte Carlo simulation and cast in a 3D rectangular matrix. This matrix was convolved with the 3D activity map furnished by quantitative single-photon-emission computed tomography (SPECT) to provide a 3D distribution of absorbed dose. The convolution calculation was performed using a 3D fast Fourier transform (FFT) technique, which takes less than 40 s for a 128 x 128 x 16 matrix on an Intel 486 DX2 (66 MHz) personal computer. The calculated photon absorbed dose was compared with values measured by thermoluminescent dosimeters (TLDs) inserted along the diameter of a 22 cm diameter annular source of 131I. The mean and standard deviation of the percentage difference between the measurements and the calculations were equal to -1% and 3.6% respectively. This convolution method was also used to calculate the 3D dose distribution in an Alderson abdominal phantom containing a liver, a spleen, and a spherical tumour volume loaded with various concentrations of 131I. By averaging the dose calculated throughout the liver, spleen, and tumour, the dose-point kernel approach was compared with values derived using the MIRD formalism, and found to agree to better than 15%. (author)

  10. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    Science.gov (United States)

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
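
    The integer part of a seed shift is a circular shift, and the fractional part can in principle also be applied in the Fourier domain via the shift theorem; the paper instead uses a piecewise third-order Lagrange filter for the fractional part. The sketch below shows the simpler pure phase-shift variant in 1D, with a made-up smooth kernel.

```python
import numpy as np

def fractional_shift_1d(f, shift):
    """Shift a sampled kernel by a non-integer number of grid cells via
    the Fourier shift theorem: F{f(x - s)}(k) = F{f}(k) exp(-2*pi*i*k*s)."""
    k = np.fft.fftfreq(f.size)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-2j * np.pi * k * shift)))

# A seed sitting 0.3 grid cells away from the nearest grid point: shift its
# kernel instead of snapping the seed to the grid.
x = np.arange(128, dtype=float)
kernel = np.exp(-0.5 * ((x - 64.0) / 3.0) ** 2)   # smooth stand-in kernel
shifted = fractional_shift_1d(kernel, 0.3)
```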

  11. Exact analytical solution of the convolution integral equation for a general profile fitting function and Gaussian detector kernel

    International Nuclear Information System (INIS)

    Garcia-Vicente, F.; Rodriguez, C.

    2000-01-01

    One of the most important aspects in the metrology of radiation fields is the problem of the measurement of dose profiles in regions where the dose gradient is large. In such zones, the 'detector size effect' may produce experimental measurements that do not correspond to reality. Mathematically it can be proved, under some general assumptions of spatial linearity, that the disturbance induced in the measurement by the effect of the finite size of the detector is equal to the convolution of the real profile with a representative kernel of the detector. In this work the exact relation between the measured profile and the real profile is shown, through the analytical resolution of the integral equation for a general type of profile fitting function using Gaussian convolution kernels. (author)
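
    The underlying model is that the measured profile equals the real profile convolved with the detector kernel. The sketch below demonstrates that forward model for a Gaussian kernel and a sigmoid field edge, both invented for illustration; the paper's contribution is the analytic inversion of this relation for general profile fitting functions.

```python
import numpy as np

def measured_profile(true_profile, x, sigma):
    """Forward model of the detector size effect: the measured profile is
    the true profile convolved with a unit-area Gaussian detector kernel."""
    dx = x[1] - x[0]
    u = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
    g = np.exp(-u**2 / (2.0 * sigma**2))
    g /= g.sum()                       # discrete unit area
    return np.convolve(true_profile, g, mode="same")

# A sharp field edge blurred by a detector with sigma = 2 mm: the measured
# penumbra comes out visibly wider than the true one.
x = np.linspace(-30.0, 30.0, 601)      # mm
true = 1.0 / (1.0 + np.exp(-x))        # sigmoid stand-in for the real edge
meas = measured_profile(true, x, sigma=2.0)
```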

  12. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Valente, Mauro; Botta, Francesca; Pedroli, Guido

    2012-01-01

    Beta-emitters have proved to be appropriate for radioimmunotherapy, and the dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and scattered-radiation contributions. This work presents a method capable of computing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to separately compute the primary and scattering contributions to the total absorbed dose, performing particle transport down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, with satisfactory results, and it was further applied to the characterization of different beta-minus radionuclides of nuclear medicine interest for radioimmunotherapy. (author)

  13. CLASS-PAIR-GUIDED MULTIPLE KERNEL LEARNING OF INTEGRATING HETEROGENEOUS FEATURES FOR CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-10-01

    In recent years, many studies on remote sensing image classification have shown that using multiple features from different data sources can effectively improve the classification accuracy. As a very powerful means of learning, multiple kernel learning (MKL) can conveniently incorporate a variety of features. The conventional combined kernel learned by MKL can be regarded as a compromise of all basic kernels for all classes in classification: it is the best for the whole, but not optimal for each specific class. To address this problem, this paper proposes a class-pair-guided MKL method to integrate the heterogeneous features (HFs) from multispectral image (MSI) and light detection and ranging (LiDAR) data. In particular, the one-against-one strategy is adopted, which converts the multiclass classification problem into a plurality of two-class classification problems. Then, we select the best kernel from a pre-constructed set of basic kernels for each class pair by kernel alignment (KA) in the process of classification. The advantage of the proposed method is that only the best kernel for the classification of any two classes is retained, which leads to greatly enhanced discriminability. Experiments are conducted on two real data sets, and the experimental results show that the proposed method achieves the best performance in terms of classification accuracy in integrating the HFs for classification when compared with several state-of-the-art algorithms.
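
    The kernel-alignment selection step can be stated compactly: for each class pair, score every basic kernel against the ideal label kernel yyᵀ and keep the best one. A minimal sketch, with random stand-in kernels and labels in {-1, +1}:

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Alignment <K1, K2>_F / (||K1||_F ||K2||_F) between kernel matrices."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def ideal_kernel(y):
    """Target kernel y y^T for one class pair, labels in {-1, +1}."""
    y = np.asarray(y, dtype=float)
    return np.outer(y, y)

def select_kernel(kernels, y):
    """Index of the basic kernel best aligned with the labels."""
    target = ideal_kernel(y)
    return int(np.argmax([kernel_alignment(K, target) for K in kernels]))

# Demo with random positive semi-definite stand-in kernels:
rng = np.random.default_rng(0)
basics = [(lambda A: A @ A.T)(rng.normal(size=(20, 5))) for _ in range(3)]
labels = rng.choice([-1, 1], size=20)
print("best kernel for this class pair:", select_kernel(basics, labels))
```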

  14. Integral dose conservation in radiotherapy

    International Nuclear Information System (INIS)

    Reese, Adam S.; Das, Shiva K.; Curle, Charles; Marks, Lawrence B.

    2009-01-01

    Treatment planners frequently modify beam arrangements and use IMRT to improve target dose coverage while satisfying dose constraints on normal tissues. The authors herein analyze the limitations of these strategies and quantitatively assess the extent to which dose can be redistributed within the patient volume. Specifically, the authors hypothesize that (1) the normalized integral dose is constant across concentric shells of normal tissue surrounding the target (normalized to the average integral shell dose), (2) the normalized integral shell dose is constant across plans with different numbers and orientations of beams, and (3) the normalized integral shell dose is constant across plans when reducing the dose to a critical structure. Using the images of seven patients previously irradiated for brain or prostate cancer and one idealized scenario, competing three-dimensional conformal and IMRT plans were generated using different beam configurations. Within a given plan and for competing plans with a constant mean target dose, the normalized integral doses within concentric 'shells' of surrounding normal tissue were quantitatively compared. Within each patient, the normalized integral dose to shells of normal tissue surrounding the target was relatively constant (1). Similarly, for each clinical scenario, the normalized integral dose for a given shell was also relatively constant regardless of the number and orientation of beams (2) or the degree of sparing of a critical structure (3). 3D and IMRT planning tools can redistribute, rather than eliminate, dose to the surrounding normal tissues (as intuitively known by planners). More specifically, dose cannot be moved between shells surrounding the target but only within a shell. This implies that there are limitations in the extent to which a critical structure can be spared, based on the location and geometry of the critical structure relative to the target.
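
    The shell construction behind hypothesis (1) can be computed from a dose grid and a target mask with a Euclidean distance transform. A sketch with synthetic data, assuming 5 mm shells and voxel spacing supplied by the caller:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shell_integral_doses(dose, target, voxel_mm, shell_mm=5.0, n_shells=6):
    """Integral dose in concentric shells of normal tissue around a target,
    normalized to the average integral shell dose (hypothesis 1 above)."""
    # Distance (mm) from every non-target voxel to the target surface.
    dist = distance_transform_edt(~target, sampling=voxel_mm)
    totals = np.array([
        dose[(dist > i * shell_mm) & (dist <= (i + 1) * shell_mm)].sum()
        for i in range(n_shells)
    ])
    return totals / totals.mean()

# Synthetic example: spherical target inside a random dose grid, 2 mm voxels.
shape = (60, 60, 60)
ii, jj, kk = np.indices(shape)
target = (ii - 30)**2 + (jj - 30)**2 + (kk - 30)**2 < 8**2
dose = np.random.default_rng(2).random(shape)
print(shell_integral_doses(dose, target, voxel_mm=(2.0, 2.0, 2.0)))
```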

  15. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them to dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian TrueBeam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction of the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. The kernels were used to calculate modulated dose distributions in six modulated fields, which were compared through the gamma index to their absolute doses measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output-factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm, where the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.

  16. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others, but required only a small fraction of the computation time of the Monte Carlo calculations. The scheme of the IFCK method for dealing with various source collimation geometries is also presented in this study.

  17. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from the Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and a WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks.

  18. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets.

    Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions.

    Highlights:
    • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions.
    • Performance can further be improved by using the recalculated kernel.
    • Top predictions can be validated by experimental data.
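
    At its core, regularized least squares on a fused kernel solves (K + λI)α = y for a combined kernel K. The sketch below uses a plain weighted sum of two positive semi-definite stand-in kernels; the paper's RLS-KF combines kernels nonlinearly, so this illustrates the idea rather than the published algorithm.

```python
import numpy as np

def rls_scores(kernels, weights, y, lam=1.0):
    """Regularized least squares on a fused kernel: solve
    (K + lam*I) alpha = y with K = sum_i w_i K_i, then score K alpha."""
    K = sum(w * Ki for w, Ki in zip(weights, kernels))
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), y)
    return K @ alpha

# Hypothetical drug kernels (chemical and target-sequence similarity):
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 40)); K_chem = A @ A.T / 40.0   # PSD stand-in
B = rng.normal(size=(40, 40)); K_seq = B @ B.T / 40.0    # PSD stand-in
y = rng.integers(0, 2, size=40).astype(float)            # known interactions
scores = rls_scores([K_chem, K_seq], [0.5, 0.5], y)
```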

  19. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    Science.gov (United States)

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

    Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres including contributions of both short- and long-lived contaminant radionuclides while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time post-removal from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the diameter increases from 10 to 50 μm during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the

  1. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary's Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the effective dose in the patient accurately. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system by comparison with Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for dose calculation.
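
    The core of the convolution/superposition approach is that dose is the energy released in the medium (TERMA) convolved with an energy-deposition kernel. The following sketch shows the idea in one dimension for a homogeneous water phantom; the attenuation coefficient, kernel shape and grid are illustrative assumptions, not data from the study.

    ```python
    import numpy as np

    # depth grid (cm) and a mono-exponential TERMA for a megavoltage beam
    z = np.linspace(0.0, 30.0, 301)       # 1 mm spacing
    mu = 0.05                             # effective attenuation (1/cm), assumed
    terma = np.exp(-mu * z)

    # toy forward-peaked depth kernel, normalised to conserve energy
    tail = np.exp(-np.arange(0.0, 5.0, z[1] - z[0]) / 0.5)
    kernel = tail / tail.sum()

    # dose(z) = sum over z' of TERMA(z') * kernel(z - z')
    dose = np.convolve(terma, kernel)[: z.size]
    ```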

  2. Dose point kernel simulation for monoenergetic electrons and radionuclides using Monte Carlo techniques.

    Science.gov (United States)

    Wu, J; Liu, Y L; Chang, S J; Chao, M M; Tsai, S Y; Huang, D E

    2012-11-01

    Monte Carlo (MC) simulation has been commonly used in the dose evaluation of radiation accidents and for medical purposes. The accuracy of simulated results is affected by the particle-tracking algorithm, the cross-section database, the random number generator and the statistical error. The differences among MC simulation software packages must be validated. This study simulated the dose point kernel (DPK) and the cellular S-values of monoenergetic electrons ranging from 0.01 to 2 MeV and of the radionuclides (90)Y, (177)Lu and (103m)Rh, using Fluktuierende Kaskade (FLUKA) and the Monte Carlo N-Particle Transport Code Version 5 (MCNP5). A 6-μm-radius cell model consisting of the cell surface, cytoplasm and cell nucleus was constructed for cellular S-value calculation. The mean absolute percentage errors (MAPEs) of the scaled DPKs, simulated using FLUKA and MCNP5, were 7.92, 9.64, 4.62, 3.71 and 3.84 % for 0.01, 0.1, 0.5, 1 and 2 MeV, respectively. For the three radionuclides, the MAPEs of the scaled DPKs were within 5 %. The maximum deviations of S(N←N), S(N←Cy) and S(N←CS) for electron energies above 10 keV were 6.63, 6.77 and 5.24 %, respectively. The deviations of the self-absorbed and cross-dose S-values of the three radionuclides were within 4 %. On the basis of these results, it was concluded that the simulation results are consistent between FLUKA and MCNP5, with a minor inconsistency in the low-energy range. The DPK and the cellular S-value should be used as quality assurance tools before MC simulation results are adopted as the gold standard.
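
    The MAPE figures quoted above are straightforward to reproduce once two scaled DPKs are tabulated on the same radial grid. A minimal sketch (the function name and the non-zero masking are this sketch's choices):

    ```python
    import numpy as np

    def mape(ref, test):
        """Mean absolute percentage error between two scaled DPKs,
        evaluated only where the reference kernel is non-zero."""
        ref, test = np.asarray(ref, float), np.asarray(test, float)
        mask = ref != 0
        return 100.0 * np.mean(np.abs((test[mask] - ref[mask]) / ref[mask]))

    # e.g. mape(dpk_fluka, dpk_mcnp5) -> a percentage like those quoted above
    ```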

  3. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    Science.gov (United States)

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years, and data are collected at multiple visits. Although kernel-based statistical learning methods have proven powerful for a wide range of disease prediction problems, these methods are only well studied for independent data, not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel that accounts for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use a different kernel for each data source, taking advantage of the distinctive features of each data modality, and then optimally combine the kernels across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and one on Alzheimer's disease (Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to predict mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419
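
    Two ingredients of this approach lend themselves to a compact illustration: a kernel that injects extra within-subject correlation, and a convex combination of per-source kernels. The sketch below is a toy version under assumed functional forms; the exponential time decay and the names longitudinal_kernel/combine_kernels are not taken from the paper.

    ```python
    import numpy as np

    def longitudinal_kernel(subject_ids, times, rho=0.5):
        """Toy random-effects kernel: measurements from the same subject
        receive extra correlation that decays with the time gap."""
        same = np.asarray(subject_ids)[:, None] == np.asarray(subject_ids)[None, :]
        dt = np.abs(np.asarray(times, float)[:, None] - np.asarray(times, float)[None, :])
        return same * np.exp(-rho * dt)

    def combine_kernels(kernels, weights):
        """Convex combination of per-source kernels, as in multiple
        kernel learning; weights are normalised to sum to one."""
        w = np.asarray(weights, float)
        w = w / w.sum()
        return sum(wi * Ki for wi, Ki in zip(w, kernels))
    ```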

  4. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations; Contribuicoes primaria e espalhada para dosimetria beta calculadas pelo dose point kernels empregando simulacoes pelo Metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Valente, Mauro [CONICET - Consejo Nacional de Investigaciones Cientificas y Tecnicas de La Republica Argentina (Conicet), Buenos Aires, AR (Brazil); Botta, Francesca; Pedroli, Guido [European Institute of Oncology, Milan (Italy). Medical Physics Department; Perez, Pedro, E-mail: valente@famaf.unc.edu.ar [Universidad Nacional de Cordoba, Cordoba (Argentina). Fac. de Matematica, Astronomia y Fisica (FaMAF)

    2012-07-01

    Beta emitters have proved to be appropriate for radioimmunotherapy. The dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and scattered radiation contributions. This work presents a method capable of calculating dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to separately compute the primary and scattering contributions to the total absorbed dose, following particle transport down to 1 keV or less. Preliminary tests with monoenergetic sources confirmed the suitability of the calculation method, and it was further applied to the characterization of different beta-minus radionuclides of nuclear medicine interest for radioimmunotherapy. (author)
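
    Separating primary from scattered dose in a Monte Carlo tally comes down to keeping two score arrays keyed by the particle's collision count. The toy radial transport below illustrates only that bookkeeping; the physics (half the energy deposited per collision, a single interaction coefficient) is a deliberate oversimplification, not the authors' subroutines.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def primary_scatter_tally(n=20_000, mu=0.2, nbins=60, r_max=30.0):
        """Radial dose tallies split into primary (first collision) and
        scatter (subsequent collisions) contributions."""
        edges = np.linspace(0.0, r_max, nbins + 1)
        primary, scatter = np.zeros(nbins), np.zeros(nbins)
        for _ in range(n):
            r, energy, collisions = 0.0, 1.0, 0
            while energy > 1e-3:
                r += rng.exponential(1.0 / mu)     # free flight, mean 1/mu
                if r >= r_max:
                    break
                deposit = 0.5 * energy             # crude collision model
                b = np.searchsorted(edges, r) - 1
                (primary if collisions == 0 else scatter)[b] += deposit
                energy -= deposit
                collisions += 1
        return edges, primary, scatter
    ```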

  5. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    Science.gov (United States)

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation and genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; high correspondence is suggestive of association. A composite kernel is constructed to measure the similarity in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach correctly controls the type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events, using data collected from the Grady Trauma Project. Within the kernel machine testing framework, our method allows for heterogeneity in effect sizes, nonlinear and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.
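
    In kernel machine testing, the association signal is typically summarised by a variance-component score statistic of the form Q = r'Kr, where r are null-model residuals and K is the (here composite) similarity kernel. A minimal sketch, assuming an intercept-only null model and a simple convex combination with weight rho (both assumptions for illustration, not the paper's exact construction):

    ```python
    import numpy as np

    def composite_kernel(K_geno, K_meth, rho=0.5):
        """Convex combination of genotype and methylation similarity kernels."""
        return rho * K_geno + (1.0 - rho) * K_meth

    def score_statistic(y, K):
        """Q = r' K r with residuals r from an intercept-only null model;
        in practice covariates are regressed out and Q is referred to a
        mixture-of-chi-squares null distribution."""
        r = np.asarray(y, float) - np.mean(y)
        return float(r @ K @ r)
    ```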

  6. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    Energy Technology Data Exchange (ETDEWEB)

    Bordes, Julien, E-mail: julien.bordes@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Incerti, Sébastien, E-mail: incerti@cenbg.in2p3.fr [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Lampe, Nathanael, E-mail: nathanael.lampe@gmail.com [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bardiès, Manuel, E-mail: manuel.bardies@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Bordage, Marie-Claude, E-mail: marie-claude.bordage@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France)

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross-section models for low-energy electron interactions in liquid water (“option 2” and its improved version, “option 4”). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as “Geant4-DNA-CPA100”. In this study, “Geant4-DNA-CPA100” was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a quantity that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models (“option 2” and “option 4”), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, two sets of DPKs simulated with “Geant4-DNA-CPA100” were compared – the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences were observed between 1 keV and 10 keV. Notably, the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with “Geant4-DNA-CPA100”. The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and “Geant4-DNA-CPA100” were

  7. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of multi-omics data. In order to provide a holistic understanding of human health and disease, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for the classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which represent subjects as nodes and relationships as edges, and kernel-based algorithms, which construct a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measures of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated on hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms in turn have the advantage of faster computation. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  8. A simplified computer code based on point kernel theory for calculating radiation dose in packages of radioactive material

    International Nuclear Information System (INIS)

    1986-03-01

    A study on radiation dose control for packages of radioactive waste from nuclear facilities, hospitals and industries, such as sources of Ra-226, Co-60, Ir-192 and Cs-137, is presented. The MAPA and MAPAM computer codes, based on point kernel theory, were developed to calculate doses for several source-shielding configurations, aiming to assure safe transport conditions for these sources. The codes were validated for point sources using the values provided by the NCRP for the thicknesses of lead and concrete shielding that limit the dose rate to 100 mrem/h at several distances from the source to the detector. Validation for non-point sources was carried out by experimentally measuring the radiation dose from packages developed by the Brazilian CNEN/S.P. for removing the sources. (M.C.K.) [pt]
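
    For a single point source, the point-kernel estimate behind a slab shield reduces to an inverse-square term, an exponential attenuation and a buildup factor. The sketch below uses an approximate specific gamma-ray constant for Co-60 and a deliberately crude linear buildup B = 1 + μt; real point-kernel codes interpolate tabulated buildup factors, and all names here are this sketch's own.

    ```python
    import math

    GAMMA_CO60 = 1.32   # R·m²/(h·Ci), approximate specific gamma-ray constant
    MU_PB = 0.66        # 1/cm, approx. linear attenuation of lead near 1.25 MeV

    def dose_rate(activity_ci, dist_m, shield_cm=0.0, mu=MU_PB):
        """Point-kernel dose rate (R/h): inverse square x buildup x attenuation."""
        mt = mu * shield_cm
        buildup = 1.0 + mt              # crude stand-in for tabulated factors
        return GAMMA_CO60 * activity_ci / dist_m**2 * buildup * math.exp(-mt)

    # e.g. dose_rate(1.0, 1.0, shield_cm=5.0) -> dose rate behind 5 cm of lead
    ```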

  9. A Novel Fractional-Order PID Controller for Integrated Pressurized Water Reactor Based on Wavelet Kernel Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-xin Zhao

    2014-01-01

    Full Text Available This paper presents a novel wavelet kernel neural network (WKNN) with a wavelet kernel function. It is applicable to online learning with adaptive parameters and is applied to parameter tuning of a fractional-order PID (FOPID) controller, which can handle the time-delay problem of complex control systems. Combining the wavelet function and the kernel function, a wavelet kernel function is adopted and its validity for neural networks is demonstrated. Compared to the conventional wavelet neural network, the most innovative characteristic of the WKNN is its rapid convergence and high precision in the parameter-updating process. Furthermore, an integrated pressurized water reactor (IPWR) system is established in RELAP5, and a novel control strategy combining the WKNN and fuzzy logic rules is proposed to shorten the controlling time and make sufficient use of experiential knowledge. Finally, experimental results verify that the proposed control strategy and controller are practical and reliable in an actual complicated system.
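
    A common translation-invariant wavelet kernel, and a plausible candidate for the kind of kernel meant here, is built from the Morlet-type mother wavelet h(u) = cos(1.75u)·exp(-u²/2); the specific form and the dilation parameter a below are assumptions of this sketch rather than details taken from the paper.

    ```python
    import numpy as np

    def wavelet_kernel(x, y, a=1.0):
        """k(x, y) = prod_i cos(1.75 u_i) * exp(-u_i^2 / 2), u = (x - y) / a."""
        u = (np.asarray(x, float) - np.asarray(y, float)) / a
        return float(np.prod(np.cos(1.75 * u) * np.exp(-(u ** 2) / 2.0)))
    ```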

  10. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the imaged object, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions: any gamma ray satisfying these angles is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations, and excellent results were obtained. Physical phantom experiments to confirm the method are planned. (author)
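
    The Klein-Nishina formula at the heart of such a model gives the unpolarised differential cross section for Compton scattering off a free electron at rest. A direct transcription (the constant and function names are, of course, this sketch's own):

    ```python
    import numpy as np

    R_E = 2.8179403262e-13  # classical electron radius (cm)

    def klein_nishina(theta, e_mev):
        """dSigma/dOmega (cm^2/sr) for a photon of energy e_mev scattered
        through polar angle theta."""
        alpha = e_mev / 0.511                                # E / (m_e c^2)
        ratio = 1.0 / (1.0 + alpha * (1.0 - np.cos(theta)))  # E' / E
        return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta)**2)
    ```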

  11. Free and open source software at CERN: integration of drivers in the Linux kernel

    International Nuclear Information System (INIS)

    Gonzalez Cobas, J.D.; Iglesias Gonsalvez, S.; Howard Lewis, J.; Serrano, J.; Vanga, M.; Cota, E.G.; Rubini, A.; Vaga, F.

    2012-01-01

    Most device drivers written for accelerator control systems suffer from a severe lack of portability due to the ad hoc nature of the code, often embodying intimate knowledge of the particular machine they are deployed on. In this paper we challenge this practice by arguing for the opposite approach: development in the open, which in our case translates into the integration of our code within the Linux kernel. We make our case by describing the upstream merge effort for the tsi148 driver, a critical (and complex) component of the control system. The encouraging results from this effort have led us to follow the same approach with two more ambitious projects, currently in the works: Linux support for the upcoming FMC boards and a new I/O subsystem. (authors)

  12. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    International Nuclear Information System (INIS)

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-01-01

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented in a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than for Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of the imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels depended on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  13. Fractional multilinear integrals with rough kernels on generalized weighted Morrey spaces

    Directory of Open Access Journals (Sweden)

    Akbulut Ali

    2016-01-01

    Full Text Available In this paper, we study the boundedness of fractional multilinear integral operators with rough kernels $T_{\Omega,\alpha}^{A_1,A_2,\ldots,A_k}$, a generalization of the higher-order commutator of the rough fractional integral, on the generalized weighted Morrey spaces $M_{p,\varphi}(w)$. We find sufficient conditions on the pair $(\varphi_1,\varphi_2)$ with $w\in A_{p,q}$ which ensure the boundedness of the operators $T_{\Omega,\alpha}^{A_1,A_2,\ldots,A_k}$ from $M_{p,\varphi_1}(w^p)$ to $M_{q,\varphi_2}(w^q)$ for $1<p<q<\infty$. In all cases the conditions for boundedness are given in terms of Zygmund-type integral inequalities on $(\varphi_1,\varphi_2)$ and $w$, which do not assume any monotonicity of $\varphi_1(x,r)$ and $\varphi_2(x,r)$ in $r$.

  14. SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams

    Energy Technology Data Exchange (ETDEWEB)

    Reinhart, M; Fast, M F; Nill, S; Oelfke, U [The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London (United Kingdom)

    2016-06-15

    Purpose: Mimicking state-of-the-art patient radiotherapy with high-precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)³ voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm² to 1 cm² in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths > 5 cm (<2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1 mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation
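
    The γ-index passing rate quoted in the results combines a distance-to-agreement and a dose-difference criterion. A minimal 1D global-gamma sketch follows (brute-force search over a reference profile sampled on the same grid; the names and the global normalisation to the reference maximum are this sketch's assumptions):

    ```python
    import numpy as np

    def gamma_index_1d(x, d_ref, d_eval, dta_mm=1.0, dd_frac=0.01):
        """Global 1D gamma index; d_ref and d_eval share the grid x (mm).
        Returns the passing rate in percent for gamma <= 1."""
        dd = dd_frac * d_ref.max()
        gammas = np.empty(x.size)
        for i, (xi, de) in enumerate(zip(x, d_eval)):
            g2 = ((x - xi) / dta_mm) ** 2 + ((d_ref - de) / dd) ** 2
            gammas[i] = np.sqrt(g2.min())
        return 100.0 * np.mean(gammas <= 1.0)
    ```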

  15. Drug Repositioning by Kernel-Based Integration of Molecular Structure, Molecular Activity, and Phenotype Data

    Science.gov (United States)

    Wang, Yongcui; Chen, Shilong; Deng, Naiyang; Wang, Yong

    2013-01-01

    Computational inference of novel therapeutic values for existing drugs, i.e., drug repositioning, offers great promise for faster, lower-risk drug development. Previous research has indicated that chemical structures, target proteins, and side-effects can provide rich information for drug similarity assessment and, further, disease similarity. However, each single data source is important in its own way, and data integration holds great promise for repositioning drugs more accurately. Here, we propose a new method for drug repositioning, PreDR (Predict Drug Repositioning), which integrates molecular structure, molecular activity, and phenotype data. Specifically, we characterize drugs by profiling them in chemical structure, target protein, and side-effect space, and define a kernel function to correlate drugs with diseases. Then we train a support vector machine (SVM) to computationally predict novel drug-disease interactions. PreDR is validated on a well-established drug-disease network with 1,933 interactions among 593 drugs and 313 diseases. By cross-validation, we find that chemical structure, drug target, and side-effect information are all predictive of drug-disease relationships, and more experimentally observed drug-disease interactions can be revealed by integrating these three data sources. Comparison with existing methods demonstrates that PreDR is competitive in both accuracy and coverage. Follow-up database searches and pathway analysis indicate that our new predictions are worthy of further experimental validation. In particular, several novel predictions are supported by clinical trials databases, which shows the significant promise of PreDR for future drug treatment. In conclusion, our new method, PreDR, can serve as a useful tool in drug discovery to efficiently identify novel drug-disease interactions. In addition, our heterogeneous data integration framework can be applied to other problems. PMID:24244318
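
    Training an SVM on a combination of similarity kernels can be done with a precomputed Gram matrix. The sketch below assumes scikit-learn is available; the equal weighting and the function name are illustrative, not PreDR's actual kernel construction.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def train_on_fused_kernel(K_struct, K_target, K_side, y,
                              w=(1/3, 1/3, 1/3), C=1.0):
        """Fit an SVM on a weighted sum of drug similarity kernels
        (chemical structure, target protein, side-effect profiles)."""
        K = w[0] * K_struct + w[1] * K_target + w[2] * K_side
        clf = SVC(kernel="precomputed", C=C).fit(K, y)
        return clf, K

    # scores for the training drugs: clf.decision_function(K)
    ```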

  16. Bellman-Krein formula for an integral equation with kernel of the type k(x,y) = k(x-y)|x-y|^(-α)

    International Nuclear Information System (INIS)

    Youssef, M.Y.A.; El Walik, S.A.

    1976-08-01

    With the aid of the Bellman-Krein formula for the resolvent, it is shown how to solve the integral equation with kernel of the type k(x,y) = k(x-y)|x-y|^(-α), 0 < α < n, i.e. a kernel with a weak singularity.

  17. Evaluation of gravitational curvatures of a tesseroid in spherical integral kernels

    Science.gov (United States)

    Deng, Xiao-Le; Shen, Wen-Bin

    2018-04-01

    Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculation of the gravitational curvatures (GC) has been proposed in geodesy in recent years. In view of future satellite missions, sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas for a tesseroid mass body is provided via spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogously to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that the very-near-area problem and the polar singularity problem exist for the GC east-east-radial, north-north-radial and radial-radial-radial components in the spatial domain, and that, compared to the other gravitational effects, the relative approximation errors of the GC components are larger due to the influence not only of the geocentric distance but also of the latitude. This study shows that the magnitude of each term of the nonzero GC functionals for a grid resolution of 15′ × 15′ at GOCE satellite height can reach about 10^{-16} m^{-1} s^{-2} for zero order, 10^{-24} or 10^{-23} m^{-1} s^{-2} for second order, 10^{-29} m^{-1} s^{-2} for fourth order and 10^{-35} or 10^{-34} m^{-1} s^{-2} for sixth order, respectively.

  18. Dynamic Proteomic Characteristics and Network Integration Revealing Key Proteins for Two Kernel Tissue Developments in Popcorn.

    Directory of Open Access Journals (Sweden)

    Yongbin Dong

    Full Text Available The formation and development of the maize kernel is a complex, dynamic physiological and biochemical process that involves the temporal and spatial expression of many proteins and the regulation of metabolic pathways. In this study, the protein profiles of the endosperm and pericarp at three important developmental stages were analyzed by isobaric tags for relative and absolute quantification (iTRAQ) labeling coupled with LC-MS/MS in the popcorn inbred line N04. Comparative quantitative proteomic analyses among developmental stages and between tissues were performed, and the protein networks were integrated. A total of 6,876 proteins were identified, of which 1,396 were nonredundant. Specific proteins and different expression patterns were observed across developmental stages and tissues. The functional annotation of the identified proteins revealed the importance of metabolic and cellular processes, and of binding and catalytic activities, for the development of the tissues. The whole, endosperm-specific and pericarp-specific protein networks integrated 125, 9 and 77 proteins, respectively, which were involved in 54 KEGG pathways and reflected complex metabolic interactions. Confirmation of the iTRAQ endosperm proteins by two-dimensional gel electrophoresis showed that 44.44% of the proteins were commonly found. However, the concordance between mRNA level and protein abundance varied across different proteins, stages, tissues and inbred lines, according to the gene cloning and expression analyses of four relevant proteins with important functions and different expression levels; western blots nevertheless showed the same expression tendency as iTRAQ for these four proteins. These results provide new insights into the developmental mechanisms of the endosperm and pericarp, and of grain formation in maize.

  19. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools for discovering candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works have proposed integrating multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic study has focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim to provide an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the 708 considered medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, as well as the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at the 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further biomedical investigation. Network integration is necessary to boost the performance of gene prioritization methods. Moreover, the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both
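
    Among the algorithms listed, random walk with restart has a particularly compact implementation: the walker repeatedly diffuses over the network and teleports back to the disease's seed genes with probability r. A minimal sketch, assuming a connected, non-negative weight matrix with no zero-degree nodes (names and defaults are this sketch's own):

    ```python
    import numpy as np

    def rwr(W, seeds, restart=0.3, tol=1e-8, max_iter=1000):
        """Random walk with restart on a functional gene network.

        W     : (n, n) non-negative adjacency/weight matrix
        seeds : indices of genes already associated with the disease
        Returns the stationary visiting probabilities used to rank genes."""
        P = W / W.sum(axis=0, keepdims=True)   # column-normalised transitions
        p0 = np.zeros(W.shape[0])
        p0[list(seeds)] = 1.0 / len(seeds)
        p = p0.copy()
        for _ in range(max_iter):
            p_new = (1.0 - restart) * P @ p + restart * p0
            if np.abs(p_new - p).sum() < tol:
                break
            p = p_new
        return p_new
    ```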

  20. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E.; Re, Matteo

    2014-01-01

    Objective In the context of “network medicine”, gene prioritization methods represent one of the main tools for discovering candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works have proposed integrating multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic study has focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim to provide an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. Materials and methods We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the 708 considered medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, as well as the recently proposed kernelized score functions. Results The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different “informativeness” embedded in different functional networks, outperforms unweighted integration at the 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further biomedical investigation. Conclusions Network integration is necessary to boost the performance of gene prioritization methods. Moreover, the methods based on kernelized score functions can further

  1. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy

    CERN Document Server

    Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A

    2011-01-01

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...

  2. Point kernel technique for calculating dose rates due to cobalt-60 hot particles

    International Nuclear Information System (INIS)

    Thornhill, M.J.; McCarthy, J.T.; Morrissette, R.R.; Leach, B.N.

    1989-01-01

    This paper reports on a computer code called BETA, developed by health physicists at the Vermont Yankee Nuclear Power Station, which accounts for the mass and size of hot particles of cobalt-60 and therefore corrects the Loevinger-based dose calculation for self-absorption.

  3. Analytical equations for CT dose profiles derived using a scatter kernel of Monte Carlo parentage with broad applicability to CT dosimetry problems

    International Nuclear Information System (INIS)

    Dixon, Robert L.; Boone, John M.

    2011-01-01

    Purpose: Knowledge of the complete axial dose profile f(z), including its long scatter tails, provides the most complete (and flexible) description of the accumulated dose in CT scanning. The CTDI paradigm (including CTDIvol) requires shift-invariance along z (identical dose profiles spaced at equal intervals) and is therefore inapplicable to many of the new and complex shift-variant scan protocols, e.g., high-dose perfusion studies using variable (or zero) pitch. In this work, a convolution-based beam model developed by Dixon et al. [Med. Phys. 32, 3712-3728 (2005)], updated with a scatter LSF kernel (or DSF) derived from a Monte Carlo simulation by Boone [Med. Phys. 36, 4547-4554 (2009)], is used to create an analytical equation for the axial dose profile f(z) in a cylindrical phantom. Using f(z), equations are derived which provide the analytical description of conventional (axial and helical) dose, demonstrating its physical underpinnings, and likewise for the peak axial dose f(0) appropriate to stationary phantom cone beam CT (SCBCT). The methodology can also be applied to dose calculations in shift-variant scan protocols. This paper is an extension of our recent work, Dixon and Boone [Med. Phys. 37, 2703-2718 (2010)], which dealt only with the properties of the peak dose f(0), its relationship to CTDI, and its appropriateness to SCBCT. Methods: The experimental beam profile data f(z) of Mori et al. [Med. Phys. 32, 1061-1069 (2005)] from a 256-channel prototype cone beam scanner for beam widths (apertures) ranging from a = 28 to 138 mm are used to corroborate the theoretical axial profiles in a 32 cm PMMA body phantom. Results: The theoretical functions f(z) closely matched the central axis experimental profile data for all apertures (a = 28-138 mm). Integration of f(z) likewise yields analytical equations for all the (CTDI-based) dosimetric quantities of conventional CT (including CTDI_L itself) in addition to the peak dose f(0) relevant to SCBCT
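
    The integration step mentioned in the results is simple to express numerically: CTDI over an integration length L is the integral of f(z) across ±L/2 divided by the nominal beam width a = nT. A small sketch under those definitions (the grid, names and trapezoidal rule are this sketch's choices):

    ```python
    import numpy as np

    def ctdi_from_profile(z_mm, f, beam_width_mm, length_mm=100.0):
        """CTDI_L = (1 / nT) * integral of f(z) over -L/2..L/2 (trapezoidal)."""
        mask = np.abs(z_mm) <= length_mm / 2.0
        zm, fm = z_mm[mask], f[mask]
        integral = 0.5 * np.sum((fm[1:] + fm[:-1]) * np.diff(zm))
        return integral / beam_width_mm
    ```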

  4. Convergence Analysis of Generalized Jacobi-Galerkin Methods for Second Kind Volterra Integral Equations with Weakly Singular Kernels

    Directory of Open Access Journals (Sweden)

    Haotao Cai

    2017-01-01

    Full Text Available We develop a generalized Jacobi-Galerkin method for second-kind Volterra integral equations with weakly singular kernels. In this method, we first introduce some known singular nonpolynomial functions into the approximation space of the conventional Jacobi-Galerkin method. Secondly, we use Gauss-Jacobi quadrature rules to approximate the integral term in the resulting equation so as to obtain high-order accuracy for the approximation. We then establish that the approximate equation has a unique solution and that the approximate solution attains the optimal convergence order. One numerical example is presented to demonstrate the effectiveness of the proposed method.
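
    For a feel for the problem class, the sketch below solves a model second-kind Volterra equation with the weakly singular kernel (t - s)^(-α) by product integration: x is taken piecewise constant on each panel so the singular weight can be integrated exactly. This is a simpler scheme than the paper's Jacobi-Galerkin method, and all names and parameters are this sketch's own.

    ```python
    import numpy as np

    def solve_volterra_singular(g, alpha=0.5, T=1.0, n=200):
        """Solve x(t) = g(t) - integral_0^t (t - s)^(-alpha) x(s) ds,
        0 < alpha < 1, by piecewise-constant product integration: on panel
        [t_{j-1}, t_j] the weight integral of (t_i - s)^(-alpha) is exact."""
        t = np.linspace(0.0, T, n + 1)
        x = np.empty(n + 1)
        x[0] = g(0.0)
        p = 1.0 - alpha
        for i in range(1, n + 1):
            w = ((t[i] - t[:i]) ** p - (t[i] - t[1:i + 1]) ** p) / p
            x[i] = (g(t[i]) - np.dot(w[:-1], x[1:i])) / (1.0 + w[-1])
        return t, x

    # e.g. t, x = solve_volterra_singular(lambda s: 1.0, alpha=0.5)
    ```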

  5. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Science.gov (United States)

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

    Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated using the mixed MC code PENELOPE (v. 2006) with point kernels calculated using the condensed-history MC codes ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500 keV and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV; energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta emitter 90Y, the medium-energy beta emitter 177Lu, and the low-energy electron emitter 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The mean differences between the point kernels, compared to PENELOPE, were 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, omitting the S-values for 10-keV electrons with the activity distributed on the cell surface. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from

  6. Hydrogen production from palm kernel shell via integrated catalytic adsorption (ICA) steam gasification

    International Nuclear Information System (INIS)

    Khan, Zakir; Yusup, Suzana; Ahmad, Murni Melati; Chin, Bridgid Lai Fui

    2014-01-01

    Highlights: • The paper presents integrated catalytic adsorption (ICA) steam gasification for H2 yield. • Effects of adsorbent-to-biomass ratio, biomass particle size and fluidization velocity on H2 yield are examined. • The present study produces a higher H2 yield than those reported in the literature. • ICA enhances H2 yield compared to independent catalytic and CO2 adsorption gasification systems. - Abstract: The present study investigates the integrated catalytic adsorption (ICA) steam gasification of palm kernel shell for hydrogen production in a pilot scale atmospheric fluidized bed gasifier. The biomass steam gasification is performed in the presence of an adsorbent and a catalyst in the system. The effects of adsorbent to biomass (A/B) ratio (0.5-1.5 wt/wt), fluidization velocity (0.15-0.26 m/s) and biomass particle size (0.355-2.0 mm) are studied at a temperature of 675 °C, a steam to biomass (S/B) ratio of 2.0 (wt/wt) and a biomass to catalyst ratio of 0.1 (wt/wt). Hydrogen composition and yield, total gas yield, and the lower product gas heating value (LHVgas) increase with increasing A/B ratio, while particle size has no significant effect on hydrogen composition and yield, total gas and char yield, or gasification and carbon conversion efficiency. However, gas heating values increased with increasing biomass particle size, which is due to the high methane content in the product gas. Meanwhile, a medium fluidization velocity of 0.21 m/s favoured hydrogen composition and yield. The results showed that the maximum hydrogen composition and yield of 84.62 vol% and 91.11 g H2/kg biomass are observed at an A/B ratio of 1.5, S/B ratio of 2.0, catalyst to biomass ratio of 0.1 and temperature of 675 °C. The product gas heating values are observed in the range of 10.92-17.02 MJ/Nm³. Gasification and carbon conversion efficiencies are observed in the ranges of 25.66-42.95% and 20.61-41.95%, respectively. These lower

  7. Absorbed dose evaluation of Auger electron-emitting radionuclides: impact of input decay spectra on dose point kernels and S-values.

    Science.gov (United States)

    Falzone, Nadia; Lee, Boon Q; Fernández-Varea, José M; Kartsonaki, Christiana; Stuchbery, Andrew E; Kibédi, Tibor; Vallis, Katherine A

    2017-03-21

    The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs - the radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and a condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The yields of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis differed considerably from MIRD-RADTABS in the first few hundred nanometres from a point source, where most of the Auger electrons are stopped. S-values were, however, not significantly impacted, as the differences in DPKs at sub-micrometre dimensions quickly diminished at larger dimensions. Overestimation of the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE-emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.

  8. Time-dependent integral transport equation kernels, leakage rates and collision rates for plane and spherical geometry

    International Nuclear Information System (INIS)

    Henderson, D.L.

    1987-01-01

    Time-dependent integral transport equation flux and current kernels for plane and spherical geometry are derived for homogeneous media. Using the multiple collision formalism, isotropic sources that are delta distributions in time are considered for four different problems. The plane geometry flux kernel is applied to a uniformly distributed source within an infinite medium and to a surface source in a semi-infinite medium. The spherical flux kernel is applied to a point source in an infinite medium and to a point source at the origin of a finite sphere. The time-dependent first-flight leakage rates corresponding to the existing steady state first-flight escape probabilities are computed by the Laplace transform technique assuming a delta distribution source in time. The case of a constant source emitting neutrons over a time interval, Δt, for a spatially uniform source is obtained for a slab and a sphere. Time-dependent first-flight leakage rates are also determined for the general two region spherical medium problem for isotropic sources with a delta distribution in time uniformly distributed throughout both the inner and outer regions. The time-dependent collision rates due to the uncollided neutrons are computed for a slab and a sphere using the time-dependent first-flight leakage rates and the time-dependent continuity equation. The case of a constant source emitting neutrons over a time interval, Δt, is also considered

  9. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy.

    Science.gov (United States)

    Botta, F; Mairani, A; Battistoni, G; Cremonesi, M; Di Dia, A; Fassò, A; Ferrari, A; Ferrari, M; Paganelli, G; Pedroli, G; Valente, M

    2011-07-01

    The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The

  10. Analysis of axially symmetric wire antennas by the use of exact kernel of electric field integral equation

    Directory of Open Access Journals (Sweden)

    Krneta Aleksandra J.

    2016-01-01

    Full Text Available The paper presents a new method for the analysis of wire antennas with axial symmetry. Truncated cones have been applied to precisely model the antenna geometry, while the exact kernel of the electric field integral equation has been used for computation. The accuracy and efficiency of the method have been further increased by the use of higher-order basis functions for the current expansion, and by selecting integration methods based on singularity cancelation techniques for the calculation of the potential and impedance integrals. The method has been applied to the analysis of a typical dipole antenna, a thick dipole antenna and a coaxial line. The obtained results verify the high accuracy of the method. [Project of the Ministry of Science of the Republic of Serbia, no. TR-32005]

  11. Shielding calculations and collective dose estimations with the point-kernel code VISIPLAN® for the example of the project ZENT

    International Nuclear Information System (INIS)

    Boehlke, S.; Niegoth, H.

    2012-01-01

    At the Leibstadt nuclear power plant (KKL), large components will be dismantled during the next year and stored for final disposal in the interim storage facility ZENT at the NPP site. Before the construction of ZENT, appropriate estimates of the local dose rate inside and outside the building and of the collective dose for normal operation have to be performed. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for workplace requirements; the installation of control and monitoring areas will depend on these calculations. For the determination of the shielding potential of concrete walls and steel doors under the defined boundary conditions, point-kernel codes like MicroShield® are used. Complex problems cannot be modeled with this code. Therefore, the point-kernel code VISIPLAN® was developed for the determination of local dose distribution functions in 3D models. The possibility of entering motion sequences allows an optimization of collective dose estimates for the operational phases of a nuclear facility.

  12. Radiation transport simulation in gamma irradiator systems using the EGS4 Monte Carlo code and dose mapping calculations based on the point kernel technique

    International Nuclear Information System (INIS)

    Raisali, G.R.

    1992-01-01

    A series of computer codes based on the point kernel technique and on the Monte Carlo method have been developed. These codes perform radiation transport calculations for irradiator systems having cartesian, cylindrical and mixed geometries. For the Monte Carlo calculations, the computer code EGS4 has been applied to a radiation processing type problem, accompanied by a specific user code. The set of codes developed includes GCELLS, DOSMAPM and DOSMAPC2, which simulate the radiation transport in gamma irradiator systems having cylindrical, cartesian, and mixed geometries, respectively. The program DOSMAP3, based on the point kernel technique, has also been developed for dose rate mapping calculations in carrier-type gamma irradiators. Another computer program, CYLDETM, serving as a user code for EGS4, has been developed to simulate dose variations near the interface of heterogeneous media in gamma irradiator systems. In addition, a system of computer codes, PRODMIX, has been developed which calculates the absorbed dose in products with different densities. Validation studies of the calculated results versus experimental dosimetry have been performed and good agreement has been obtained.

  13. Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.

    Science.gov (United States)

    Deng, Shangkun; Sakurai, Akito

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
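
    The relative strength index used for the DE-generated signals has a standard closed form, RSI = 100 - 100/(1 + average gain / average loss) over a lookback window. The sketch below uses a simple moving average rather than Wilder's exponential smoothing, which the paper may or may not have used; all names are this sketch's own.

    ```python
    import numpy as np

    def rsi(prices, period=14):
        """Relative strength index of a closing-price series (simple-average
        variant); returns one value per fully covered window."""
        delta = np.diff(np.asarray(prices, float))
        gains = np.where(delta > 0, delta, 0.0)
        losses = np.where(delta < 0, -delta, 0.0)
        kernel = np.ones(period) / period
        avg_gain = np.convolve(gains, kernel, mode="valid")
        avg_loss = np.convolve(losses, kernel, mode="valid")
        return 100.0 - 100.0 / (1.0 + avg_gain / np.maximum(avg_loss, 1e-12))
    ```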

  14. Integrated Model of Multiple Kernel Learning and Differential Evolution for EUR/USD Trading

    Directory of Open Access Journals (Sweden)

    Shangkun Deng

    2014-01-01

    Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.

  15. Relationships of clinical protocols and reconstruction kernels with image quality and radiation dose in a 128-slice CT scanner: Study with an anthropomorphic and water phantom

    International Nuclear Information System (INIS)

    Paul, Jijo; Krauss, B.; Banckwitz, R.; Maentele, W.; Bauer, R.W.; Vogl, T.J.

    2012-01-01

    Research highlights: ► Clinical protocol, reconstruction kernel, reconstructed slice thickness, and phantom diameter or the density of the material it contains directly affect the image quality of DSCT. ► The dual-energy protocol shows the lowest DLP of all protocols examined. ► Dual-energy fused images show excellent image quality, and their noise is the same as that of single-source or high-pitch mode protocol images. ► Advanced CT technology improves image quality and considerably reduces radiation dose. ► An important finding is the comparatively higher DLP of the dual-source high-pitch protocol compared with other single- or dual-energy protocols. - Abstract: Purpose: The aim of this study was to explore the relationship of scanning parameters (clinical protocols), reconstruction kernels and slice thickness with image quality and radiation dose in a DSCT. Materials and methods: The chest of an anthropomorphic phantom was scanned on a DSCT scanner (Siemens Somatom Definition Flash) using different clinical protocols, including single- and dual-energy modes. Four scan protocols were investigated: (1) single-source 120 kV, 110 mAs; (2) single-source 100 kV, 180 mAs; (3) high-pitch 120 kV, 130 mAs; and (4) dual-energy 100/Sn140 kV, eff. mAs 89/76. The automatic exposure control was switched off for all scans and the selected CTDIvol was between 7.12 and 7.37 mGy. The raw data were reconstructed using the reconstruction kernels B31f, B80f and B70f, with slice thicknesses of 1.0 mm and 5.0 mm. Finally, the same parameters and procedures were used for scanning the water phantom. The Friedman test and the Wilcoxon matched-pairs test were used for statistical analysis. Results: The DLP based on the given CTDIvol values showed significantly lower exposure for protocol 4 when compared with protocol 1 (difference 5.18%), protocol 2 (4.51%) and protocol 3 (8.81%). The highest change in Hounsfield units was observed with dual

  16. Determination of the integral dose in CT of the neurocranium

    Energy Technology Data Exchange (ETDEWEB)

    Rahim, H.; Mandl, H.; Hofmann, W.; Grobovschek, M.

    1985-12-01

    The amount of exposure of the cranium is calculated on the basis of the measured dose distribution in the craniocaudal direction and in the axial planes of the Alderson phantom. The integral dose to the cranium and the local dose at sensitive organs are used as measures of radiation exposure. (orig.).

  17. Neutron shielding point kernel integral calculation code for personal computer: PKN-pc

    International Nuclear Information System (INIS)

    Kotegawa, Hiroshi; Sakamoto, Yukio; Nakane, Yoshihiro; Tomita, Ken-ichi; Kurosawa, Naohiro.

    1994-07-01

    A personal computer version of the PKN code, PKN-pc, has been developed to calculate neutron and secondary gamma-ray 1-cm depth dose equivalents in water, ordinary concrete and iron for neutron sources. The PKN code can quickly and easily calculate dose equivalents in multi-layer three-dimensional systems, described by two-dimensional surfaces, for monoenergetic neutron sources from 0.01 to 14.9 MeV and for 252Cf fission and 241Am-Be neutron sources. In addition to these features, PKN-pc supports interactive input and readily produces graphical system configurations and graphical results. (author)

  18. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  19. Singular and Marcinkiewicz integrals with H1 kernels on product spaces

    International Nuclear Information System (INIS)

    Chen, Jiecheng; Wang, Meng; Fan, Dashan

    2008-08-01

    In this paper we shall prove that for $\Omega \in H^1(S^{n-1} \times S^{m-1})$ satisfying the cancellation condition $\int_{S^{n-1}} \Omega(x', y')\,dx' = \int_{S^{m-1}} \Omega(x', y')\,dy' = 0$ for all $(x', y') \in S^{n-1} \times S^{m-1}$, the Calderon-Zygmund singular integral operator $T_\Omega$, its maximal operator $T_\Omega^*$ and the Marcinkiewicz integral operator $\mu_\Omega$ are bounded on $L^p(\mathbb{R}^n \times \mathbb{R}^m)$ for $1 < p < \infty$. (author)

  20. Determination of pyrolysis characteristics and kinetics of palm kernel shell using TGA–FTIR and model-free integral methods

    International Nuclear Information System (INIS)

    Ma, Zhongqing; Chen, Dengyu; Gu, Jie; Bao, Binfu; Zhang, Qisheng

    2015-01-01

    Highlights: • Model-free integral kinetics methods and TGA–FTIR analysis were applied to the pyrolysis of PKS. • The pyrolysis mechanism of PKS was elaborated. • Thermal stability was established: lignin > cellulose > xylan. • Detailed compositions of the volatiles from PKS pyrolysis were determined. • The interaction of the three biomass components led to the fluctuation of activation energy in PKS pyrolysis. - Abstract: Palm kernel shell (PKS) from palm oil production is a potential biomass source for bio-energy production. A fundamental understanding of PKS pyrolysis behavior and kinetics is essential to its efficient thermochemical conversion. The thermal degradation profile in derivative thermogravimetry (DTG) analysis showed two significant mass-loss peaks, mainly related to the decomposition of hemicellulose and cellulose, respectively. This characteristic distinguishes PKS from other biomass types, which present just one peak (e.g. corn stover) or one peak accompanied by an extra "shoulder" peak (e.g. wheat straw). According to the Fourier transform infrared spectrometry (FTIR) analysis, the prominent volatile components generated by the pyrolysis of PKS were CO2 (2400–2250 cm−1 and 586–726 cm−1), aldehydes, ketones and organic acids (1900–1650 cm−1), and alkanes and phenols (1475–1000 cm−1). The dependence of the activation energy on the conversion rate was estimated by two model-free integral methods, the Flynn–Wall–Ozawa (FWO) and Kissinger–Akahira–Sunose (KAS) methods, at different heating rates. The fluctuation of the activation energy can be interpreted as a result of interactive reactions related to cellulose, hemicellulose and lignin degradation occurring in the pyrolysis process. Based on the TGA–FTIR analysis and the model-free integral kinetics methods, the pyrolysis mechanism of PKS is elaborated in this paper
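
    The FWO method reduces to a linear regression at each fixed conversion: ln(beta) plotted against 1/T has slope -1.052*Ea/R. A minimal sketch, with invented temperatures standing in for the TGA readings at one conversion level:

    ```python
    import numpy as np

    R = 8.314                                   # gas constant, J/(mol K)
    beta = np.array([5.0, 10.0, 20.0, 40.0])    # heating rates, K/min
    T = np.array([560.0, 572.0, 585.0, 599.0])  # T at alpha = 0.5 (illustrative)

    # FWO (with Doyle's approximation): ln(beta) = const - 1.052*Ea/(R*T)
    slope, intercept = np.polyfit(1.0 / T, np.log(beta), 1)
    Ea = -slope * R / 1.052
    print(f"Ea ~ {Ea/1000:.1f} kJ/mol")
    ```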

  1. iTRAQ-Based Proteomics Analysis and Network Integration for Kernel Tissue Development in Maize

    Science.gov (United States)

    Dong, Yongbin; Wang, Qilei; Du, Chunguang; Xiong, Wenwei; Li, Xinyu; Zhu, Sailan; Li, Yuling

    2017-01-01

    Grain weight is one of the most important yield components and develops as a complex structure comprising two major compartments (endosperm and pericarp) in maize (Zea mays L.); however, very little is known concerning the coordinated accumulation of the numerous proteins involved. Herein, we used an isobaric tags for relative and absolute quantitation (iTRAQ)-based comparative proteomic method to analyze the dynamic proteomic characteristics of the endosperm and pericarp during grain development. In total, 9539 proteins were identified for both compartments at four developmental stages, among which 1401 proteins were non-redundant, 232 proteins were specific to the pericarp and 153 proteins were specific to the endosperm. A functional annotation of the identified proteins revealed the importance of metabolic and cellular processes, and of binding and catalytic activities, for tissue development. Three and 76 proteins involved in 49 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were integrated for the specific endosperm and pericarp proteins, respectively, reflecting their complex metabolic interactions. In addition, four proteins with important functions and different expression levels were chosen for gene cloning and expression analysis. Differing concordance between mRNA levels and protein abundance was observed across proteins, stages and tissues, as in previous research. These results provide useful information for understanding the developmental mechanisms of grain development in maize. PMID:28837076

  2. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    Science.gov (United States)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of dose distributions in water is evaluated in MCNP6.1 with both the regular condensed-history algorithm (the "detailed electron energy-loss straggling logic") and the newly proposed electron transport algorithm (the "single event algorithm"). Dose point kernels (DPK) are calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc and Penelope, is performed. When the detailed electron energy-loss straggling logic is used with the default settings (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (the ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish completely, showing a possible transport treatment for energies below about a hundred keV, in accordance with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, the results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and thus the current unreliability of the method, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those

  3. Integral dose and evaluation of irradiated tissue volume

    International Nuclear Information System (INIS)

    Sivachenko, T.P.; Kalina, V.K.; Belous, A.K.; Gaevskij, V.I.

    1984-01-01

    Two parameters with the potential to improve radiotherapy planning are considered. One of these is the integral dose. The efficiency of using special tables for integral dose estimation is noted; these tables were developed by the Kiev Physician Improvement Institute and the Cybernetics Institute of the Ukrainian SSR Academy of Sciences. The meaning of the term "irradiated tissue volume" is specified, and a method for calculating the effective mass of irradiated tissue is considered. Taking the irradiated mass into account makes it possible to evaluate tolerance doses with higher accuracy
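
    Integral dose is just the total energy imparted, which on a voxel grid is the sum of dose times mass per voxel, E = sum(D_i * rho_i * V_i). A minimal sketch with synthetic grids and an assumed 2 mm voxel:

    ```python
    import numpy as np

    dose = np.full((50, 50, 50), 2.0)     # Gy per voxel (J/kg), toy plan
    rho = np.full((50, 50, 50), 1.0e3)    # kg/m^3, water-equivalent tissue
    voxel_vol = (2e-3) ** 3               # m^3, assumed 2 mm cubic voxel

    mass = rho * voxel_vol                # irradiated mass per voxel
    integral_dose = np.sum(dose * mass)   # total imparted energy, joules
    print(f"integral dose = {integral_dose:.3f} J over {mass.sum():.3f} kg")
    ```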

  4. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    Science.gov (United States)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In previous decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely known tools; they show possible weather conditions under the pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To bridge the gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators and weather typing. The first two categories describe the relationships between weather factors and precipitation based on, respectively, deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. In weather typing, the method clusters weather factors, which are high-dimensional and continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, with a weather generator based on kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step, with three sub-steps in the calibration step. First, weather factors, such as pressures, humidities and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrixes and the
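
    The two core sub-steps lend themselves to a compact sketch: cluster large-scale predictors into weather types with K-means, then sample precipitation from a per-type kernel density estimate. All data below are synthetic, and the Markov-chain transitions between types are omitted:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    factors = rng.normal(size=(2000, 3))        # pressure, humidity, wind (toy)
    precip = rng.gamma(2.0, 2.0, 2000)          # mm/day, toy observations

    # calibration: four weather types, one precipitation KDE per type
    km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(factors)
    kdes = {k: gaussian_kde(precip[km.labels_ == k]) for k in range(4)}

    # synthesis: classify a new day's predictors, then sample precipitation
    new_day = rng.normal(size=(1, 3))
    wtype = km.predict(new_day)[0]
    print("weather type:", wtype, "sampled precip:", kdes[wtype].resample(1)[0, 0])
    ```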

  5. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a database of buildup factor data, which contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV for thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data with Capo's formula, compatible with QAD-CGGP, has been attempted
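
    Point-kernel codes store buildup data as compact fits. In the spirit of the Capo polynomial form, a minimal cubic least-squares sketch; the B values below are placeholders, not the MCNP-4A results:

    ```python
    import numpy as np

    mfp = np.array([1, 2, 4, 8, 15, 25, 40], dtype=float)    # mean free paths
    B = np.array([2.1, 3.6, 7.4, 19.0, 52.0, 120.0, 260.0])  # assumed buildup data

    coeffs = np.polyfit(mfp, B, 3)       # B(x) ~ c3*x^3 + c2*x^2 + c1*x + c0
    print("fit coefficients:", coeffs)
    print("B at 10 mfp ~", np.polyval(coeffs, 10.0))
    ```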

  6. Integration of models for the Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Napier, B.A.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since 1944. The objective of phase 1 of the project was to demonstrate through calculations that adequate models and support data exist or could be developed to allow realistic estimations of doses to individuals from releases of radionuclides to the environment that occurred as long as 45 years ago. Much of the data used in phase 1 was preliminary; therefore, the doses calculated must be considered preliminary approximations. This paper describes the integration of various models that was implemented for initial computer calculations. Models were required for estimating the quantity of radioactive material released, for evaluating its transport through the environment, for estimating human exposure, and for evaluating resultant doses

  7. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    Science.gov (United States)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control-system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
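
    As a simplified illustration (the D/C dual-kernel form is richer), the standard single-subsystem convolution over one sample period, x(T) = integral over [0, T] of expm(A(T - tau)) B u(tau) d tau, can be evaluated by quadrature; A, B, u and T here are all invented:

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import trapezoid

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    u = lambda t: np.array([[np.sin(3.0 * t)]])   # time-varying command
    T, n = 0.1, 200                               # sample period, grid points

    ts = np.linspace(0.0, T, n + 1)
    vals = np.stack([expm(A * (T - t)) @ B @ u(t) for t in ts])
    x_T = trapezoid(vals, ts, axis=0)             # trapezoidal quadrature
    print(x_T)
    ```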

  8. Methods of assessing total doses integrated across pathways

    International Nuclear Information System (INIS)

    Grzechnik, M.; Camplin, W.; Clyne, F.; Allott, R.; Webbe-Wood, D.

    2006-01-01

    Calculated doses for comparison with limits resulting from discharges into the environment should be summed across all relevant pathways and food groups to ensure adequate protection. The current methodology for assessments used in the Radioactivity in Food and the Environment (R.I.F.E.) reports separates doses from pathways related to liquid discharges of radioactivity to the environment from those due to gaseous releases. Surveys of local inhabitants' food consumption and occupancy rates are conducted in the vicinity of nuclear sites. Information has been recorded in an integrated way, such that the data for each individual are recorded for all pathways of interest. These can include consumption of foods such as fish, crustaceans, molluscs, fruit and vegetables, milk and meats. Occupancy times over beach sediments and time spent in close proximity to the site are also recorded, for inclusion of the external and inhalation radiation dose pathways. The integrated habits survey data may be combined with monitored environmental radionuclide concentrations to calculate total dose. The criteria for successful adoption of a method for this calculation were: Reproducibility (can others easily use the approach and reassess doses?); Rigour and realism (how good is the match with reality?); Transparency (a measure of the ease with which others can understand how the calculations are performed and what they mean); Homogeneity (is the group receiving the dose relatively homogeneous with respect to age, diet and those aspects that affect the dose received?). Five methods of total dose calculation were compared and ranked according to their suitability. Each method was labelled (A to E) and given a short, relevant name for identification. The methods are described below. A) Individual: doses to individuals are calculated and critical group selection depends on the dose received. B) Individual Plus: as in A, but consumption and occupancy rates for high doses are used to derive rates for application in

  9. Mercure IV code application to the external dose computation from low and medium level wastes

    International Nuclear Information System (INIS)

    Tomassini, T.

    1985-01-01

    In the present work the external dose from low- and medium-level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method to integrate multigroup line-of-sight attenuation kernels

  10. Shielding calculations and collective dose estimations with the point-kernel code VISIPLAN® for the example of the project ZENT; Abschirmberechnungen und Kollektivdosisabschaetzungen mit dem Punkt-Kern-Code VISIPLAN® am Beispiel des Projektes ZENT

    Energy Technology Data Exchange (ETDEWEB)

    Boehlke, S.; Niegoth, H. [STEAG Energy Services GmbH, Essen (Germany). Nuclear Technologies; Stalder, I. [Kernkraftwerk Leibstadt AG, Leibstadt (Switzerland)

    2012-11-01

    At the Leibstadt nuclear power plant (KKL), large components will be dismantled during the coming years and stored for final disposal in the interim storage facility ZENT at the NPP site. Before ZENT is constructed, appropriate estimates of the local dose rate inside and outside the building and of the collective dose during normal operation have to be made. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for workplace requirements. The designation of controlled and supervised areas will depend on these calculations. Point-kernel codes such as MICROSHIELD® are used to determine the shielding capability of concrete walls and steel doors under the defined boundary conditions. Complex problems cannot be modeled with this code. The point-kernel code VISIPLAN® was therefore developed to determine local dose distribution functions in 3D models. Its support for motion-sequence input allows collective dose estimates for the operational phases of a nuclear facility to be optimized.
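
    A collective dose estimate from a motion sequence reduces to summing dose rate times residence time along a worker trajectory through the local dose-rate field. A toy sketch with an inverse-square field around two assumed stored components (all positions and strengths invented):

    ```python
    import numpy as np

    sources = np.array([[0.0, 0.0, 1.0], [4.0, 2.0, 1.0]])   # positions, m
    strength = np.array([50.0, 20.0])    # microSv/h at 1 m (assumed values)

    def dose_rate(p):
        d2 = np.sum((sources - p) ** 2, axis=1)
        return np.sum(strength / np.maximum(d2, 0.25))        # cap near-field

    # motion sequence: (position, residence time in hours)
    path = [((2.0, 1.0, 1.5), 0.5), ((3.5, 2.0, 1.5), 0.25)]
    individual = sum(dose_rate(np.array(p)) * t for p, t in path)
    print(f"dose per pass: {individual:.1f} microSv; "
          f"crew of 6: {6 * individual:.1f} microSv collective")
    ```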

  11. Developing a flexible and verifiable integrated dose assessment capability

    International Nuclear Information System (INIS)

    Parzyck, D.C.; Rhea, T.A.; Copenhaver, E.D.; Bogard, J.S.

    1987-01-01

    A flexible yet verifiable system of computing and recording personnel doses is needed. Recent directions in statutes establish the trend of combining internal and external doses. We are developing a Health Physics Information Management System (HPIMS) that will centralize dosimetry calculations and data storage; integrate health physics records with other health-related disciplines, such as industrial hygiene, medicine, and safety; provide a more auditable system with published algorithms and clearly defined flowcharts of system operation; readily facilitate future changes dictated by new regulations, new dosimetric models, and new systems of units; and address ad-hoc inquiries regarding worker/workplace interactions, including potential synergisms with non-radiation exposures. The system is modular and provides a high degree of isolation from low-level detail, allowing flexibility for changes without adversely affecting other parts of the system. 10 refs., 3 figs

  12. Total Dose Effects on Bipolar Integrated Circuits at Low Temperature

    Science.gov (United States)

    Johnston, A. H.; Swimm, R. T.; Thorbourn, D. O.

    2012-01-01

    Total dose damage in bipolar integrated circuits is investigated at low temperature, along with the temperature dependence of the electrical parameters of internal transistors. Bandgap narrowing causes the gain of npn transistors to decrease far more at low temperature compared to pnp transistors, due to the large difference in emitter doping concentration. When irradiations are done at temperatures of -140 deg C, no damage occurs until devices are warmed to temperatures above -50 deg C. After warm-up, subsequent cooling shows that damage is then present at low temperature. This can be explained by the very strong temperature dependence of dispersive transport in the continuous-time-random-walk model for hole transport. For linear integrated circuits, low temperature operation is affected by the strong temperature dependence of npn transistors along with the higher sensitivity of lateral and substrate pnp transistors to radiation damage.

  13. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  14. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and, furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
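
    A related approximation (not the paper's AKCL) can be sketched with a Nyström low-rank feature map followed by ordinary k-means: the kernel is applied in a sampled subspace, so the full n-by-n kernel matrix is never formed:

    ```python
    import numpy as np
    from sklearn.kernel_approximation import Nystroem
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (500, 10)), rng.normal(3, 1, (500, 10))])

    # low-rank RBF feature map from 100 sampled landmarks
    feat = Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0)
    Z = feat.fit_transform(X)                 # n x 100 instead of n x n kernel
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
    print(np.bincount(labels))
    ```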

  15. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
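
    A sketch of the KECA-style selection step described in this literature: eigenpairs of an RBF kernel matrix are ranked by their Renyi entropy contribution, proportional to lambda_i * (1^T e_i)^2, rather than by variance. Data and kernel width are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * 1.0 ** 2))              # RBF kernel, sigma = 1 (assumed)

    lam, E = np.linalg.eigh(K)                    # eigenvalues ascending
    entropy = lam * (E.sum(axis=0) ** 2)          # entropy contribution per pair
    idx = np.argsort(entropy)[::-1][:2]           # keep the two most "entropic"
    features = E[:, idx] * np.sqrt(np.maximum(lam[idx], 0))
    print(features.shape)
    ```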

  16. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  17. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
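
    One common form of a mixture-density kernel (a simplification; the paper's Bayesian ensemble version is richer) scores two points by the agreement of their mixture-component posteriors, K(x, y) = sum_k P(k|x) P(k|y), which is symmetric and positive semidefinite by construction:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    P = gm.predict_proba(X)          # n x k posterior responsibilities
    K = P @ P.T                      # PSD kernel matrix (P times P-transpose)
    print(K.shape, K[0, 0], K[0, -1])
    ```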

  18. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  19. Analysis of Electronic Densities and Integrated Doses in Multiform Glioblastomas Stereotactic Radiotherapy

    International Nuclear Information System (INIS)

    Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.; Ballesteros-Zebadua, P.; Larraga-Gutierrez, J. M.

    2008-01-01

    Integrated dose is the total energy delivered in a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated Dose depends on the tissue density and volume. Using CT patients images from the National Institute of Neurology and Neurosurgery and BrainScan(c) software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue and estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed

  20. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user-level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  1. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity, but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability, and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  2. Effects of total dose of ionizing radiation on integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, Marcilei A.G.; Cirne, K.H.; Gimenez, S.; Santos, R.B.B. [Centro Universitario da FEI, Sao Bernardo do Campo, SP (Brazil); Added, N.; Barbosa, M.D.L.; Medina, N.H.; Tabacniks, M.H. [Universidade de Sao Paulo (IF/USP), SP (Brazil). Inst. de Fisica; Lima, J.A. de; Seixas Junior, L.E.; Melo, W. [Centro de Tecnologia da Informacao Paulo Archer, Sao Paulo, SP (Brazil)

    2011-07-01

    Full text: The study of ionizing radiation effects on materials used in electronic devices is of great relevance for global technological development and, particularly, a necessity in some strategic areas in Brazil. Electronic circuits are strongly influenced by radiation, and the need for ICs featuring radiation hardness is growing rapidly to meet the stringent environment of space electronics. On the other hand, aerospace agencies are encouraging both the scientific community and the semiconductor industry to develop hardened-by-design components using standard manufacturing processes to achieve maximum performance while significantly reducing costs. To understand the physical phenomena responsible for changes in devices exposed to ionizing radiation, several kinds of radiation should be considered, among them alpha particles, protons, gamma rays and X-rays. Radiation effects on integrated circuits are usually divided into two categories: total ionizing dose (TID), a cumulative dose that shifts the threshold voltage and increases the transistor's off-state current, and single event effects (SEE), transient effects which can deposit charge directly into the device and disturb the operation of electronic circuits. TID is one of the most common effects and may degrade several parameters of CMOS electronic devices, such as threshold voltage variation, increase of the sub-threshold slope and increase of the off-state current. The effect of ionizing radiation is the creation of electron-hole pairs in the oxide layer, changing the operating parameters of the electronic device. Indirectly, there will also be changes in the device due to the formation of secondary electrons from the interaction of electromagnetic radiation with the material, since the charge carriers can be trapped both in the oxide layer and at its interface. In this work we have investigated the behavior of MOSFET devices fabricated with

  3. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
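
    A sketch under the assumed form K(x, y) = max(rho - ||x - y||_1, 0), plugged into a standard toolbox as a precomputed kernel, as the brief suggests; the parameter rho and the toy XOR-like data are invented:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)          # XOR-like labels

    def tl1(A, B, rho=2.0):
        d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)   # pairwise L1
        return np.maximum(rho - d1, 0.0)                     # truncated kernel

    clf = SVC(kernel="precomputed").fit(tl1(X, X), y)
    print("train acc:", clf.score(tl1(X, X), y))
    ```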

  4. [Determination of the integral dose in computer tomography of the neurocranium].

    Science.gov (United States)

    Rahim, H; Hofmann, W; Grobovschek, M; Mandl, H

    1985-12-01

    The amount of exposure of the cranium is calculated on the basis of the measured dose distribution in the craniocaudal direction and in the axial planes of the Alderson phantom. The integral dose to the cranium and the local dose at sensitive organs are used as measures of radiation exposure.

  5. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  6. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which solves the drawbacks of the others. However, it is known that the Bayesian approach for high-dimensional data and large numbers of samples requires high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating the Kernel Bayes' Rule (KBR) and the Correntropy-Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional spaces. The simulation experiments show that KBA has a more outstanding self-organizing capability than BA, and that KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  8. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  9. Dose planning and dose delivery in radiation therapy

    International Nuclear Information System (INIS)

    Knoeoes, T.

    1991-01-01

    A method has been developed for the calibration of CT numbers to volumetric electron density distributions using tissue substitutes of known elemental composition and experimentally determined electron density. This information has been used in a dose calculation method based on photon and electron interaction processes. The method uses a convolution integral between the photon fluence matrix and dose distribution kernels. Inhomogeneous media are accounted for using the theorems of Fano and O'Connor by scaling the dose distribution kernels in proportion to electron density. For clinical application of a calculated dose plan, a method for predicting accelerator output has been developed. The method gives the number of monitor units that has to be delivered to obtain a certain absorbed dose at a point inside an irregular, inhomogeneous object. The method for verification of dose distributions outlined in this study makes it possible to exclude treatment-related variance contributions, making an objective evaluation of dose calculations against experiments feasible. The methods for electron density determination, dose calculation and prediction of accelerator output discussed in this study will all contribute to increased accuracy in the mean absorbed dose to the target volume. A substantial gain in the accuracy of the spatial absorbed dose distribution will also follow, especially when using CT for mapping of electron density together with the dose calculation algorithm. (au)
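
    The convolution step lends itself to a compact sketch: an FFT convolution of a toy photon-fluence map with an assumed radially symmetric point-dose kernel in homogeneous water. The density scaling of kernels via the Fano and O'Connor theorems is omitted here:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    fluence = np.zeros((128, 128))
    fluence[:, 54:74] = 1.0                  # 2-cm-wide "beam", toy units

    yy, xx = np.mgrid[-15:16, -15:16]
    r = np.hypot(xx, yy) + 0.5               # offset avoids the r=0 singularity
    kernel = np.exp(-0.3 * r) / r**2         # assumed point-dose kernel shape
    kernel /= kernel.sum()                   # normalize deposited energy

    dose = fftconvolve(fluence, kernel, mode="same")
    print(dose.shape, float(dose.max()))
    ```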

  10. Device for simulation of integral dose distribution in multifield radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Belyakov, E K; Voronin, V V; Kolosova, V F; Moskalev, A I; Marova, Yu M; Stavitskii, R V; Yarovoi, V S

    1974-11-15

    A device is described for simulating the summed dose distribution in multifield radiation therapy; it comprises a mechanical unit on which the radiation sources and detectors are mounted, electromechanical scanning equipment, amplifiers, an adder, a position sensor and a recording instrument. The device improves the accuracy of patient irradiation programs in remote multifield radiation therapy, permits estimation of the influence of irradiated-medium heterogeneity and beam shapers on the summed dose distribution, and provides information on the summed dose distribution in relative or absolute units. Additional filters simulating heterogeneity and the beam-shaping conditions of the ionizing radiation may be mounted between the radiation sources and detectors, and an amplifier with a variable amplification factor may be placed between the adder and the recorder. It is thus possible to obtain a summed dose distribution for static techniques of remote radiation therapy with a high degree of accuracy (within ±10%).

  11. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  12. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems

  13. BrachyTPS -Interactive point kernel code package for brachytherapy treatment planning of gynaecological cancers

    International Nuclear Information System (INIS)

    Thilagam, L.; Subbaiah, K.V.

    2008-01-01

    Brachytherapy treatment planning systems (TPS) are always recommended to account for the effects of the tissue, applicator and shielding material heterogeneities that exist in intracavitary brachytherapy (ICBT) applicators. Most commercially available brachytherapy TPS software estimates the absorbed dose at a point taking care only of the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction, so the doses they estimate are not very accurate under realistic clinical conditions. In this regard, an interactive point kernel code (BrachyTPS) has been developed to perform independent dose calculations that take these heterogeneities into account, using the two-region buildup factors proposed by Kalos. As primary input data, the code takes the patient's planning data, including the source specifications, dwell positions and dwell times, and it computes the doses at reference points by dose point kernel formalisms, with multi-layer shield buildup factors accounting for the contributions from scattered radiation. In addition to performing dose distribution calculations, this code package is capable of displaying isodose distribution curves on the patient anatomy images. The primary aim of this study is to validate the developed point kernel code, integrated with treatment planning systems, against other tools available on the market. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, the Board of Radiation and Isotope Technology (BRIT)-made low dose rate (LDR) applicator, the Fletcher Green type LDR applicator and the Fletcher Williamson high dose rate (HDR) applicator, were studied to test the accuracy of the software
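
    A hedged sketch of the dose-point-kernel summation over dwell positions that such a code performs, D(P) = sum_j A * Gamma * t_j * B(mu r_j) * exp(-mu r_j) / r_j^2. Every constant below is a toy value, and the two-region Kalos buildup is replaced here by a crude linear factor:

    ```python
    import numpy as np

    dwells = np.array([[0.0, 0.0, -1.0],       # dwell positions, cm
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0]])
    times = np.array([10.0, 15.0, 10.0])       # dwell times, s
    A, gamma, mu = 3.7e10, 1.1e-7, 0.11        # Bq, cGy*cm^2/(Bq*s), 1/cm (toy)

    P = np.array([2.0, 0.0, 0.0])              # reference point, cm
    r = np.linalg.norm(dwells - P, axis=1)
    B = 1.0 + mu * r                           # crude stand-in buildup factor
    dose = np.sum(A * gamma * times * B * np.exp(-mu * r) / r**2)
    print(f"dose at P: {dose:.2f} cGy")
    ```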

  14. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of the integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...

  15. Integrated Worker Radiation Dose Assessment for the K Basins

    International Nuclear Information System (INIS)

    NELSON, J.V.

    1999-01-01

    This report documents an assessment of the radiation dose workers at the K Basins are expected to receive in the process of removing spent nuclear fuel from the storage basins. The K Basins (K East and K West) are located in the Hanford 100K Area

  16. On convergence of kernel learning estimators

    NARCIS (Netherlands)

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  17. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  18. Calculation of radiation dose rate above water layer of Interim Spent Fuel Storage Jaslovske Bohunice by the point Kernels (VISIPLAN) and Monte Carlo (MCNP4C) methods

    International Nuclear Information System (INIS)

    Slavik, O.; Kucharova, D.; Listjak, M.; Fueloep, M.

    2008-01-01

    The aim of this paper is to evaluate the maximal dose rate (DR) of gamma radiation above different configurations of reservoirs holding spent nuclear fuel with a cooling period of 1.8 years, to compare the buildup factor method (VISIPLAN) with Monte Carlo simulations (MCNP4C), and to assess the influence of scattered photons in the case of a fully filled fuel transfer storage (FTS). The calculations showed that the relative contributions of photons from adjacent reservoirs obtained with the buildup factor method (VISIPLAN) are similar to those from the Monte Carlo simulations. This means that VISIPLAN can also be used to evaluate dose rate contributions from neighbouring reservoirs. It was shown that the DR values calculated with VISIPLAN are conservatively overestimated for this radiation source and shielding thickness by a factor of approximately 2.6-3. The calculations also showed that storage of reservoirs with a cooling period of 1.8 years in the FTS requires no additional protective measures for workers beyond the original safety report. The calculated DR above the fully filled FTS at Jaslovske Bohunice is very low, at the level of 0.03 μSv/h. (authors)

  19. Calculation of radiation dose rate above water layer of Interim Spent Fuel Storage Jaslovske Bohunice by the point Kernels (VISIPLAN) and Monte Carlo (MCNP4C) methods

    International Nuclear Information System (INIS)

    Slavik, O.; Kucharova, D.; Listjak, M.; Fueloep, M.

    2009-01-01

    The aim of this paper is to evaluate the maximal dose rate (DR) of gamma radiation above different configurations of reservoirs holding spent nuclear fuel with a cooling period of 1.8 years, to compare the buildup factor method (VISIPLAN) with Monte Carlo simulations (MCNP4C), and to assess the influence of scattered photons in the case of a fully filled fuel transfer storage (FTS). The calculations showed that the relative contributions of photons from adjacent reservoirs obtained with the buildup factor method (VISIPLAN) are similar to those from the Monte Carlo simulations. This means that VISIPLAN can also be used to evaluate dose rate contributions from neighbouring reservoirs. It was shown that the DR values calculated with VISIPLAN are conservatively overestimated for this radiation source and shielding thickness by a factor of approximately 2.6-3. The calculations also showed that storage of reservoirs with a cooling period of 1.8 years in the FTS requires no additional protective measures for workers beyond the original safety report. The calculated DR above the fully filled FTS at Jaslovske Bohunice is very low, at the level of 0.03 μSv/h. (authors)

  20. Risk of a second malignant neoplasm after cancer in childhood treated with radiotherapy: correlation with the integral dose

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, F.; Rubino, C.; Guerin, S.; de Vathaire, F. [National Institute of Public Health and Medical Research (INSERM) Unit 605, Institut Gustave-Roussy, Villejuif (France); Diallo, I.; Samand, A. [National Institute of Public Health and Medical Research (INSERM) Unit 605, Institut Gustave-Roussy, Villejuif, (France); Medical Physics and Radiotherapy Departments, Institut Gustave-Roussy, Villejuif (France); Hawkins, M. [Centre for Childhood Cancer Survivor Studies, University of Birmingham, Birmingham (United Kingdom); Oberlin, O. [Paediatrics Department, Institut Gustave-Roussy, Villejuif (France); Lefkopoulos, D. [Medical Physics and Radiotherapy Departments, Institut Gustave-Roussy, Villejuif (France)

    2006-07-01

    In the cohort, among patients who had received radiotherapy, only those who had received the highest integral dose had a higher risk. Among the other patients, including 80% of the variability of the integral dose, no increased risk was evidenced. Thus, the integral dose in the study cannot be considered as a good predictor of later risk. (N.C.)

  1. Risk of a second malignant neoplasm after cancer in childhood treated with radiotherapy: correlation with the integral dose

    International Nuclear Information System (INIS)

    Nguyen, F.; Rubino, C.; Guerin, S.; de Vathaire, F.; Diallo, I.; Samand, A.; Hawkins, M.; Oberlin, O.; Lefkopoulos, D.

    2006-01-01

    In the cohort, among patients who had received radiotherapy, only those who had received the highest integral dose had a higher risk. Among the other patients, including 80% of the variability of the integral dose, no increased risk was evidenced. Thus, the integral dose in the study cannot be considered as a good predictor of later risk. (N.C.)

  2. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels. They arise when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated…

  3. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms…

  4. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard…

  5. Integral dose investigation of non-coplanar treatment beam geometries in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Dan; Dong, Peng; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States); Long, Troy; Romeijn, Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-01-15

    Purpose: Automated planning and delivery of non-coplanar plans such as 4π radiotherapy involving a large number of fields have been developed to take advantage of the newly available automated couch and gantry on C-arm gantry linacs. However, there is increasing concern regarding potential changes in the integral dose, which need to be investigated. Methods: A digital torso phantom and 22 lung and liver stereotactic body radiation therapy (SBRT) patients were included in the study. The digital phantom was constructed as a water equivalent elliptical cylinder with a major axis length of 35.4 cm and minor axis of 23.6 cm. A 4.5 cm diameter target was positioned at varying depths along the major axis. Integral doses from intensity modulated, non-coplanar beams forming a conical pattern were compared against the equally spaced coplanar beam plans. Integral dose dependence on the phantom geometry and the beam number was also quantified. For the patient plans, the non-coplanar and coplanar beams and fluences were optimized using a column generation and pricing approach and compared against clinical VMAT plans using two full (lung) or partial coplanar arcs (liver) entering at the side proximal to the tumor. Both the average dose to the normal tissue volume and the total volumes receiving greater than 2 Gy (V2) and 5 Gy (V5) were evaluated and compared. Results: The ratio of integral dose from the non-coplanar and coplanar plans depended on the tumor depth for the phantom; for tumors shallower than 10 cm, the non-coplanar integral doses were lower than coplanar integral doses for non-coplanar angles less than 60°. Similar patterns were observed in the patient plans. The smallest non-coplanar integral doses were observed for tumors 6–8 cm deep. For the phantom, the integral dose was independent of the number of beams, consistent with the liver SBRT patients, but the lung SBRT patients showed a slight increase in the integral dose when more beams were used. Larger…

  6. DITTY - a computer program for calculating population dose integrated over ten thousand years

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long-term nuclear waste disposal sites resulting from the groundwater pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program design, data file requirements, input preparation, output interpretation, sample problems, and program-generated diagnostic messages.
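
    The core quantity DITTY computes — the time integral of collective dose for a time-variant release — reduces to numerical quadrature once the pathway models are collapsed into a dose factor. A minimal sketch with a hypothetical release-rate function and a single lumped dose factor standing in for DITTY's pathway-specific models:

```python
import numpy as np

def integrated_collective_dose(release_rate, dose_factor, t_end=10_000.0, n=10_001):
    """Time-integrated collective dose over [0, t_end] years.

    release_rate -- callable giving the release rate (Bq/yr) at time t (years)
    dose_factor  -- collective dose per unit activity released (person-Sv/Bq);
                    an illustrative lumped parameter, not DITTY's pathway models
    """
    t = np.linspace(0.0, t_end, n)
    integrand = np.array([release_rate(ti) for ti in t]) * dose_factor
    return np.trapz(integrand, t)   # trapezoidal quadrature of the dose rate

# Example: exponentially declining release from a hypothetical disposal site.
print(integrated_collective_dose(lambda t: 1e9 * np.exp(-t / 2000.0), 5e-14))
```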

  7. Integration method of 3D MR spectroscopy into treatment planning system for glioblastoma IMRT dose painting with integrated simultaneous boost

    International Nuclear Information System (INIS)

    Ken, Soléakhéna; Cassol, Emmanuelle; Delannes, Martine; Celsis, Pierre; Cohen-Jonathan, Elizabeth Moyal; Laprie, Anne; Vieillevigne, Laure; Franceries, Xavier; Simon, Luc; Supper, Caroline; Lotterie, Jean-Albert; Filleron, Thomas; Lubrano, Vincent; Berry, Isabelle

    2013-01-01

    To integrate 3D MR spectroscopy imaging (MRSI) into the treatment planning system (TPS) for glioblastoma dose painting, to guide the simultaneous integrated boost (SIB) in intensity-modulated radiation therapy (IMRT). For sixteen glioblastoma patients, we simulated three types of dosimetry plans: a conventional 60-Gy plan in 3D conformal radiotherapy (3D-CRT), a 60-Gy plan in IMRT, and a 72-Gy plan in SIB-IMRT. All sixteen MRSI metabolic maps were integrated into the TPS, using normalization with color-space conversion and threshold-based segmentation. The fusion between the metabolic maps and the planning CT scans was assessed. Dosimetry comparisons were performed between the different plans of 60-Gy 3D-CRT, 60-Gy IMRT and 72-Gy SIB-IMRT, the last targeted on MRSI abnormalities and contrast enhancement (CE). Fusion assessment was performed for 160 transformations. It resulted in maximum differences <1.00 mm for translation parameters and ≤1.15° for rotation. Dosimetry plans of 72-Gy SIB-IMRT and 60-Gy IMRT showed a significantly decreased maximum dose to the brainstem (44.00 and 44.30 vs. 57.01 Gy) and decreased high-dose volumes to normal brain (19 and 20 vs. 23%, and 7 and 7 vs. 12%) compared to 60-Gy 3D-CRT (p < 0.05). Delivering standard doses to the conventional target and higher doses to new target volumes characterized by MRSI and CE is now possible and does not increase dose to organs at risk. MRSI and CE abnormalities are now integrated for glioblastoma SIB-IMRT, concomitant with temozolomide, in an ongoing multi-institutional phase-III clinical trial. Our method of integrating MR spectroscopy maps into the TPS is robust and reliable; integration into neuronavigation systems with this method could also improve glioblastoma resection or guide biopsies

  8. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...

  9. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  10. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  11. Kernel structures for Clouds

    Science.gov (United States)

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  12. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means…

  13. Enhanced total dose damage in junction field effect transistors and related linear integrated circuits

    International Nuclear Information System (INIS)

    Flament, O.; Autran, J.L.; Roche, P.; Leray, J.L.; Musseau, O.

    1996-01-01

    Enhanced total dose damage of Junction Field-effect Transistors (JFETs) due to low dose rate and/or elevated temperature has been investigated for elementary p-channel structures fabricated on bulk and SOI substrates as well as for related linear integrated circuits. All these devices were fabricated with conventional junction isolation (field oxide). Large increases in damage have been revealed by performing high temperature and/or low dose rate irradiations. These results are consistent with previous studies concerning bipolar field oxides under low-field conditions. They suggest that the transport of radiation-induced holes through the oxide is the underlying mechanism. Such an enhanced degradation must be taken into account for low dose rate effects on linear integrated circuits

  14. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...

  15. Steerability of Hermite Kernel

    Czech Academy of Sciences Publication Activity Database

    Yang, Bo; Flusser, Jan; Suk, Tomáš

    2013-01-01

    Vol. 27, No. 4 (2013), 1354006-1–1354006-25. ISSN 0218-0014. R&D Projects: GA ČR GAP103/11/1552. Institutional support: RVO:67985556. Keywords: Hermite polynomials; Hermite kernel; steerability; adaptive filtering. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 0.558, year: 2013. http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf

  16. The effect of source-axis distance on integral dose: implications for IMRT

    International Nuclear Information System (INIS)

    Keall, P.

    2001-01-01

    The source-axis distance (SAD) is a treatment machine design parameter that affects integral dose, dose rate and patient clearance. The aim of this work was to investigate the effect of source-axis distance on integral dose for conformal arc therapy. This work is part of a larger project to determine the ideal characteristics of a dedicated IMRT machine. The sensitivity of the SAD dependence to beam energy, PTV size, body size and PTV position was determined for conformal arc therapy. For the calculations performed here it was assumed that dose equals terma. The integral dose ratio (IDR) was used to quantify the calculation results. It was found that the IDR increases as both SAD and photon energy increase, though the dependence of IDR on SAD decreases as energy increases. The PTV size was found to have a negligible effect on the relationship between the SAD and IDR; however, the body size does affect this relationship. The position of the PTV within the body also affects the IDR. From dosimetric considerations alone, the larger the SAD, the better the possible dose distribution. The IDR for a very large SAD is increased by approximately 5% when compared with the IDR for 100 cm SAD. Similarly, the IDR for 100 cm SAD is approximately 5% higher than the IDR at 50 cm SAD. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine

  17. Energy and integrated dose dependence of MOSFET dosimeter for clinical electron beams

    International Nuclear Information System (INIS)

    Manigandan, D.; Bharanidharan, G.; Aruna, P.; Ganesan, S.; Tamil Kumar, T.; Rai

    2008-01-01

    In this study, the sensitivity (mV/cGy) and integral dose dependence of a MOSFET detector were studied for different clinical electron beams. Calibrated clinical electron beams (Varian 2100) were used for the exposure. A Markus-type parallel plate chamber was used for the absolute dose measurements. In order to study the sensitivity of the MOSFET, the responses of the ion chamber and the MOSFET to an absorbed dose of 100 cGy were measured, and the sensitivity of the MOSFET was then expressed as mV/cGy. Sensitivity was measured for 4-18 MeV electron beams. (author)

  18. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful for prioritizing candidate genes, particularly when their number is large or when the input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  19. Independent dose calculation in IMRT for the Tps Iplan using the Clarkson modified integral

    International Nuclear Information System (INIS)

    Adrada, A.; Tello, Z.; Garrigo, E.; Venencia, D.

    2014-08-01

    Intensity-modulated radiation therapy (IMRT) treatments require patient-specific quality assurance (QA) before delivery. These controls include experimental verification in a phantom of the total plan as well as of the dose distributions. Independent dose calculation (IDC) is routinely used for 3D-CRT treatments; however, its application to IMRT requires the implementation of an algorithm that can handle a non-uniform intensity beam. The purpose of this work was to develop IDC software for IMRT with MLC using the algorithm proposed by Kung (Kung et al. 2000). The software was written in Matlab. The modified Clarkson integral was implemented on each fluence map, applying concentric rings for the dose determination; from the integral of each field, the dose at any point is calculated. Once planning is finished, all data are exported to a phantom where a QA plan is generated. On this plan, the mean dose in a volume representative of the ionization chamber and the dose at its center are calculated. To date, 230 IMRT plans produced with the treatment planning system (TPS) Iplan have been analyzed. For each of them a QA plan was generated, and the doses calculated with the TPS and the IDC system were compared with ionization chamber measurements. The average difference between the measured doses and those calculated with the IDC system was 0.4% ± 2.2% [-6.8%, 6.4%]. The difference between the measured doses and those calculated by the pencil-beam (PB) algorithm of the TPS was 2.6% ± 1.41% [-2.0%, 5.6%], and with the Monte Carlo algorithm 0.4% ± 1.5% [-4.9%, 3.7%]. The differences obtained with the developed software are comparable to those obtained with the ionization chamber and the TPS in Monte Carlo mode. (author)
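
    The modified Clarkson integral described above replaces the classical sector integration over a field boundary with integration of the fluence over concentric rings centred on the calculation point. A minimal sketch of that ring accumulation, assuming a 2D fluence map and a hypothetical radial scatter kernel — a sketch of the technique, not the authors' Matlab code:

```python
import numpy as np

def clarkson_ring_dose(fluence, px_cm, point, ring_width_cm=0.5, radial_kernel=None):
    """Accumulate dose at `point` from the mean fluence on concentric rings.

    fluence       -- 2D array of relative fluence (e.g., from an MLC sequence)
    px_cm         -- pixel size in cm
    point         -- (row, col) of the calculation point
    radial_kernel -- callable: dose per unit fluence per unit ring width at
                     radius r; the default fall-off is purely illustrative
    """
    if radial_kernel is None:
        radial_kernel = lambda r: np.exp(-0.3 * r)
    rows, cols = np.indices(fluence.shape)
    r = px_cm * np.hypot(rows - point[0], cols - point[1])
    dose = 0.0
    for r_in in np.arange(0.0, r.max(), ring_width_cm):
        ring = (r >= r_in) & (r < r_in + ring_width_cm)   # pixels on this ring
        if ring.any():
            r_mid = r_in + 0.5 * ring_width_cm
            dose += fluence[ring].mean() * radial_kernel(r_mid) * ring_width_cm
    return dose

# Example: dose at the centre of a uniform 10x10 cm field (1 mm pixels).
print(clarkson_ring_dose(np.ones((100, 100)), 0.1, (50, 50)))
```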

  20. High-dose simultaneously integrated breast boost using intensity-modulated radiotherapy and inverse optimization

    International Nuclear Information System (INIS)

    Hurkmans, Coen W.; Meijer, Gert J.; Vliet-Vroegindeweij, Corine van; Sangen, Maurice J. van der; Cassee, Jorien

    2006-01-01

    Purpose: Recently a Phase III randomized trial has started comparing a boost of 16 Gy as part of whole-breast irradiation to a high boost of 26 Gy in young women. Our main aim was to develop an efficient simultaneously integrated boost (SIB) technique for the high-dose arm of the trial. Methods and Materials: Treatment planning was performed for 5 left-sided and 5 right-sided tumors. A tangential field intensity-modulated radiotherapy technique added to a sequentially planned 3-field boost (SEQ) was compared with a simultaneously planned technique (SIB) using inverse optimization. Normalized total dose (NTD)-corrected dose volume histogram parameters were calculated and compared. Results: The intended NTD was produced by 31 fractions of 1.66 Gy to the whole breast and 2.38 Gy to the boost volume. The average volume of the PTV-breast and PTV-boost receiving more than 95% of the prescribed dose was 97% or more for both techniques. Also, the mean lung dose and mean heart dose did not differ much between the techniques, with on average 3.5 Gy and 2.6 Gy for the SEQ and 3.8 Gy and 2.6 Gy for the SIB, respectively. However, the SIB resulted in a significantly more conformal irradiation of the PTV-boost. The volume of the PTV-breast, excluding the PTV-boost, receiving a dose higher than 95% of the boost dose could be reduced considerably using the SIB as compared with the SEQ from 129 cc (range, 48-262 cc) to 58 cc (range, 30-102 cc). Conclusions: A high-dose simultaneously integrated breast boost technique has been developed. The unwanted excessive dose to the breast was significantly reduced

  1. On weights which admit the reproducing kernel of Bergman type

    Directory of Open Access Journals (Sweden)

    Zbigniew Pasternak-Winiarski

    1992-01-01

    In this paper we consider (1) the weights of integration for which the reproducing kernel of the Bergman type can be defined, i.e., the admissible weights, and (2) the kernels defined by such weights. It is verified that the weighted Bergman kernel has analogous properties to the classical one. We prove several sufficient conditions and necessary and sufficient conditions for a weight to be an admissible weight. We also give an example of a weight which is not of this class. As a positive example we consider the weight μ(z) = (Im z)² defined on the unit disk in ℂ.

  2. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
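
    The composite-kernel construction mentioned above avoids committing to a single kernel: a convex combination of normalized positive semi-definite candidate Gram matrices is itself positive semi-definite and can be supplied to the KM test. A minimal sketch of that construction step only — the perturbation-based inference is not reproduced, and the genotype data and weights are hypothetical:

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Convex combination of trace-normalized PSD candidate kernels."""
    if weights is None:
        weights = np.full(len(kernels), 1.0 / len(kernels))
    return sum(w * K / np.trace(K) for w, K in zip(weights, kernels))

# Example: linear and Gaussian candidates on a random genotype matrix
# (50 subjects x 20 SNPs coded 0/1/2; purely synthetic data).
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(50, 20)).astype(float)
K_lin = G @ G.T
sq_dist = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-sq_dist / G.shape[1])
K = composite_kernel([K_lin, K_rbf])
print(np.linalg.eigvalsh(K).min() >= -1e-10)   # PSD is preserved
```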

  3. A point kernel shielding code, PKN-HP, for high energy proton incident

    Energy Technology Data Exchange (ETDEWEB)

    Kotegawa, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-06-01

    A point-kernel integral technique code, PKN-HP, and the related thick-target neutron yield data have been developed to calculate neutron and secondary gamma-ray dose equivalents in ordinary concrete and iron shields, in 3-dimensional geometry, for neutrons produced by 100 MeV-10 GeV protons incident on fully stopping-length C, Cu and U-238 targets. Comparisons among the results of the present code, other calculation techniques and measured values showed the usefulness of the code. (author)

  4. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  5. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.7 (2010), Regulating Handling, Definitions. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  6. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.408 (2010), Administrative Rules and Regulations. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as…

  7. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.8 (2010), Regulating Handling, Definitions. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or…

  8. Data integration reveals key homeostatic mechanisms following low dose radiation exposure

    Energy Technology Data Exchange (ETDEWEB)

    Tilton, Susan C.; Matzke, Melissa M. [Computational Biology and Bioinformatics, Pacific Northwest National Laboratory, Richland, WA 99338 (United States); Sowa, Marianne B.; Stenoien, David L.; Weber, Thomas J. [Health Impacts and Exposure Science, Pacific Northwest National Laboratory, Richland, WA 99338 (United States); Morgan, William F. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99338 (United States); Waters, Katrina M., E-mail: katrina.waters@pnnl.gov [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99338 (United States)

    2015-05-15

    The goal of this study was to define pathways regulated by low dose radiation to understand how biological systems respond to subtle perturbations in their environment and prioritize pathways for human health assessment. Using an in vitro 3-D human full thickness skin model, we have examined the temporal response of dermal and epidermal layers to 10 cGy X-ray using transcriptomic, proteomic, phosphoproteomic and metabolomic platforms. Bioinformatics analysis of each dataset independently revealed potential signaling mechanisms affected by low dose radiation, and integrating data shed additional insight into the mechanisms regulating low dose responses in human tissue. We examined direct interactions among datasets (top down approach) and defined several hubs as significant regulators, including transcription factors (YY1, MYC and CREB1), kinases (CDK2, PLK1) and a protease (MMP2). These data indicate a shift in response across time — with an increase in DNA repair, tissue remodeling and repression of cell proliferation acutely (24–72 h). Pathway-based integration (bottom up approach) identified common molecular and pathway responses to low dose radiation, including oxidative stress, nitric oxide signaling and transcriptional regulation through the SP1 factor that would not have been identified by the individual data sets. Significant regulation of key downstream metabolites of nitrative stress was measured within these pathways. Among the features identified in our study, the regulation of MMP2 and SP1 was experimentally validated. Our results demonstrate the advantage of data integration to broadly define the pathways and networks that represent the mechanisms by which complex biological systems respond to perturbation. - Highlights: • Low dose ionizing radiation altered homeostasis in 3D skin tissue model. • Global gene/protein/metabolite data integrated using complementary statistical approaches • Time and location-specific change in matrix regulation

  9. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator which has these three properties, which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used…

  10. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods, as is demonstrated on a number of diverse data sets.

  11. Integration of multi-criteria and nearest neighbour analysis with kernel density functions for improving sinkhole susceptibility models: the case study of Enemonzo (NE Italy

    Directory of Open Access Journals (Sweden)

    Chiara Calligaris

    2017-06-01

    The significance of intra-mountain valleys to infrastructure and human settlements, and the need to mitigate the geo-hazards affecting these assets, are fundamental to the economy of Italian alpine regions. There is therefore a real need to recognize and assess the possible geo-hazards affecting them. This study proposes the use of GIS-based analyses to construct a sinkhole susceptibility model based on conditioning factors such as land use, geomorphology, thickness of shallow deposits, distance to drainage network and distance to faults. Thirty-two models, applied to a test site (Enemonzo municipality, NE Italy), were produced using a method based on the likelihood ratio (λ) function, nine with only one variable and 23 applying different combinations. The sinkhole susceptibility model with the best forecast performance, with an Area Under the Prediction Rate Curve (AUPRC) of 0.88, was that combining the following parameters: Nearest Sinkhole Distance (NSD), land use and thickness of the surficial deposits. The introduction of NSD as a continuous variable in the computation represents an important upgrade in the prediction capability of the model. Additionally, the model was refined using a kernel density estimation that produced a significant improvement in the forecast performance.
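
    The kernel density refinement mentioned at the end turns mapped sinkhole locations into a smooth spatial density surface. A minimal sketch of a 2D Gaussian kernel density estimate on a regular grid, with a fixed, hypothetical bandwidth; GIS details (projection, masking, bandwidth selection) are omitted:

```python
import numpy as np

def kde2d(points, grid_x, grid_y, bandwidth=100.0):
    """Gaussian kernel density of event locations evaluated on a grid.

    points    -- (n, 2) array of sinkhole coordinates (map units)
    bandwidth -- kernel standard deviation in map units (assumed value)
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(gx, dtype=float)
    for x, y in points:
        dens += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * bandwidth**2))
    return dens / (2.0 * np.pi * bandwidth**2 * len(points))

# Example on synthetic coordinates (not the Enemonzo inventory).
pts = np.array([[250.0, 400.0], [300.0, 420.0], [800.0, 150.0]])
density = kde2d(pts, np.linspace(0, 1000, 101), np.linspace(0, 500, 51))
```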

  12. Integrated doses calculation in evacuation scenarios of the neutron generator facility at Missouri S&T

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Manish K.; Alajo, Ayodeji B., E-mail: alajoa@mst.edu

    2016-08-11

    Any source of ionizing radiation could lead to considerable dose acquisition by individuals in a nuclear facility. Evacuation may be required when elevated levels of radiation are detected within a facility. In this situation, individuals are most likely to take the closest exit, which may not be the most expedient decision, as it may lead to higher dose acquisition. The strategy for preventing large dose acquisition should instead be predicated on the path that offers the least dose acquisition. In this work, the neutron generator facility at Missouri University of Science and Technology was analyzed. The Monte Carlo N-Particle (MCNP) radiation transport code was used to model the entire floor of the generator's building. The simulated dose rates in the hallways were used to estimate the integrated doses for different paths leading to exits. It was shown that the shortest path did not always lead to the minimum dose acquisition, and the approach was successful in predicting the expedient path as opposed to the approach of taking the nearest exit.
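
    The route comparison described here is a line integral of the dose rate along each candidate path: at constant walking speed, the acquired dose is the sum of dose rate times transit time over the segments, so the least-dose exit can be found with a shortest-path search over dose-weighted edges. A minimal sketch with a hypothetical corridor graph, not the MCNP model of the actual facility:

```python
import heapq

def least_dose_path(graph, start, exits):
    """Dijkstra search where edge weights are integrated segment doses (uSv).

    graph -- dict: node -> list of (neighbor, segment_dose), with
             segment_dose = dose_rate * segment_length / walking_speed
             (all values hypothetical)
    """
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    best = min(exits, key=lambda e: dist.get(e, float("inf")))
    path, node = [best], best
    while node != start:                  # walk the predecessor chain back
        node = prev[node]
        path.append(node)
    return path[::-1], dist[best]

# The nearer exit B costs more dose than the longer route to exit C.
g = {"lab": [("hall", 5.0), ("corridor", 1.0)],
     "hall": [("exitB", 8.0)], "corridor": [("exitC", 2.0)]}
print(least_dose_path(g, "lab", ["exitB", "exitC"]))   # (['lab', 'corridor', 'exitC'], 3.0)
```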

  13. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  14. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    Principal component analysis (PCA), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed…

  15. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  16. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  17. The integral biologically effective dose to predict brain stem toxicity of hypofractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Clark, Brenda G.; Souhami, Luis; Pla, Conrado; Al-Amro, Abdullah S.; Bahary, Jean-Paul; Villemure, Jean-Guy; Caron, Jean-Louis; Olivier, Andre; Podgorsak, Ervin B.

    1998-01-01

    Purpose: The aim of this work was to develop a parameter for use during fractionated stereotactic radiotherapy treatment planning to aid in the determination of the appropriate treatment volume and fractionation regimen that will minimize risk of late damage to normal tissue. Materials and Methods: We have used the linear quadratic model to assess the biologically effective dose at the periphery of stereotactic radiotherapy treatment volumes that impinge on the brain stem. This paper reports a retrospective study of 77 patients with malignant and benign intracranial lesions, treated between 1987 and 1995, with the dynamic rotation technique in 6 fractions over a period of 2 weeks, to a total dose of 42 Gy prescribed at the 90% isodose surface. From differential dose-volume histograms, we evaluated biologically effective dose-volume histograms and obtained an integral biologically-effective dose (IBED) in each case. Results: Of the 77 patients in the study, 36 had target volumes positioned so that the brain stem received more than 1% of the prescribed dose, and 4 of these, all treated for meningioma, developed serious late damage involving the brain stem. Other than type of lesion, the only significant variable was the volume of brain stem exposed. An analysis of the IBEDs received by these 36 patients shows evidence of a threshold value for late damage to the brain stem consistent with similar thresholds that have been determined for external beam radiotherapy. Conclusions: We have introduced a new parameter, the IBED, that may be used to represent the fractional effective dose to structures such as the brain stem that are partially irradiated with stereotactic dose distributions. The IBED is easily calculated prior to treatment and may be used to determine appropriate treatment volumes and fractionation regimens minimizing possible toxicity to normal tissue
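
    The IBED described above is obtained by converting each bin of a differential dose-volume histogram to a biologically effective dose with the linear-quadratic model and volume-weighting the result. A minimal sketch, assuming the 6-fraction regimen of the study and a hypothetical alpha/beta of 3 Gy for late-responding tissue:

```python
import numpy as np

def ibed(bin_dose, bin_volfrac, n_fractions=6, alpha_beta=3.0):
    """Integral biologically effective dose from a differential DVH.

    bin_dose    -- total dose received by each DVH bin (Gy)
    bin_volfrac -- fractional organ volume in each bin (sums to 1)
    alpha_beta  -- LQ alpha/beta ratio (Gy); 3 Gy assumed for late effects
    """
    D = np.asarray(bin_dose, dtype=float)
    d = D / n_fractions                       # dose per fraction in each bin
    bed = D * (1.0 + d / alpha_beta)          # LQ biologically effective dose
    return float(np.sum(bed * np.asarray(bin_volfrac)))

# Example: a brain stem with 5% of its volume near the 42 Gy prescription
# and the remainder at a low peripheral dose (illustrative numbers only).
print(ibed([42.0, 5.0], [0.05, 0.95]))
```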

  18. Integrated numerical platforms for environmental dose assessments of large tritium inventory facilities

    International Nuclear Information System (INIS)

    Castro, P.; Ardao, J.; Velarde, M.; Sedano, L.; Xiberta, J.

    2013-01-01

    In connection with the prospective new scenario of large-inventory tritium facilities (KATRIN at TLK, CANDUs, ITER, EAST, and others to come), the dosimetric limits prescribed by ICRP-60 for tritium committed doses are under discussion; in parallel, the highly conservative assessments need to be surmounted by refining dosimetric assessments in many respects. Precise Lagrangian computations of the dosimetric evolution of the cloud after standardized (normal/incidental/SBO) tritium emissions are today numerically able to match real-time meteorological data and pattern data at diverse scales, for prompt/early and chronic tritium dose assessments. This paper presents the trends towards integrated numerical platforms for environmental dose assessment of large-inventory tritium facilities, now under development.

  19. Insulator photocurrents: Application to dose rate hardening of CMOS/SOI integrated circuits

    International Nuclear Information System (INIS)

    Dupont-Nivet, E.; Coiec, Y.M.; Flament, O.; Tinel, F.

    1998-01-01

    Irradiation of insulators with a pulse of high-energy X-rays can induce photocurrents in the interconnections of integrated circuits. The authors present here a new method to measure and analyze this effect, together with a simple model. They also demonstrate that these insulator photocurrents have to be taken into account to obtain high levels of dose-rate hardness with CMOS-on-SOI integrated circuits, especially flip-flops or memory blocks of ASICs. They show that this explains some of the upsets observed in an SRAM embedded in an ASIC.

  20. Measurements of integrated components' parameters versus irradiation doses gamma radiation (60Co) dosimetry-methodology-tests

    International Nuclear Information System (INIS)

    Fuan, J.

    1991-01-01

    This paper describes the methodology used for the irradiation of integrated components and the measurement of their parameters, with quality assurance of the dosimetry: measurement of the integrated dose drawing on the expertise of the Laboratoire Central des Industries Electriques (LCIE); measurement of the irradiation dose versus source/component distance using calibrated equipment; use of alanine dosimeters placed on the support of the irradiated components; assembly and biasing of the components during irradiation; selection of the irradiator; and measurement of the irradiated components' parameters drawing on the expertise of the following organizations: GenRad (GR130 test equipment located at DEIN/SIR, CEN Saclay) and LCIE (GR125 test equipment and its associated test programmes). [fr]

  1. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) in very large scale integrated (VLSI) circuits is presented. The supply current of the devices is measured at the same time as the functional parameters of the devices (or circuits) are measured. The relation between data errors and supply current can then be analyzed, and a mechanism for TDE in VLSI circuits proposed. Experimental results of 60Co γ TDE tests are given for SRAMs, EEPROMs, FLASH ROMs and one kind of CPU.

  2. Time-integrated thyroid dose for accidental releases from Pakistan Research Reactor-1

    International Nuclear Information System (INIS)

    Raza, S Shoaib; Iqbal, M; Salahuddin, A; Avila, R; Pervez, S

    2004-01-01

    The two-hour time-integrated thyroid dose due to radio-iodines released to the atmosphere through the exhaust stack of Pakistan Research Reactor-1 (PARR-1) under accident conditions has been calculated. A computer program, PAKRAD (developed under an IAEA research grant, PAK/RCA/8990), was used for the dose calculations. The sensitivity of the dose results to different exhaust flow rates and atmospheric stability classes was studied. The effect on the time-integrated dose of assuming a constant activity concentration (as a function of time) within the containment air volume, versus an exponentially decreasing air concentration, was also studied for various flow rates (1000-50 000 m³ h⁻¹). The comparison indicated that the results were insensitive to containment air exhaust rates up to 2000 m³ h⁻¹ when the prediction with the constant-concentration assumption was compared to the exponentially decreasing concentration model. The results also indicated that the plume touchdown distance increases with increasing atmospheric stability. (note)
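
    The constant-versus-decaying comparison in this note has a simple closed form: with exhaust flow Q and containment free volume V, the concentration decays at rate λ = Q/V, so the time-integrated concentration over T is C₀T in the constant case and C₀(1 − e^(−λT))/λ otherwise, and the two converge as Q → 0. A minimal sketch with hypothetical containment parameters (not the PARR-1 values):

```python
import numpy as np

def integrated_concentration(c0, flow_m3h, volume_m3, hours=2.0):
    """Two-hour time-integrated air concentration, constant vs. decaying."""
    lam = flow_m3h / volume_m3                           # purge rate (1/h)
    constant = c0 * hours                                # constant-concentration model
    decaying = c0 * (1.0 - np.exp(-lam * hours)) / lam   # exponential purge model
    return constant, decaying

# Containment volume and initial concentration are hypothetical placeholders;
# the gap between the two models widens as the exhaust flow rate grows.
for q in (200.0, 2000.0, 50000.0):
    print(q, integrated_concentration(1.0, q, 20000.0))
```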

  3. Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods

    International Nuclear Information System (INIS)

    Allott, Rob

    2003-01-01

    Dear Sir: I read with interest the paper by W C Camplin, G P Brownless, G D Round, K Winpenny and G J Hunt from the Centre for Environment, Fisheries and Aquaculture Science (CEFAS) on 'Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods' in the December 2002 issue of this journal (J. Radiol. Prot. 22 371-88). The Environment Agency has a keen interest in the development of a robust methodology for assessing total doses which have been received by members of the public from authorised discharges of radioactive substances to the environment. Total dose in this context means the dose received from all authorised discharges and all exposure pathways (e.g. inhalation, external irradiation from radionuclides in sediment/soil, direct radiation from operations on a nuclear site, consumption of food etc). I chair a 'total retrospective dose assessment' working group with representatives from the Scottish Environment Protection Agency (SEPA), Food Standards Agency (FSA), National Radiological Protection Board, CEFAS and BNFL which began discussing precisely this issue during 2002. This group is a sub-group of the National Dose Assessment Working Group which was set up in April 2002 (J. Radiol. Prot. 22 318-9). The Environment Agency, Food Standards Agency and the Nuclear Installations Inspectorate previously undertook joint research into the most appropriate methodology to use for total dose assessment (J J Hancox, S J Stansby and M C Thorne 2002 The Development of a Methodology to Assess Population Doses from Multiple Source and Exposure Pathways of Radioactivity (Environment Agency R and D Technical Report P3-070/TR). This work came to broadly the same conclusion as the work by CEFAS, that an individual dose method is probably the most appropriate method to use. This research and that undertaken by CEFAS will help the total retrospective dose assessment working group refine a set of principles and a methodology for the

  4. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  5. A model for dose estimation in therapy of liver with intraarterial microspheres

    International Nuclear Information System (INIS)

    Zavgorodni, S.F.

    1996-01-01

    Therapy with intraarterial microspheres is a technique which involves incorporation of radioisotope-labelled microspheres into a capillary bed of tumour and normal tissue. Beta-emitters such as 90 Y and 166 Ho are used for this purpose. This technique provides tumour to normal tissue (TNT) dose ratios in the range of 2-10 and demonstrates significant clinical benefit, which could potentially be increased with more accurate dose predictions and delivery. However, dose calculations in this modality face the difficulties associated with nonuniform and inhomogeneous activity distribution. Most of the dose calculations used clinically do not account for the nonuniformity and assume uniform activity distribution. This paper is devoted to the development of a model which would allow more accurate prediction of dose distributions from microspheres. The model calculates dose assuming that microspheres are aggregated into randomly distributed clusters, and using precomputed dose kernels for the clusters. The dose kernel due to a microsphere cluster was found by numerical integration of a point source dose kernel over the volume of the cluster. It is shown that a random distribution of clusters produces an intercluster distance distribution which agrees well with the one measured by Pillai et al in liver. Dose volume histograms (DVHs) predicted by the model agree closely with the results of Roberson et al for normal tissue and tumour. Dose distributions for different concentrations and types of radioisotope, as well as for tumours of different radii, have been calculated to demonstrate the model's possible applications. (author)
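
    The precomputed cluster kernel described above is the point-source dose kernel averaged over the cluster volume. A minimal sketch that builds such a kernel by Monte Carlo integration — sampling points uniformly inside a spherical cluster and averaging a radially symmetric point kernel; the exponential point kernel below is a hypothetical stand-in for a tabulated 90Y beta kernel:

```python
import numpy as np

def cluster_kernel(r_eval, cluster_radius, point_kernel, n_samples=20000, seed=1):
    """Mean dose per decay at distance r_eval from a spherical cluster's centre.

    point_kernel -- callable D(r): dose per decay at distance r (illustrative)
    """
    rng = np.random.default_rng(seed)
    # Uniform points inside the unit sphere via rejection sampling.
    pts = rng.uniform(-1.0, 1.0, size=(3 * n_samples, 3))
    pts = pts[(pts**2).sum(axis=1) <= 1.0][:n_samples] * cluster_radius
    d = np.linalg.norm(pts - np.array([r_eval, 0.0, 0.0]), axis=1)
    return float(point_kernel(d).mean())

# Hypothetical mm-range beta point kernel (not a tabulated 90Y kernel).
pk = lambda r: np.exp(-r / 0.4) / np.maximum(r, 1e-3) ** 2
print(cluster_kernel(r_eval=1.0, cluster_radius=0.2, point_kernel=pk))
```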

  6. Application of the Monte Carlo integration method in calculations of dose distributions in HDR-Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Baltas, D; Geramani, K N; Ioannidis, G T; Kolotas, C; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Giannouli, S [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    Source anisotropy is a very important factor in brachytherapy quality assurance of high dose rate (HDR) Ir-192 afterloading stepping sources. If anisotropy is not taken into account, then the doses received by a brachytherapy patient in certain directions can be in error by a clinically significant amount. Experimental measurements of anisotropy are very labour intensive. We have shown that, within acceptable limits of accuracy, Monte Carlo integration (MCI) of a modified Sievert integral (3D generalisation) can provide the necessary data within a much shorter time scale than experiments can. Hence MCI can be used for routine quality assurance schedules whenever a new design of HDR or PDR Ir-192 source is used for brachytherapy afterloading. Our MCI calculation results are comparable with published experimental data and Monte Carlo simulation data for microSelectron and VariSource Ir-192 sources. We have shown not only that MCI offers advantages over alternative numerical integration methods, but also that treating filtration coefficients as radial-distance-dependent functions improves Sievert integral accuracy at low energies. This paper also provides anisotropy data for three new Ir-192 sources, one for the microSelectron-HDR and two for the microSelectron-PDR, for which data are not currently available. The information we have obtained in this study can be incorporated into clinical practice.
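
    The classical Sievert integral for a filtered line source lends itself directly to Monte Carlo integration: sample points along the active length, attenuate each ray through the encapsulation according to its oblique path, and average the inverse-square contributions. A minimal sketch for the on-transverse-axis case with hypothetical source and filter parameters; the paper's 3D generalisation and radial-distance-dependent filtration are not reproduced:

```python
import numpy as np

def sievert_mc(h, length, mu, t_filter, n=100_000, seed=2):
    """Monte Carlo estimate of the Sievert integral for a filtered line source.

    h        -- perpendicular distance from the source axis (cm)
    length   -- active source length (cm)
    mu       -- filter attenuation coefficient (1/cm); assumed value
    t_filter -- radial filter thickness (cm); assumed value
    Returns relative dose per unit source strength.
    """
    rng = np.random.default_rng(seed)
    z = rng.uniform(-length / 2.0, length / 2.0, size=n)   # points on the source
    r2 = h**2 + z**2
    oblique = t_filter * np.sqrt(r2) / h        # slant path through the filter
    return float(np.mean(np.exp(-mu * oblique) / r2))

# Example: 3.5 mm active length evaluated at 1 cm, illustrative capsule wall.
print(sievert_mc(h=1.0, length=0.35, mu=0.5, t_filter=0.025))
```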

  7. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  8. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.9 (2010), Regulating Handling, Definitions. Kernel weight means the weight of kernels, including…

  9. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2295 (2010), Standards for Shelled English Walnuts (Juglans Regia), Definitions. Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  10. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis…

  11. kernel oil by lipolytic organisms

    African Journals Online (AJOL)

    USER

    2010-08-02

    Rancidity of extracted cashew oil was observed with cashew kernels stored at 70, 80 and 90% … method of the American Oil Chemists' Society, AOCS (1978), using glacial … changes occur and volatile products are formed that are…

  12. RISC-RAD. A European integrated approach to the problem of low doses

    International Nuclear Information System (INIS)

    Meunier, A.; Sabatier, L.; Atkinson, M.; Paretzke, H.; Bouffler, S.; Mullenders, L.

    2007-01-01

    Complete text of publication follows. Funded by the European Commission in the framework of a dedicated programme supporting research in the nuclear sector (FP6 Euratom), the RISC-RAD project undertakes experimental and modelling studies ultimately to improve low-dose radiation cancer risk assessment, by exploring and providing evidence for the most appropriate radiation cancer risk projection and interpolation models. It started on 1 January 2004 and runs until 31 October 2008. It mobilizes a consortium of 31 partners and is coordinated by Dr. Laure Sabatier from the French atomic energy commission. Indeed, the project represents an unprecedented attempt to integrate horizontally the research on the effects of low doses of ionizing radiation at the European level. A multipartner project supporting objective-driven research, RISC-RAD aims to help bridge the remaining gaps in scientific knowledge about the effects of low doses of ionizing radiation. It spans a large part of the research spectrum, including many topics addressed during the LOWRAD2007 conference. This presentation gives an account of the integrative aspects of the project, insights into the innovative solutions found to approach a complex and controversial scientific topic like the biological effects of low doses of ionizing radiation, and links with some areas of social studies of science. The concept of 'integration' implies the development of a new kind of activity in the research field, which crosses its traditional boundaries: controversies of several kinds must temporarily be overcome within the project management board in order to define and follow a common strategy. Among them, how to reconcile the creative part of fundamental research with compliance with strict project planning rules has come up as a debate which questions how best a significant collective and coordinated action can address the issue of long-term low-dose cancer risk assessment. The knowledge and…

  13. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  14. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second-order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  15. Dose-escalated simultaneous integrated-boost treatment of prostate cancer patients via helical tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Geier, M.; Astner, S.T.; Duma, M.N.; Putzhammer, J.; Winkler, C.; Molls, M.; Geinitz, H. [Technische Univ. Muenchen (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radiologische Onkologie; Jacob, V. [Universitaetsklinikum Freiburg (Germany). Klinik fuer Strahlenheilkunde; Nieder, C. [Nordland Hospital, Bodoe (Norway). Dept. of Oncology and Palliative Care; Tromsoe Univ. (Norway). Inst. of Clinical Medicine

    2012-05-15

    The goal of this work was to assess the feasibility of moderately hypofractionated simultaneous integrated-boost intensity-modulated radiotherapy (SIB-IMRT) with helical tomotherapy in patients with localized prostate cancer with regard to acute side effects and dose-volume histogram (DVH) data. Acute side effects and DVH data were evaluated for the first 40 intermediate-risk prostate cancer patients treated with a definitive daily image-guided SIB-IMRT protocol via helical tomotherapy in our department. The planning target volume, including the prostate and the base of the seminal vesicles with safety margins, was treated with 70 Gy in 35 fractions. The boost volume, containing the prostate and 3 mm safety margins (5 mm craniocaudal), was treated as an SIB to a total dose of 76 Gy (2.17 Gy per fraction). Planning constraints for the anterior rectal wall were set so as not to exceed the dose of 76 Gy prescribed to the boost volume. Acute toxicity was evaluated prospectively using a modified CTCAE (Common Terminology Criteria for Adverse Events) score. SIB-IMRT allowed good rectal sparing, although the full boost dose was permitted to the anterior rectal wall. The median rectal dose was 38 Gy in all patients, and the median volumes receiving at least 65 Gy (V65), 70 Gy (V70), and 75 Gy (V75) were 13.5%, 9%, and 3%, respectively. No grade 4 toxicity was observed. Acute grade 3 toxicity was observed in 20% of patients and involved nocturia only. Grade 2 acute intestinal and urological side effects occurred in 25% and 57.5% of patients, respectively. No correlation was found between acute toxicity and the DVH data. This institutional SIB-IMRT protocol, using daily image guidance as a precondition for smaller safety margins, allows dose escalation to the prostate without increasing acute toxicity. (orig.)

  16. Dose calculations for irregular fields using three-dimensional first-scatter integration

    International Nuclear Information System (INIS)

    Boesecke, R.; Scharfenberg, H.; Schlegel, W.; Hartmann, G.H.

    1986-01-01

    This paper describes a method of dose calculation for irregular fields which requires only the mean energy of the incident photons, the geometrical properties of the irregular field and of the therapy unit, and the attenuation coefficient of tissue. The method goes back to an approach, proposed by Sontag and Cunningham (1978), that includes spatial aspects of photon scattering from inhomogeneities in the calculation of dose reduction factors. It is based on the separation of dose into a primary component and a scattered component. The scattered component can generally be calculated for each field by integration over dose contributions from scattering in neighbouring volume elements. The quotient of this scattering contribution in the irregular field and the scattering contribution in the equivalent open field is then the correction factor for scattering in an irregular field. A correction factor for the primary component can be calculated if the attenuation of the photons in the shielding block is properly taken into account. This correction factor is simply given by the quotient of the primary photons of the irregular field and the primary photons of the open field. (author)
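
    A minimal numerical sketch of the correction-factor idea described above: the scatter component is accumulated by summing kernel contributions from all unblocked elements, and the irregular-field correction factor is the ratio of this sum to the one for the equivalent open field. The radial kernel below is a simplified stand-in; its shape and the attenuation coefficient are our assumptions, not the paper's model.

        import numpy as np

        def scatter_kernel(r, mu=0.05):
            # assumed radial first-scatter kernel: attenuation times geometric fall-off
            return np.exp(-mu * r) / (1.0 + r**2)

        def scatter_integral(in_field, dx=0.25):
            """Sum kernel contributions from all open (unblocked) pixels to the field centre."""
            ny, nx = in_field.shape
            y, x = np.mgrid[0:ny, 0:nx] * dx
            r = np.hypot(y - (ny - 1) * dx / 2, x - (nx - 1) * dx / 2)
            return np.sum(scatter_kernel(r) * in_field) * dx * dx

        # 10 cm x 10 cm open field versus the same field with one quadrant blocked
        open_field = np.ones((40, 40))
        irregular = open_field.copy()
        irregular[:20, :20] = 0.0                        # shielded corner
        cf = scatter_integral(irregular) / scatter_integral(open_field)
        print(f"scatter correction factor for the irregular field: {cf:.3f}")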

  17. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
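
    As an illustration of taking a geometry-inspired mean of kernel matrices rather than a convex linear combination, the sketch below fuses two base kernels through their log-Euclidean mean exp(mean(log K_i)). This is one simple instance of a matrix mean, assumed here for illustration; the paper's framework covers other geometric means as well.

        import numpy as np

        def _sym_fun(K, fun):
            """Apply fun to the eigenvalues of a symmetric matrix K."""
            w, V = np.linalg.eigh(K)
            return (V * fun(w)) @ V.T

        def log_euclidean_mean(kernels, jitter=1e-8):
            """exp(mean(log K_i)) for symmetric PSD kernel matrices K_i."""
            n = kernels[0].shape[0]
            logs = [_sym_fun(K + jitter * np.eye(n), np.log) for K in kernels]
            return _sym_fun(np.mean(logs, axis=0), np.exp)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 10))
        K_lin = X @ X.T                                   # linear base kernel
        d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
        K_rbf = np.exp(-d2 / (2 * np.median(d2)))         # Gaussian base kernel
        K_fused = log_euclidean_mean([K_lin, K_rbf])
        print(f"fused kernel min eigenvalue: {np.linalg.eigvalsh(K_fused).min():.2e}")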

  18. GRAB - WRS system module number 60221 for calculating gamma-ray penetration in slab shields by the method of kernel integration with build-up factors

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1978-06-01

    The WRS Modular Programming System has been developed as a means by which programmes may be more efficiently constructed, maintained and modified. In this system a module is a self-contained unit typically composed of one or more Fortran routines, and a programme is constructed from a number of such modules. This report describes one WRS module, the function of which is to calculate the gamma-ray flux, dose, or heating rate in a slab shield using the build-up factor method. The information given in this manual is of use both to the programmer wishing to incorporate the module in a programme, and to the user of such a programme. (author)
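
    The underlying build-up factor method can be sketched compactly: the uncollided point-kernel term exp(-mu*t)/(4*pi*r^2) is multiplied by a build-up factor B(mu*t) that accounts for scattered photons. The sketch below assumes Taylor's two-exponential form for B with illustrative coefficients; GRAB's actual data library and geometry handling are not reproduced.

        import numpy as np

        def buildup_taylor(mu_t, A=24.0, alpha1=-0.0954, alpha2=0.012):
            """Taylor form B = A*exp(-alpha1*mu_t) + (1 - A)*exp(-alpha2*mu_t)."""
            return A * np.exp(-alpha1 * mu_t) + (1.0 - A) * np.exp(-alpha2 * mu_t)

        def dose_behind_slab(S, mu, t, r):
            """Point isotropic source S behind a slab of thickness t, detector at distance r."""
            mu_t = mu * t
            return S * buildup_taylor(mu_t) * np.exp(-mu_t) / (4.0 * np.pi * r**2)

        # ~1 MeV photons in concrete (mu ~ 0.15 cm^-1 assumed), 30 cm slab, detector at 100 cm
        print(f"attenuated, build-up-corrected flux: {dose_behind_slab(1.0, 0.15, 30.0, 100.0):.3e}")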

  19. The influence of maize kernel moisture on the sterilizing effect of gamma rays

    International Nuclear Information System (INIS)

    Khanymova, T.; Poloni, E.

    1980-01-01

    The influence of 4 levels of maize kernel moisture (16, 20, 25 and 30%) on the gamma-ray sterilizing effect was studied, and the after-effect of radiation on the microorganisms during short-term storage was followed up. Maize kernels of the hybrid Knezha-36 produced in 1975 were used. Gamma-ray treatment of the kernels was carried out with a GUBEh-4000 irradiator at doses of 0.2 and 0.3 Mrad, after which they were stored for a month at 12 deg and 25 deg C under controlled moisture conditions. Surface and subepidermal infection of the kernels was determined immediately post irradiation and at the end of the experiment. Non-irradiated kernels were used as controls. Results indicated that the initial kernel moisture has a considerable influence on the sterilizing effect of gamma rays at the doses used in the experiment and markedly affects the post-irradiation recovery of organisms. The speed of recovery was highest in the treatment with 30% moisture and lowest in the treatment with 16% kernel moisture. Irradiation of the kernels causes pronounced changes in the surface and subepidermal infection. This was due to the unequal radioresistance of the microbial components and to the modifying effect of the moisture-holding capacity. The useful effect of maize kernel irradiation was more prolonged at 12 deg C than at 25 deg C

  20. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  1. Pulse and integral optically stimulated luminescence (OSL). Similarities and dissimilarities to thermoluminescence (TL) dose dependence and dose-rate effects

    International Nuclear Information System (INIS)

    Chen, R.; Leung, P.L.

    2000-01-01

    Optically stimulated luminescence (OSL) and thermoluminescence (TL) are two possible methods to monitor the absorbed radiation in solid samples, and therefore are utilized for dosimetry. For this application, two properties are desirable, namely, linear dose dependence of the measured quantity and dose-rate independence. For TL, different kinds of superlinear dose dependence have been reported in the literature in different materials, and in some cases, dose-rate dependence has also been found. These have been explained as being the result of competition. In OSL, some recent works reported superlinear dose dependence in annealed samples. In the present work, we explain the possible occurrence of these phenomena in OSL by solving numerically the relevant rate equations governing the process during irradiation, relaxation and read-out (heating or light stimulation). The results show that for short-pulse OSL, quadratic dose dependence can be expected when only one trapping state and one kind of recombination center are involved and when the excitation starts with empty traps and centers. With the short-pulse OSL, the calculation also reveals a possible dose-rate effect. Under the same circumstances, the area under the OSL curve depends linearly on the dose. The dependence of the whole area under the OSL curve on the dose is shown to be superlinear when a disconnected trapping state or a radiationless center takes part in the process. Also, a dose-rate effect can be expected in these cases, although no experimental effect of this sort has been reported so far. In pulse OSL, the analogy is made between the measured intensity and the initial-rise range of non-first-order TL, whereas for the total-area OSL, there is a nearly full analogy with the dose behavior of the TL maximum. (Author)
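
    The kind of numerical experiment described can be reproduced in miniature with the one-trap/one-recombination-centre (OTOR) rate equations during optical read-out; n, nc and m are trapped electrons, conduction-band electrons and holes in centres, and the OSL intensity is Am*m*nc. All parameter values below are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        N, p = 1e10, 0.5        # trap capacity (cm^-3), optical excitation rate (s^-1)
        An, Am = 1e-8, 1e-7     # retrapping / recombination coefficients (cm^3 s^-1)

        def otor(t, y):
            n, nc, m = y
            dn = -p * n + An * (N - n) * nc                # optical release and retrapping
            dnc = p * n - An * (N - n) * nc - Am * m * nc  # conduction-band balance
            dm = -Am * m * nc                              # recombination at centres
            return [dn, dnc, dm]

        n0 = 1e8                # dose-dependent initial trap filling; m0 = n0, nc0 = 0
        t_eval = np.linspace(0.0, 50.0, 500)
        sol = solve_ivp(otor, (0.0, 50.0), [n0, 0.0, n0], method="LSODA", t_eval=t_eval)
        I_osl = Am * sol.y[2] * sol.y[1]                   # OSL intensity
        area = I_osl.sum() * (t_eval[1] - t_eval[0])       # area under the OSL curve
        print(f"total OSL area: {area:.3e}")               # ~linear in n0 for pure OTOR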

  2. The gravitational potential of axially symmetric bodies from a regularized green kernel

    Science.gov (United States)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.
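
    For contrast with the regularized kernel, the classical singular-kernel route can be sketched as follows: the potential is assembled by integrating the ring (Green) kernel, which involves the complete elliptic integral K(k), over the body's meridional cross-section; this kernel diverges logarithmically when the field point lies inside the source region, which is what the authors' analytically regularized kernel avoids. G = rho = 1 and an exterior field point are assumed.

        import numpy as np
        from scipy.special import ellipk
        from scipy.integrate import dblquad

        def ring_kernel(a, zp, r, z):
            """Potential per unit cross-section area of a ring of radius a at height zp."""
            k2 = 4.0 * a * r / ((a + r) ** 2 + (z - zp) ** 2)
            return -4.0 * a * ellipk(k2) / np.sqrt((a + r) ** 2 + (z - zp) ** 2)

        # homogeneous section 1 <= a <= 2, |zp| <= 0.5, exterior field point (r, z) = (4, 0)
        val, err = dblquad(ring_kernel, -0.5, 0.5, 1.0, 2.0, args=(4.0, 0.0))
        print(f"potential at (4, 0): {val:.6f} (quadrature error ~ {err:.1e})")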

  3. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function …
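
    A bare-bones kernel PCA sketch of the dual (Q-mode) formulation just described: everything is computed from the Gram matrix of a Gaussian kernel, with feature-space centring done implicitly; the nonlinear mapping itself never appears. Data and kernel scale are arbitrary illustrations.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 3))
        d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * np.median(d2)))      # Gram matrix via the kernel trick

        n = K.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J                             # implicit centring in feature space
        w, V = np.linalg.eigh(Kc)
        w, V = w[::-1], V[:, ::-1]                 # eigenpairs, largest first
        alphas = V[:, :2] / np.sqrt(np.maximum(w[:2], 1e-12))  # normalised dual coefficients
        scores = Kc @ alphas                       # projections onto the first two kernel PCs
        print(scores.shape)                        # (100, 2)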

  4. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
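
    A toy, one-level illustration of the computational idea (the paper treats multilevel circulant matrices and a full error analysis): on a regular grid a stationary kernel matrix is close to circulant, and a circulant surrogate is diagonalized by a single FFT in O(n log n) instead of an O(n^3) eigendecomposition.

        import numpy as np

        n, h, sigma = 256, 0.1, 0.5
        t = np.arange(n) * h
        K = np.exp(-((t[:, None] - t[None]) ** 2) / (2 * sigma**2))  # exact kernel matrix

        dist = np.minimum(t, n * h - t)             # wrap-around (circulant) distances
        c = np.exp(-(dist**2) / (2 * sigma**2))     # first column of the circulant surrogate
        eig_C = np.fft.fft(c).real                  # all its eigenvalues from one FFT

        eig_K = np.linalg.eigvalsh(K)               # O(n^3) reference
        print(f"largest eigenvalue: exact {eig_K[-1]:.3f}, circulant {eig_C.max():.3f}")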

  5. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely …
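
    A sketch of the recommended workflow using off-the-shelf tools: Gaussian-kernel ridge regression with the ridge penalty and kernel width tuned over a small grid by cross-validation. The grid values and the toy data are our own illustrative choices, not the paper's.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(2)
        X = rng.uniform(-3.0, 3.0, size=(200, 1))
        y = np.sinc(X).ravel() + 0.1 * rng.normal(size=200)

        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],   # ridge penalty (signal-to-noise ratio)
                "gamma": [0.1, 0.5, 1.0, 2.0]}      # inverse squared kernel width (smoothness)
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5,
                              scoring="neg_mean_squared_error")
        search.fit(X, y)
        print(search.best_params_, f"CV MSE: {-search.best_score_:.4f}")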

  6. Utilization of radiation protection gear for absorbed dose reduction: an integrative literature review

    International Nuclear Information System (INIS)

    Soares, Flavio Augusto Penna; Flor, Rita de Cassia; Pereira, Aline Garcia

    2011-01-01

    Objective: The present study was aimed at evaluating the relation between the use of radiation protection gear and the decrease in absorbed dose of ionizing radiation, thereby reinforcing the efficacy of its use by both the patients and occupationally exposed personnel. Materials and Methods: The integrative literature review method was utilized to analyze 21 articles, 2 books, 1 thesis, 1 monograph, 1 computer program, 4 pieces of database research (Instituto Brasileiro de Geografia e Estatistica and Departamento de Informatica do Sistema Unico de Saude) and 2 sets of radiological protection guidelines. Results: Theoretically, a reduction of 86% to 99% in the absorbed dose is observed with the use of radiation protection gear. In practice, however, the reduction may achieve 88% in patients submitted to conventional radiology, and 95% in patients submitted to computed tomography. In occupationally exposed individuals, the reduction is around 90% during cardiac catheterization, and 75% during orthopedic surgery. Conclusion: According to findings of several previous pieces of research, the use of radiation protection gear is a low-cost and effective way to reduce absorbed dose both for patients and occupationally exposed individuals. Thus, its use is necessary for the implementation of effective radioprotection programs in radiodiagnosis centers. (author)

  7. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and their application to face recognition. This book also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  8. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR) …
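
    A hedged sketch of the Parallel Analysis idea carried over to kernel PCA: permute each input column independently to destroy the covariance structure, recompute the centred Gram-matrix eigenvalues, and retain only those components of the real data that exceed the permutation null. The kernel scale handling and the 95th-percentile rule below are our simplifying assumptions.

        import numpy as np

        def centred_gram_eigs(X, gamma):
            d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

        rng = np.random.default_rng(3)
        X = rng.normal(size=(80, 5))
        X[:, 0] += 3.0 * np.sign(rng.normal(size=80))   # plant one genuine component
        gamma = 1.0 / X.shape[1]

        real = centred_gram_eigs(X, gamma)
        null = np.array([centred_gram_eigs(rng.permuted(X, axis=0), gamma)
                         for _ in range(50)])           # columns permuted independently
        threshold = np.percentile(null, 95, axis=0)     # per-rank 95th-percentile null
        print("retained components:", int(np.sum(real > threshold)))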

  9. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.

  10. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints on the solution to improve scalability. The algorithm is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, is clearly outperforming the ordinary kernel PLS, and therefore is an appealing method …

  11. Integrated technique for assessing environmental dose of radioactive waste storage installation

    Energy Technology Data Exchange (ETDEWEB)

    Bor-Jing Chang; Chien-Liang Shih; Ing-Jane Chen [Institute of Nuclear Energy Research, Lungtan, Taiwan (China); Ren-Dih Sheu; Shiang-Huei Jiang [National Tsing Hua University, Hsinchu, Taiwan (China); Shu-Jun Chang [Nuclear Science and Technology Association, Taiwan (China); Ruei-Ying Liao; Pei Yu; Chin-Yi Huang [Taiwan Power Company, Taipei, Taiwan (China)

    2000-05-01

    The ability to accurately predict exposure rates at large distances from a gamma radiation source is becoming increasingly important, because the regulations governing radiation levels in and around nuclear facilities are becoming more stringent. Given the continuous increase in on-site radwaste storage capacity requirements, a more realistic evaluation is necessary. In the past, these doses were evaluated with the QADCG/INER-2 code for the direct dose and the SKYSHINE-III code for the skyshine dose, and both evaluations were overly conservative. This study updates the evaluation code package with an adequate methodology and establishes an integrated analysis procedure, so that radiation doses can be calculated accurately in a reasonably conservative way. The investigation is divided into three parts. First, SPECTRUM-506 is used instead of SPECTRUM: the nuclide database is enlarged from 100 to 506 nuclides, and the code is ported to the personal computer. Secondly, the QADCG/INER-3 code is developed to enhance the original QADCG/INER-2 code; the most important difference is the use of the geometric progression (GP) fitting function for the gamma-ray buildup factor. The SKYSHINE-III code is replaced by the McSKY and SKYDOSE codes, which are well benchmarked against the Monte Carlo code MCNP, and sensitive parameters are investigated in detail. Thirdly, the analysis procedure thus developed is applicable to nuclear utility radwaste storage sites. Finally, case studies were performed using these packages to assess the radiological impact of a utility radwaste storage site. The results are verified in detail using the Monte Carlo code MCNP, and the results of the two methods are consistent. (author)

  12. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  13. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  14. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least-squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous one which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.

  15. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  16. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study …

  17. On a new point kernel for use in gamma radiation calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bindel, Laurent; Gamess, Andre; Lejeune, Eric [Societe Generale pour les techniques Nouvelles, Saint Quentin en Yvelines (France)

    2000-03-01

    The present paper demonstrates the existence of a new formulation for the transport point kernel, the principal characteristic of which lies in a two-dimensional integration over the surfaces that delimit a source. (author)

  18. On a new point kernel for use in gamma radiation calculations

    International Nuclear Information System (INIS)

    Bindel, Laurent; Gamess, Andre; Lejeune, Eric

    2000-01-01

    The present paper demonstrates the existence of a new formulation for the transport point kernel, the principal characteristic of which lies in a two-dimensional integration over the surfaces that delimit a source. (author)

  19. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016 (ACML 2016). Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au), … University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to …

  20. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  1. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  2. Low dose radiation effects: an integrative european approach (Risc-Rad Project) coordinated by the Cea

    International Nuclear Information System (INIS)

    Sabatier, L.

    2006-01-01

    RISC-RAD (Radiosensitivity of Individuals and Susceptibility to Cancer induced by ionizing Radiations) is an Integrated Project funded by the European Commission under the 6th Framework Programme / EURATOM. RISC-RAD started on 1 January 2004 for a duration of four years. Coordinated by Cea (Dr Laure Sabatier), it involves 11 European countries (Austria, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Spain, Sweden and the United Kingdom) and 29 research institutions. Objectives: Exposures to low and protracted doses of ionizing radiation are very frequent in the normal living environment, at workplaces, in industry and in medicine. The effects of these exposures on human health cannot be reliably assessed by epidemiological methods, nor are they thoroughly understood by biologists. The RISC-RAD project proposes to help bridge this gap in scientific knowledge. To achieve this goal, a necessary key step is to understand the basic mechanisms by which radiation induces cancer. Studying this multistage process in an integrated way, the project offers a new biological approach characterised by a clear-cut and objective-driven scientific policy: the project is focused on the effects of low doses (less than 100 mSv) and protracted doses of radiation. It aims at identifying new parameters that take into account the differences in radiation responses between individuals. A group of modelers works closely with the experimental teams in order to better quantify the risks associated with low and protracted doses. Research work is divided into five work packages interacting closely with each other. WP1 is dedicated to DNA damage. Ionizing radiation (IR) produces a broad spectrum of base modifications and DNA strand breaks of different kinds, among which are double-strand breaks and 'clustered damage', the latter thought to be a major feature in the biological effectiveness of IR. The aim of Work Package 1 is to improve understanding of the initial DNA damage induced by

  3. Integration of Fricke gel dosimetry with Ag nanoparticles for experimental dose enhancement determination in theranostics

    International Nuclear Information System (INIS)

    Vedelago, J.; Valente, M.; Mattea, F.

    2017-10-01

    The use and implementation of nanoparticles in medicine has grown exponentially in the last twenty years. Their main applications include drug delivery, theranostics, tissue engineering and magneto function. Dosimetry techniques can take advantage of the properties of inorganic nanoparticles, and their combination with gel dosimetry techniques could serve as a first step toward their later inclusion in radio-diagnostics or radiotherapy treatments. This work presents preliminary results on the integration of properly synthesized and purified silver nanoparticles with Fricke gel dosimeters. The nanoparticles used had mean sizes ranging from 2 to 20 nm, following a lognormal distribution. The xylenol orange concentration in the Fricke gel dosimeter was adjusted to allow optical readout of the samples, accounting for the nanoparticle plasmon. Dose enhancement was assessed by irradiating dosimeters with X-ray beam energies set below and above the silver K-edge. (Author)

  4. Integration of Fricke gel dosimetry with Ag nanoparticles for experimental dose enhancement determination in theranostics

    Energy Technology Data Exchange (ETDEWEB)

    Vedelago, J.; Valente, M. [Instituto de Fisica Enrique Gaviola - CONICET, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA Cordoba (Argentina); Mattea, F., E-mail: jvedelago@famaf.unc.edu.ar [Universidad Nacional de Cordoba, FAMAF, Laboratorio de Investigacion e Instrumentacion en Fisica Aplicada a la Medicina e Imagenes por Rayos X, Av. Medina Allende s/n, Ciudad Universitaria, X5000HUA Cordoba (Argentina)

    2017-10-15

    The use and implementation of nanoparticles in medicine has grown exponentially in the last twenty years. Their main applications include drug delivery, theranostics, tissue engineering and magneto function. Dosimetry techniques can take advantage of the properties of inorganic nanoparticles, and their combination with gel dosimetry techniques could serve as a first step toward their later inclusion in radio-diagnostics or radiotherapy treatments. This work presents preliminary results on the integration of properly synthesized and purified silver nanoparticles with Fricke gel dosimeters. The nanoparticles used had mean sizes ranging from 2 to 20 nm, following a lognormal distribution. The xylenol orange concentration in the Fricke gel dosimeter was adjusted to allow optical readout of the samples, accounting for the nanoparticle plasmon. Dose enhancement was assessed by irradiating dosimeters with X-ray beam energies set below and above the silver K-edge. (Author)

  5. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    International Nuclear Information System (INIS)

    Chamberlain, S; French, S; Nazareth, D

    2016-01-01

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output an accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  6. Safety of dose escalation by simultaneous integrated boosting radiation dose within the primary tumor guided by 18FDG-PET/CT for esophageal cancer

    International Nuclear Information System (INIS)

    Yu, Wen; Cai, Xu-Wei; Liu, Qi; Zhu, Zheng-Fei; Feng, Wen; Zhang, Qin; Zhang, Ying-Jian; Yao, Zhi-Feng; Fu, Xiao-Long

    2015-01-01

    Purpose: To observe the safety of a selective dose boost to the pre-treatment high ¹⁸F-deoxyglucose (FDG) uptake areas of the esophageal GTV. Methods: Patients with esophageal squamous cell carcinoma were treated with radiation dose escalated over 4 levels, with a simultaneous integrated boost (SIB) to the pre-treatment 50% SUVmax area of the primary tumor. Patients received 4 monthly cycles of cisplatin and fluorouracil. Dose-limiting toxicity (DLT) was defined as any Grade 3 or higher acute toxicity causing continuous interruption of radiation for over 1 week. Results: From April 2012 to February 2014, the dose was escalated up to LEVEL 4 (70 Gy). All of the 25 patients finished the prescribed dose without DLT, and 10 of them developed Grade 3 acute esophagitis. One patient at LEVEL 2 died of esophageal hemorrhage within 1 month after completion of radiotherapy, which could not yet be definitively attributed to treatment. Late toxicities remained under observation. With a median follow-up of 8.9 months, one-year overall survival and local control were 69.2% and 77.4%, respectively. Conclusions: Dose escalation in esophageal cancer based on ¹⁸FDG-PET/CT has been safely achieved up to 70 Gy using the SIB technique. Acute toxicities were well tolerated, whereas late toxicities and long-term outcomes deserve further observation

  7. Complex use of cottonseed kernels

    Energy Technology Data Exchange (ETDEWEB)

    Glushenkova, A I

    1977-01-01

    A review with 41 references is made of the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.

  8. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  9. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  10. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  11. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available. Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function is proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample's data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
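
    The validity claim rests on the fact that the elementwise product of two positive semidefinite kernels is again a valid kernel (Schur product theorem), which the toy check below makes concrete for one-dimensional inputs; the second factor is a plain Gaussian stand-in, not the paper's generalized Hermite Dirichlet kernel.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.normal(size=(60, 1))                 # 1-D inputs: (1 - |x-y|/c)_+ is PSD in 1-D
        d = np.abs(X - X.T)                          # pairwise distances
        K_tri = np.maximum(0.0, 1.0 - d / d.max())   # triangular kernel, compact support
        K_gauss = np.exp(-(d ** 2))                  # PSD stand-in for the second factor
        K_mixed = K_tri * K_gauss                    # elementwise (Schur) product

        for name, K in [("triangular", K_tri), ("mixed", K_mixed)]:
            # min eigenvalue is nonnegative up to numerical round-off
            print(f"{name}: min eigenvalue = {np.linalg.eigvalsh(K).min():.2e}")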

  12. Feasible Dose Reduction in Routine Chest Computed Tomography Maintaining Constant Image Quality Using the Last Three Scanner Generations: From Filtered Back Projection to Sinogram-affirmed Iterative Reconstruction and Impact of the Novel Fully Integrated Detector Design Minimizing Electronic Noise

    Directory of Open Access Journals (Sweden)

    Lukas Ebner

    2014-01-01

    Full Text Available Objective: The aim of the present study was to evaluate dose reduction in contrast-enhanced chest computed tomography (CT) by comparing the three latest generations of Siemens CT scanners used in clinical practice. We analyzed the amount of radiation used with filtered back projection (FBP) and an iterative reconstruction (IR) algorithm to yield the same image quality. Furthermore, the influence on the radiation dose of the most recent integrated circuit detector (ICD; Stellar detector, Siemens Healthcare, Erlangen, Germany) was investigated. Materials and Methods: 136 patients were included. Scan parameters were set to a thorax routine: SOMATOM Sensation 64 (FBP), SOMATOM Definition Flash (IR), and SOMATOM Definition Edge (ICD and IR). Tube current was set constantly to the reference level of 100 mA, with automated tube current modulation using reference milliamperes. Care kV was used on the Flash and Edge scanners, while tube potential was individually selected between 100 and 140 kVp by the medical technologists at the SOMATOM Sensation. Quality assessment was performed on soft-tissue kernel reconstructions. Dose was represented by the dose-length product (DLP). Results: The DLP with FBP for the average chest CT was 308 mGy·cm ± 99.6. In contrast, the DLP for the chest CT with the IR algorithm was 196.8 mGy·cm ± 68.8 (P = 0.0001). A further decline in dose is noted with IR and the ICD: DLP 166.4 mGy·cm ± 54.5 (P = 0.033). The dose reduction compared to FBP was 36.1% with IR and 45.6% with IR/ICD. The signal-to-noise ratio (SNR) was favorable in the aorta, bone, and soft tissue for IR/ICD in combination compared to FBP (P values ranged from 0.003 to 0.048). The overall contrast-to-noise ratio (CNR) improved with declining DLP. Conclusion: The most recent technical developments, namely IR in combination with integrated circuit detectors, can significantly lower the radiation dose in chest CT examinations.

  13. A cost-effective technique for integrating personal radiation dose assessment with personal gravimetric sampling

    International Nuclear Information System (INIS)

    Strydom, R.; Rolle, R.; Van der Linde, A.

    1992-01-01

    During recent years there has been an increasing awareness internationally of radiation levels in the mining and milling of radioactive ores, including those from non-uranium mines. A major aspect of radiation control is concerned with the measurement of radiation levels and the assessment of radiation doses incurred by individual workers. Current techniques available internationally for personnel monitoring of radiation exposures are expensive and there is a particular need to reduce the cost of personal radiation monitoring in South African gold mines because of the large labour force employed. In this regard the obvious benefits of integrating personal radiation monitoring with existing personal monitoring systems already in place in South African gold mines should be exploited. A system which can be utilized for this purpose is personal gravimetric sampling. A new cost-effective technique for personal radiation monitoring, which can be fully integrated with the personal gravimetric sampling strategy being implemented on mines, has been developed in South Africa. The basic principles of this technique and its potential in South African mines are described. 9 refs., 7 figs

  14. Quality assurance device for four-dimensional IMRT or SBRT and respiratory gating using patient-specific intrafraction motion kernels.

    Science.gov (United States)

    Nelms, Benjamin E; Ehler, Eric; Bragg, Henry; Tomé, Wolfgang A

    2007-09-17

    Emerging technologies such as four-dimensional computed tomography (4D CT) and implanted beacons are expected to allow clinicians to accurately model intrafraction motion and to quantitatively estimate internal target volumes (ITVs) for radiation therapy involving moving targets. In the case of intensity-modulated (IMRT) and stereotactic body radiation therapy (SBRT) delivery, clinicians must consider the interplay between the temporal nature of the modulation and the target motion within the ITV. A need exists for a 4D IMRT/SBRT quality assurance (QA) device that can incorporate and analyze customized intrafraction motion as it relates to dose delivery and respiratory gating. We built a 4D IMRT/SBRT prototype device and entered (X, Y, Z)(T) coordinates representing a motion kernel into a software application that (1) transformed the kernel into beam-specific two-dimensional (2D) motion "projections," (2) previewed the motion in real time, and (3) drove a precision X-Y motorized device that had, atop it, a mounted planar IMRT QA measurement device. The detectors that intersected the target in the beam's-eye-view of any single phase of the breathing cycle (a small subset of all the detectors) were defined as "target detectors" to be analyzed for dose uniformity between multiple fractions. Data regarding the use of this device to quantify fraction-to-fraction dose variation resulting from target motion (for several delivery modalities, with and without gating) have recently been published. A combined software and hardware solution for patient-customized 4D IMRT/SBRT QA is an effective tool for assessing IMRT delivery under conditions of intrafraction motion. The 4D IMRT QA device accurately reproduced the projected motion kernels for all beam's-eye-view motion kernels. This device has been proved to effectively quantify the degradation in dose uniformity resulting from a moving target within a static planning target volume, and to integrate with a commercial
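
    Step (1) above can be illustrated with a small geometric sketch: a 3D motion kernel (X, Y, Z)(T) is rotated into the beam's-eye-view frame for a given gantry angle and the depth axis is dropped, leaving a beam-specific 2D projection. The rotation convention and breathing trace below are illustrative assumptions, not the device's exact transform.

        import numpy as np

        def beams_eye_view(kernel_xyz, gantry_deg):
            """Project a (T, 3) motion kernel into the 2D beam's-eye-view plane."""
            g = np.deg2rad(gantry_deg)
            # rotate about the cranio-caudal (y) axis into the beam frame
            rot = np.array([[np.cos(g), 0.0, -np.sin(g)],
                            [0.0,       1.0,  0.0],
                            [np.sin(g), 0.0,  np.cos(g)]])
            bev = kernel_xyz @ rot.T
            return bev[:, :2]            # keep the two transverse axes, drop beam depth

        t = np.linspace(0.0, 4.0, 100)   # one 4 s breathing cycle
        kernel = np.stack([0.2 * np.sin(2 * np.pi * t / 4.0),    # X (cm)
                           0.1 * np.sin(2 * np.pi * t / 4.0),    # Y (cm)
                           1.0 * np.sin(np.pi * t / 4.0) ** 2],  # Z: dominant motion (cm)
                          axis=1)
        print(beams_eye_view(kernel, gantry_deg=45.0).shape)     # (100, 2)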

  15. Quality assurance device for four‐dimensional IMRT or SBRT and respiratory gating using patient‐specific intrafraction motion kernels

    Science.gov (United States)

    Ehler, Eric; Bragg, Henry; Tomé, Wolfgang A.

    2007-01-01

    Emerging technologies such as four‐dimensional computed tomography (4D CT) and implanted beacons are expected to allow clinicians to accurately model intrafraction motion and to quantitatively estimate internal target volumes (ITVs) for radiation therapy involving moving targets. In the case of intensity‐modulated (IMRT) and stereotactic body radiation therapy (SBRT) delivery, clinicians must consider the interplay between the temporal nature of the modulation and the target motion within the ITV. A need exists for a 4D IMRT/SBRT quality assurance (QA) device that can incorporate and analyze customized intrafraction motion as it relates to dose delivery and respiratory gating. We built a 4D IMRT/SBRT prototype device and entered (X, Y, Z)(T) coordinates representing a motion kernel into a software application that (1) transformed the kernel into beam‐specific two‐dimensional (2D) motion “projections,” (2) previewed the motion in real time, and (3) drove a precision X–Y motorized device that had, atop it, a mounted planar IMRT QA measurement device. The detectors that intersected the target in the beam's‐eye‐view of any single phase of the breathing cycle (a small subset of all the detectors) were defined as “target detectors” to be analyzed for dose uniformity between multiple fractions. Data regarding the use of this device to quantify dose variation fraction‐to‐fraction resulting from target motion (for several delivery modalities and with and without gating) have been recently published. A combined software and hardware solution for patient‐customized 4D IMRT/SBRT QA is an effective tool for assessing IMRT delivery under conditions of intrafraction motion. The 4D IMRT QA device accurately reproduced the projected motion kernels for all beam's‐eye‐view motion kernels. This device has been proved to • effectively quantify the degradation in dose uniformity resulting from a moving target within a static planning target volume, and • integrate

  16. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America
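
    For orientation, the contrast with the traditional definition can be written out; this reconstruction is our own reading of the abstract, and the paper's normalization and sign conventions may differ:

        W(x, k)   = \int f\!\left(x + \tfrac{y}{2}\right) f^{*}\!\left(x - \tfrac{y}{2}\right) e^{-i k y}\, dy
        W_L(x, s) = \int f\!\left(x + \tfrac{y}{2}\right) f^{*}\!\left(x - \tfrac{y}{2}\right) e^{-s y}\, dy, \qquad s \in \mathbb{C}

    For purely imaginary s = ik the traditional Fourier-kernel Wigner function is recovered, while a nonzero real part of s accommodates exponentially decaying (evanescent) components such as surface plasmon polariton fields.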

  18. A Dosimetric Comparison of Dose Escalation with Simultaneous Integrated Boost for Locally Advanced Non-Small-Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Wenjuan Yang

    2017-01-01

    Full Text Available Background. Many studies have demonstrated that a higher radiotherapy dose is associated with improved outcomes in non-small-cell lung cancer (NSCLC). We performed a dosimetric planning study to assess the dosimetric feasibility of intensity-modulated radiation therapy (IMRT) with a simultaneous integrated boost (SIB) in locally advanced NSCLC. Methods. We enrolled twenty patients. Five different dose plans were generated for each patient. All plans were prescribed a dose of 60 Gy to the planning tumor volume (PTV). In the three SIB groups, the prescribed dose was 69 Gy, 75 Gy, and 81 Gy in 30 fractions to the internal gross tumor volume (iGTV). Results. The SIB-IMRT plans were associated with a significant increase in the iGTV dose (P < 0.05), without increased normal tissue exposure or prolonged overall treatment time. Significant differences were not observed in the dose to the normal lung in terms of the V5 and V20 among the four IMRT plans. The maximum dose (Dmax) in the esophagus moderately increased along with the prescribed dose (P < 0.05). Conclusions. Our results indicated that escalating the dose by SIB-IMRT is dosimetrically feasible; however, systematic evaluations via clinical trials are still warranted. We have designed a further clinical study (which is registered with ClinicalTrials.gov, number NCT02841228).

  19. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with a kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
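
    A minimal nonparametric sketch in the spirit described: each class density is estimated with a product Epanechnikov kernel, and an applicant is assigned to the class with the larger prior-weighted density. The data, priors and bandwidth are synthetic stand-ins for the Indonesian credit data used in the paper.

        import numpy as np

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

        def class_density(x, sample, h):
            # product kernel over features, averaged over the class sample
            return np.mean(np.prod(epanechnikov((x - sample) / h) / h, axis=1))

        rng = np.random.default_rng(5)
        good = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(300, 2))   # repaid loans
        bad = rng.normal(loc=[-1.0, -0.5], scale=0.8, size=(100, 2))  # defaulted loans
        priors, h = {"good": 0.75, "bad": 0.25}, 0.5

        applicant = np.array([0.4, 0.2])
        scores = {"good": priors["good"] * class_density(applicant, good, h),
                  "bad": priors["bad"] * class_density(applicant, bad, h)}
        print(max(scores, key=scores.get), scores)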

  20. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  1. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the choice of the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.

  2. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Science.gov (United States)

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
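
    A heavily simplified sketch of the boosting idea (component-wise selection of one pathway kernel per iteration; kernel ridge base-learners with squared-error loss stand in for the likelihood-based LKMT machinery of the paper, and all data are synthetic):

      import numpy as np

      def rbf_kernel(X, gamma=0.5):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def kernel_boost(pathways, y, n_iter=50, nu=0.1, ridge=1.0):
          # Component-wise boosting: each base-learner is a kernel ridge fit on one
          # pathway's genotype block; per iteration keep the pathway whose fit best
          # reduces the residual sum of squares, and add a shrunken copy of it.
          n = len(y)
          fit = np.full(n, y.mean())
          selected = []
          for _ in range(n_iter):
              r = y - fit
              best = None
              for j, X in enumerate(pathways):
                  K = rbf_kernel(X)
                  h = K @ np.linalg.solve(K + ridge * np.eye(n), r)
                  rss = ((r - h) ** 2).sum()
                  if best is None or rss < best[0]:
                      best = (rss, j, h)
              fit += nu * best[2]
              selected.append(best[1])
          return fit, selected

      rng = np.random.default_rng(1)
      pathways = [rng.normal(size=(80, 10)) for _ in range(5)]   # five pathway blocks
      y = (pathways[2][:, 0] > 0).astype(float)                  # signal sits in pathway 2
      fit, sel = kernel_boost(pathways, y)
      print("most selected pathway:", np.bincount(sel).argmax())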

  3. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Science.gov (United States)

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.

  4. Integral Dose and Radiation-Induced Secondary Malignancies: Comparison between Stereotactic Body Radiation Therapy and Three-Dimensional Conformal Radiotherapy

    Directory of Open Access Journals (Sweden)

    Stefano G. Masciullo

    2012-11-01

    Full Text Available The aim of the present paper is to compare the integral dose received by non-tumor tissue (NTID) in stereotactic body radiation therapy (SBRT) with modified LINAC with that received by three-dimensional conformal radiotherapy (3D-CRT), estimating possible correlations between NTID and radiation-induced secondary malignancy risk. Eight patients with intrathoracic lesions were treated with SBRT, 23 Gy × 1 fraction. All patients were then replanned for 3D-CRT, maintaining the same target coverage and applying a dose scheme of 2 Gy × 32 fractions. The dose equivalence between the different treatment modalities was achieved assuming α/β = 10 Gy for tumor tissue and imposing the same biological effective dose (BED) on the target (BED = 76 Gy10). Total NTIDs for both techniques were calculated considering α/β = 3 Gy for healthy tissue. Excess absolute cancer risk (EAR) was calculated for various organs using a mechanistic model that includes fractionation effects. A paired two-tailed Student t-test was performed to determine statistically significant differences between the data (p ≤ 0.05). Our study indicates that although the integral dose is higher for SBRT treatments than for 3D-CRT in all patients (p = 0.002), the secondary cancer risk associated with SBRT patients is significantly smaller than that calculated for 3D-CRT (p = 0.001). This suggests that integral dose is not a good estimator for quantifying cancer induction. Indeed, for the model and parameters used, hypofractionated radiotherapy has the potential for secondary cancer reduction. The development of reliable secondary cancer risk models seems to be a key issue in fractionated radiotherapy. Further assessments of integral doses received with 3D-CRT and other special techniques are also strongly encouraged.
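
    The stated dose equivalence follows from the standard linear-quadratic expression for biologically effective dose; a worked check (not reproduced from the paper):

      \mathrm{BED} = nd\left(1 + \frac{d}{\alpha/\beta}\right)

      \text{SBRT, } 1 \times 23\ \mathrm{Gy}:\quad \mathrm{BED} = 23\left(1 + \tfrac{23}{10}\right) = 75.9\ \mathrm{Gy_{10}} \approx 76\ \mathrm{Gy_{10}}

      \text{3D-CRT, } 32 \times 2\ \mathrm{Gy}:\quad \mathrm{BED} = 64\left(1 + \tfrac{2}{10}\right) = 76.8\ \mathrm{Gy_{10}} \approx 76\ \mathrm{Gy_{10}}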

  5. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  6. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  7. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been proposed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.05) than that measured using images reconstructed by the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.05) than those measured using images reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  8. Adaptation of mechano-graphy to the measurement of integrated doses (1962); Adaptation de la mecanographie pour la comptabilisation des doses integrees (1962)

    Energy Technology Data Exchange (ETDEWEB)

    Tabardel-Brian, R; Dulau, J [Commissariat a l' Energie Atomique, Centre de Production de Plutonium, Marcoule (France). Centre d' Etudes Nucleaires

    1962-07-01

    Because of the increasing number of workers exposed to nuclear radiation, and with a view to avoiding doses above the maximum permissible dose for these workers, it was necessary to find a system whereby the total irradiation assimilated during periods of 3 and 12 consecutive months could be known and circulated rapidly, within 48 hours. The method chosen for this requires the services of the Mechano-graphic service of the Marcoule Centre. The present article describes the mechanical data processing solution to this problem. (authors)

  9. Accuracy of approximations of solutions to Fredholm equations by kernel methods

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 218, č. 14 (2012), s. 7481-7497 ISSN 0096-3003 R&D Projects: GA ČR GAP202/11/1368; GA MŠk OC10047 Grant - others: CNR-AV ČR (CZ-IT) Project 2010–2012 "Complexity of Neural-Network and Kernel Computational Models" Institutional research plan: CEZ:AV0Z10300504 Keywords: approximate solutions to integral equations * radial and kernel-based networks * Gaussian kernels * model complexity * analysis of algorithms Subject RIV: IN - Informatics, Computer Science Impact factor: 1.349, year: 2012

  10. Assessment of diurnal systemic dose of agrochemicals in regulatory toxicity testing--an integrated approach without additional animal use.

    Science.gov (United States)

    Saghir, Shakil A; Bartels, Michael J; Rick, David L; McCoy, Alene T; Rasoulpour, Reza J; Ellis-Hutchings, Robert G; Sue Marty, M; Terry, Claire; Bailey, Jason P; Billington, Richard; Bus, James S

    2012-07-01

    Integrated toxicokinetics (TK) data provide information on the rate, extent and duration of systemic exposure across doses, species, strains, gender, and life stages within a toxicology program. While routine for pharmaceuticals, TK assessments of non-pharmaceuticals are still relatively rare, and have never before been included in a full range of guideline studies for a new agrochemical. In order to better understand the relationship between diurnal systemic dose (AUC(24h)) and toxicity of agrochemicals, TK analyses in the study animals are now included in all short- (excluding acute), medium- and long-term guideline mammalian toxicity studies including reproduction/developmental tests. This paper describes a detailed procedure for the implementation of TK in short-, medium- and long-term regulatory toxicity studies conducted on three agrochemicals (X11422208, 2,4-D and X574175), without the use of satellite animals. In these studies, kinetically-derived maximum doses (KMD) from short-term studies instead of, or along with, maximum tolerated doses (MTD) were used for the selection of the high dose in subsequent longer-term studies. In addition to leveraging TK data to guide dose level selection, the integrated program was also used to select the most appropriate method of oral administration (i.e., gavage versus dietary) of test materials for rat and rabbit developmental toxicity studies. The integrated TK data obtained across toxicity studies (without the use of additional/satellite animals) provided data critical to understanding differences in response across doses, species, strains, sexes, and life stages. Such data should also be useful in mode of action studies and to improve human risk assessments. Copyright © 2012 Elsevier Inc. All rights reserved.
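
    The diurnal systemic dose AUC(24h) referred to above is conventionally obtained from a plasma concentration-time profile by the trapezoidal rule; a minimal sketch (sampling times and concentrations are fabricated for illustration):

      import numpy as np

      # Hypothetical plasma concentration-time profile over one day after dosing.
      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])      # h
      c = np.array([0.0, 1.8, 2.6, 2.1, 1.3, 0.6, 0.3, 0.05])       # ug/mL

      auc_24h = np.trapz(c, t)    # linear trapezoidal rule
      print(f"AUC(24h) = {auc_24h:.2f} ug*h/mL")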

  11. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    Energy Technology Data Exchange (ETDEWEB)

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had the responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and Sandia's main contribution was guidance on OO technologies. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  12. A hybrid electron and photon IMRT planning technique that lowers normal tissue integral patient dose using standard hardware.

    Science.gov (United States)

    Rosca, Florin

    2012-06-01

    To present a mixed electron and photon IMRT planning technique using electron beams with an energy range of 6-22 MeV and standard hardware that minimizes integral dose to patients for targets as deep as 7.5 cm. Ten brain cases, two lung, a thyroid, an abdominal, and a parotid case were planned using two planning techniques: a photon-only IMRT (IMRT) versus a mixed modality treatment (E+IMRT) that includes an en face electron beam and a photon IMRT portion that ensures a uniform target coverage. The electron beam is delivered using a regular cutout placed in an electron cone. The electron energy was chosen to provide a good trade-off between minimizing integral dose and generating a uniform, deliverable plan. The authors chose electron energies that cover the deepest part of the PTV with the 65%-70% isodose line. The normal tissue integral dose, the dose for ring structures around the PTV, and the volumes of the 75%, 50%, and 25% isosurfaces were used to compare the dose distributions generated by the two planning techniques. The normal tissue integral dose was lowered by about 20% by the E+IMRT plans compared to the photon-only IMRT ones for most studied cases. With the exception of lungs, the dose reduction associated with the E+IMRT plans was more pronounced further away from the target. The average dose ratios delivered to the 0-2 cm and the 2-4 cm ring structures for brain patients for the two planning techniques were 89.6% and 70.8%, respectively. The enhanced dose sparing away from the target for the brain patients can also be observed in the ratio of the 75%, 50%, and 25% isodose line volumes for the two techniques, which decreases from 85.5% to 72.6% and further to 65.1%, respectively. For lungs, the lateral electron beams used in the E+IMRT plans were perpendicular to the mostly anterior/posterior photon beams, generating much more conformal plans. The authors proved that even using the existing electron delivery hardware, a mixed electron/photon planning technique can substantially reduce the integral dose to normal tissue.

  13. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  14. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  15. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  16. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
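
    A compact sketch of Gaussian-kernel PCA applied to two-date change detection (all data are synthetic; the bandwidth and change magnitude are invented for the example):

      import numpy as np

      def kernel_pca_scores(X, sigma, n_components=2):
          # Gaussian kernel matrix, double centering, then projection of the
          # observations onto the leading kernel principal axes.
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-d2 / (2.0 * sigma**2))
          n = len(X)
          J = np.eye(n) - np.ones((n, n)) / n
          Kc = J @ K @ J
          w, V = np.linalg.eigh(Kc)
          idx = np.argsort(w)[::-1][:n_components]
          alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
          return Kc @ alphas

      # Two 'bands' = the same pixels observed at two times; when no-change
      # dominates, change shows up on the second kernel principal component.
      rng = np.random.default_rng(2)
      base = rng.uniform(0.0, 1.0, 500)
      t1 = base + rng.normal(0.0, 0.02, 500)
      t2 = base + rng.normal(0.0, 0.02, 500)
      t2[:25] += 0.5                                     # a few changed pixels
      scores = kernel_pca_scores(np.column_stack([t1, t2]), sigma=0.3)
      print("mean |PC2|, changed vs unchanged:",
            np.abs(scores[:25, 1]).mean(), np.abs(scores[25:, 1]).mean())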

  17. Recovery of the proliferative and functional integrity of mouse bone marrow in long-term cultures established after whole-body irradiation at different doses and dose rates

    International Nuclear Information System (INIS)

    Bierkens, J.G.; Hendry, J.H.; Testa, N.G.

    1991-01-01

    Injury inflicted upon the bone marrow stroma following whole-body irradiation and its repair over a 1-year period has been assessed in murine long-term bone marrow cultures established at increasing time intervals after irradiation. Different doses at different dose rates (10 Gy at 0.05 cGy/min, 4.5 Gy and 10 Gy at 1.6 cGy/min, and 4 x 4.5 Gy [3 weeks between doses] at 60 cGy/min) were chosen so as to maximize differences in effect in the stroma. The cellularity of the adherent layer in long-term cultures established 1 month after irradiation was reduced by 40%-90% depending on the dose and dose rate. Simultaneous with the poor ability of the marrow to form adherent layers, the cumulative spleen colony-forming unit (CFU-S) and granulocyte-macrophage colony-forming cell (GM-CFC) production over a 7-week period was reduced to 0% and 30% of control cultures, respectively. The slow recovery of the adherent layer was paralleled by an increase in the numbers of CFU-S and GM-CFC in the supernatant. Cultures established from repeatedly irradiated mice performed poorly over the entire 1-year period. Whereas the regeneration of the stroma was near complete 1 year after irradiation, the CFU-S and GM-CFC levels reached only between 50% and 80% of control cultures, respectively. Also, the concentration of CFU-S and GM-CFC in the supernatant remained persistently lower in cultures established from irradiated mice as compared to control cultures. The levels of sulfated glycosaminoglycans, which have been implicated in the establishment of the functional integrity of the microenvironment, were not reduced in the adherent layers at any time after irradiation. These results indicate that the regeneration of the stroma is accompanied by an incomplete recovery of active hemopoiesis in vitro

  18. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
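
    A simplified sketch of the 'partial map' idea (frequency masking combined with a plain regularized inverse filter; the paper's actual model couples the map with wavelet/learning-based priors and an E-M algorithm, none of which is reproduced here, and all thresholds below are invented):

      import numpy as np

      def partial_wiener_deconv(blurred, kernel_est, tau=0.1, eps=1e-2):
          # Partial map M: keep only Fourier entries where the estimated kernel
          # response is large enough to be trusted; invert only on that support.
          H = np.fft.fft2(kernel_est, s=blurred.shape)
          B = np.fft.fft2(blurred)
          M = np.abs(H) > tau
          wiener = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
          X = np.where(M, B * wiener, B)                 # unreliable entries left untouched
          return np.real(np.fft.ifft2(X))

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))
      k_true = np.zeros((5, 5)); k_true[2, :] = 0.2      # horizontal motion blur
      blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k_true, s=img.shape)))
      k_est = k_true + rng.normal(0.0, 0.02, k_true.shape)   # inaccurate kernel estimate
      restored = partial_wiener_deconv(blurred, k_est)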

  19. Comparison of IMRT Treatment Plans Between Linac and Helical Tomotherapy Based on Integral Dose and Inhomogeneity Index

    International Nuclear Information System (INIS)

    Shi Chengyu; Penagaricano, Jose; Papanikolaou, Niko

    2008-01-01

    Intensity modulated radiotherapy (IMRT) is an advanced treatment technology for radiation therapy. There are several treatment planning systems (TPS) that can generate IMRT plans. These plans may show different inhomogeneity indices for the planning target volume (PTV) and integral doses to organs at risk (OAR). In this study, we compared clinical cases covering different anatomical treatment sites, including head and neck, brain, lung, prostate, pelvis, and the cranio-spinal axis. Two treatment plans were developed for each case using the Pinnacle3 and helical tomotherapy (HT) TPS. The inhomogeneity index of the PTV and the non-tumor integral dose (NTID) were calculated and compared for each case. Despite the difference in the number of effective beams, in several cases NTID did not increase for HT as compared to the step-and-shoot delivery method. Six helical tomotherapy treatment plans for different treatment sites have been analyzed and compared against corresponding step-and-shoot plans generated with the Pinnacle3 planning system. Results show that HT may produce plans with smaller integral doses to healthy organs and fairly homogeneous doses to the target as compared to linac-based step-and-shoot IMRT planning in special treatment sites such as the cranio-spinal axis.

  20. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the anion-extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles.

  1. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but, more generally, for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  2. Optimization of total arc degree for stereotactic radiotherapy by using integral biologically effective dose and irradiated volume

    International Nuclear Information System (INIS)

    Lim, Do Hoon; Kim, Dae Yong; Lee, Myung Za; Chun, Ha Chung

    2001-01-01

    To find the optimal total arc degree for protecting normal brain tissue from high-dose radiation in stereotactic radiotherapy planning. Using the Xknife-3 planning system and a 4 MV linear accelerator, plans were generated for various parameter values: one isocenter; collimator diameters of 12, 20, 30, 40, 50, and 60 mm; total arc degrees of 100 deg, 200 deg, 300 deg, 400 deg, 500 deg, and 600 deg; and arc intervals of 30 deg or 45 deg. The completed plans were compared using V50 (the volume of normal brain receiving a high radiation dose) and the integral biologically effective dose. At a 30 deg arc interval, V50 decreased with increasing total arc degree for every collimator diameter. At a 45 deg arc interval, V50 decreased with increasing total arc degree up to 400 deg, but increased again at 500 deg and 600 deg. At a 30 deg arc interval, the integral biologically effective dose likewise decreased with increasing total arc degree for every collimator diameter. At a 45 deg arc interval with collimator diameters of less than 40 mm, the integral biologically effective dose decreased with increasing total arc degree; with 50 and 60 mm collimator diameters, it decreased up to 400 deg of total arc degree but increased again at 500 deg and 600 deg. In stereotactic radiotherapy planning for brain lesions, a total arc degree of 400 deg is therefore optimal. In particular, when a collimator of 50 mm diameter or more must be used, total arc degrees of 500 deg and 600 deg increase both V50 and the integral biologically effective dose; planning with 400 deg of total arc degree can therefore increase the therapeutic ratio and produce an effective outcome.

  3. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  4. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  5. Implementation of a revised numerical integration technique into QAD

    International Nuclear Information System (INIS)

    De Gangi, N.L.

    1983-01-01

    A technique for numerical integration through a uniform volume source is developed and applied to gamma radiation transport shielding problems. The method is based on performing a numerical angular and ray point kernel integration and is incorporated into the QAD-CG computer code (i.e. QAD-UE). Several test problems are analyzed with this technique, and the convergence properties of the method are examined. Gamma dose rates from a large tank and post-LOCA dose rates inside a containment building are evaluated; results are consistent with data from other methods. The new technique provides several advantages. User setup requirements for large volume source problems are reduced relative to standard point kernel requirements, and calculational efficiencies are improved: an order of magnitude improvement is seen with a test problem.
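
    The angular-plus-ray form of the point-kernel integral can be illustrated on the simplest case, a uniform self-attenuating spherical source viewed from an external point: the 1/(4πr²) kernel cancels the r² of the volume element, so each direction contributes (1 - e^(-μc))/μ for chord length c (uncollided component only, buildup neglected; geometry and data below are invented):

      import numpy as np

      def chord_length(theta, D, a):
          # Chord of a sphere (radius a, centre at distance D from the dose point)
          # cut by a ray leaving the dose point at angle theta off the centre line.
          s = D * np.sin(theta)
          return np.where(s < a, 2.0 * np.sqrt(np.maximum(a**2 - s**2, 0.0)), 0.0)

      def uncollided_flux(D, a, mu, Sv, n_theta=2001):
          # Angular integration of the ray point kernel over the cone subtended
          # by the source; azimuthal symmetry gives dOmega = 2*pi*sin(theta) dtheta.
          theta = np.linspace(0.0, np.arcsin(a / D), n_theta)
          ray = (1.0 - np.exp(-mu * chord_length(theta, D, a))) / mu
          return Sv / (4 * np.pi) * 2 * np.pi * np.trapz(ray * np.sin(theta), theta)

      # Uniform self-attenuating sphere, radius 50 cm, detector at 100 cm.
      print(uncollided_flux(D=100.0, a=50.0, mu=0.05, Sv=1.0))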

  6. The effect of low dose ionizing radiation on homeostasis and functional integrity in an organotypic human skin model

    Energy Technology Data Exchange (ETDEWEB)

    Neubeck, Claere von [German Cancer Consortium DKTK partner site Dresden, OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstrasse 74, 01307 Dresden (Germany); German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Geniza, Matthew J. [Molecular and Cellular Biology Program, Oregon State University, Corvallis OR 97331 (United States); Kauer, Paula M.; Robinson, R. Joe; Chrisler, William B. [Health Impacts and Exposure Science, Pacific Northwest National Laboratory, Richland WA 99352 (United States); Sowa, Marianne B., E-mail: marianne.sowa@pnnl.gov [Health Impacts and Exposure Science, Pacific Northwest National Laboratory, Richland WA 99352 (United States)

    2015-05-15

    Highlights: • Low doses of high LET radiation influence skin homeostasis. • Effects on proliferation and differentiation profiles are LET dependent. • Skin barrier function is not compromised following low dose exposure. - Abstract: Outside the protection of Earth's atmosphere, astronauts are exposed to low doses of high linear energy transfer (LET) radiation. Future NASA plans for deep space missions or a permanent settlement on the moon are limited by the health risks associated with space radiation exposures. There is a paucity of direct epidemiological data for low dose exposures to space radiation-relevant high LET ions. Health risk models are used to estimate the risk for such exposures, though these models are based on high dose experiments. There is increasing evidence, however, that low and high dose exposures result in different signaling events at the molecular level, and may involve different response mechanisms. Further, despite their low abundance, high LET particles have been identified as the major contributor to health risk during manned space flight. The human skin is exposed in every external radiation scenario, making it an ideal epithelial tissue model in which to study radiation induced effects. Here, we exposed an in vitro three dimensional (3-D) human organotypic skin tissue model to low doses of high LET oxygen (O), silicon (Si) and iron (Fe) ions. We measured proliferation and differentiation profiles in the skin tissue and examined the integrity of the skin's barrier function. We discuss the role of secondary particles in changing the proportion of cells receiving a radiation dose, emphasizing the possible impact on radiation-induced health issues in astronauts.

  7. Integrated boost IMRT with FET-PET-adapted local dose escalation in glioblastomas. Results of a prospective phase II study

    International Nuclear Information System (INIS)

    Piroth, M.D.; Pinkawa, M.; Holy, R.; Forschungszentrum Juelich GmbH

    2012-01-01

    Dose escalations above 60 Gy based on MRI have not led to prognostic benefits in glioblastoma patients yet. With positron emission tomography (PET) using [18F]fluoroethyl-L-tyrosine (FET), tumor coverage can be optimized with the option of regional dose escalation in the area of viable tumor tissue. In a prospective phase II study (January 2008 to December 2009), 22 patients (median age 55 years) received radiochemotherapy after surgery. The radiotherapy was performed as an MRI and FET-PET-based integrated-boost intensity-modulated radiotherapy (IMRT). The prescribed dose was 72 and 60 Gy (single dose 2.4 and 2.0 Gy, respectively) for the FET-PET- and MR-based PTV-FET(72 Gy) and PTV-MR(60 Gy). FET-PET and MRI were performed routinely for follow-up. Quality of life and cognitive aspects were recorded by the EORTC-QLQ-C30/QLQ Brain20 and Mini-Mental Status Examination (MMSE), while the therapy-related toxicity was recorded using the CTC3.0 and RTOG scores. Median overall survival (OS) and disease-free survival (DFS) were 14.8 and 7.8 months, respectively. All local relapses were detected at least partly within the 95% dose volume of PTV-MR(60 Gy). No relevant radiotherapy-related side effects were observed (except alopecia). In 2 patients, a pseudoprogression was observed in the MRI. Tumor progression could be excluded by FET-PET, a finding confirmed by further MRI and FET-PET imaging. No significant changes were observed in MMSE scores and in the EORTC QLQ-C30/QLQ-Brain20 questionnaires. Our dose escalation concept with a total dose of 72 Gy, based on FET-PET, did not lead to a survival benefit. Acute and late toxicity were not increased, compared with historical controls and published dose-escalation studies. (orig.)

  8. Integrated boost IMRT with FET-PET-adapted local dose escalation in glioblastomas. Results of a prospective phase II study

    Energy Technology Data Exchange (ETDEWEB)

    Piroth, M.D.; Pinkawa, M.; Holy, R. [RWTH Aachen University Hospital (Germany). Dept. of Radiation Oncology; Forschungszentrum Juelich GmbH (DE). Juelich-Aachen Research Alliance (JARA) - Section JARA-Brain] (and others)

    2012-04-15

    Dose escalations above 60 Gy based on MRI have not led to prognostic benefits in glioblastoma patients yet. With positron emission tomography (PET) using [18F]fluoroethyl-L-tyrosine (FET), tumor coverage can be optimized with the option of regional dose escalation in the area of viable tumor tissue. In a prospective phase II study (January 2008 to December 2009), 22 patients (median age 55 years) received radiochemotherapy after surgery. The radiotherapy was performed as an MRI and FET-PET-based integrated-boost intensity-modulated radiotherapy (IMRT). The prescribed dose was 72 and 60 Gy (single dose 2.4 and 2.0 Gy, respectively) for the FET-PET- and MR-based PTV-FET(72 Gy) and PTV-MR(60 Gy). FET-PET and MRI were performed routinely for follow-up. Quality of life and cognitive aspects were recorded by the EORTC-QLQ-C30/QLQ Brain20 and Mini-Mental Status Examination (MMSE), while the therapy-related toxicity was recorded using the CTC3.0 and RTOG scores. Median overall survival (OS) and disease-free survival (DFS) were 14.8 and 7.8 months, respectively. All local relapses were detected at least partly within the 95% dose volume of PTV-MR(60 Gy). No relevant radiotherapy-related side effects were observed (except alopecia). In 2 patients, a pseudoprogression was observed in the MRI. Tumor progression could be excluded by FET-PET, a finding confirmed by further MRI and FET-PET imaging. No significant changes were observed in MMSE scores and in the EORTC QLQ-C30/QLQ-Brain20 questionnaires. Our dose escalation concept with a total dose of 72 Gy, based on FET-PET, did not lead to a survival benefit. Acute and late toxicity were not increased, compared with historical controls and published dose-escalation studies. (orig.)

  9. Feasibility study on an integrated AEC-grid device for the optimization of image quality and exposure dose in mammography

    Science.gov (United States)

    Kim, Kyo-Tae; Yun, Ryang-Young; Han, Moo-Jae; Heo, Ye-Ji; Song, Yong-Keun; Heo, Sung-Wook; Oh, Kyeong-Min; Park, Sung-Kwang

    2017-10-01

    Currently, in the field of radiation diagnosis, mammography is used for the early detection of breast cancer, and grids are studied as a means of producing high-quality images. Although the grid ratio, which governs the scatter removal rate, must be increased to improve image quality, doing so increases the total exposure dose. The use of automatic exposure control (AEC) is recommended to minimize this problem, but in existing mammography equipment, unlike general radiography equipment, the AEC sensor is mounted on the back of the detector; it is therefore strongly influenced by the detector and its supporting device, making the exposure dose difficult to control. Accordingly, in this study, an integrated AEC-grid device that performs the AEC and grid functions simultaneously was used to minimize the unnecessary exposure dose while removing scattered radiation, thereby achieving superior image quality.

  10. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
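
    A toy version of the kernel-approximation pipeline (a Nyström feature map followed by a linear pairwise squared-hinge ranker, trained here by plain gradient descent rather than the truncated Newton method of the paper; all data are synthetic):

      import numpy as np

      def nystrom_features(X, landmarks, gamma=1.0):
          # Nystrom approximation: features Z with Z @ Z.T ~ K (the full kernel matrix).
          def rbf(A, B):
              d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
              return np.exp(-gamma * d2)
          w, V = np.linalg.eigh(rbf(landmarks, landmarks))
          inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-10))) @ V.T
          return rbf(X, landmarks) @ inv_sqrt

      def rank_sq_hinge(Z, pairs, C=1.0, lr=0.05, epochs=300):
          # Linear pairwise ranker on the approximate features with squared hinge
          # loss; gradient of 0.5*||w||^2 + C * mean squared hinge over active pairs.
          w = np.zeros(Z.shape[1])
          diff = Z[pairs[:, 0]] - Z[pairs[:, 1]]       # preferred minus non-preferred
          for _ in range(epochs):
              margin = 1.0 - diff @ w
              act = margin > 0
              w -= lr * (w - 2 * C * (margin[act][:, None] * diff[act]).mean(0))
          return w

      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 5))
      score = np.sin(X[:, 0]) + X[:, 1] ** 2            # nonlinear relevance score
      pairs = np.array([(i, j) for i in range(200) for j in range(200)
                        if score[i] > score[j] + 0.5][:2000])
      Z = nystrom_features(X, X[rng.choice(200, 30, replace=False)])
      w = rank_sq_hinge(Z, pairs)
      print("pairwise accuracy:", ((Z[pairs[:, 0]] - Z[pairs[:, 1]]) @ w > 0).mean())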

  11. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  12. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  13. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  14. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  15. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
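
    For reference, Silverman's rule of thumb sets the bandwidth from the sample size and a robust spread estimate; a minimal sketch (the score data are fabricated):

      import numpy as np

      def silverman_bandwidth(x):
          # h = 0.9 * min(sample sd, IQR / 1.34) * n**(-1/5)
          x = np.asarray(x, dtype=float)
          iqr = np.subtract(*np.percentile(x, [75, 25]))
          return 0.9 * min(x.std(ddof=1), iqr / 1.34) * x.size ** (-0.2)

      scores = np.random.default_rng(4).normal(25, 6, 500).round()   # fabricated test scores
      print(f"h = {silverman_bandwidth(scores):.3f}")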

  16. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  17. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
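
    The pairwise-kernel construction itself is easy to sketch; below, a simple k-mer spectrum kernel stands in for the transducer-based rational kernels of the paper (the sequences are hypothetical):

      def spectrum_kernel(s, t, k=3):
          # Simple k-mer (spectrum) kernel between two strings -- a stand-in for
          # the weighted finite-state transducer kernels used in the paper.
          def counts(seq):
              c = {}
              for i in range(len(seq) - k + 1):
                  c[seq[i:i + k]] = c.get(seq[i:i + k], 0) + 1
              return c
          cs, ct = counts(s), counts(t)
          return float(sum(v * ct.get(m, 0) for m, v in cs.items()))

      def pairwise_kernel(pair1, pair2, base=spectrum_kernel):
          # Tensor-product pairwise kernel, symmetrized over pair order:
          # K((a,b),(c,d)) = k(a,c) k(b,d) + k(a,d) k(b,c)
          (a, b), (c, d) = pair1, pair2
          return base(a, c) * base(b, d) + base(a, d) * base(b, c)

      pair1 = ("MKTAYIAKQR", "MKLVINGKTL")    # hypothetical protein sequences
      pair2 = ("MKTAYIGKQR", "MKLVVNGKTL")
      print(pairwise_kernel(pair1, pair2))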

  18. Monte Carlo evaluation of a photon pencil kernel algorithm applied to fast neutron therapy treatment planning

    Science.gov (United States)

    Söderberg, Jonas; Alm Carlsson, Gudrun; Ahnesjö, Anders

    2003-10-01

    When dedicated software is lacking, treatment planning for fast neutron therapy is sometimes performed using dose calculation algorithms designed for photon beam therapy. In this work Monte Carlo derived neutron pencil kernels in water were parametrized using the photon dose algorithm implemented in the Nucletron TMS (treatment management system) treatment planning system. A rectangular fast-neutron fluence spectrum with energies 0-40 MeV (resembling a polyethylene filtered p(41)+ Be spectrum) was used. Central axis depth doses and lateral dose distributions were calculated and compared with the corresponding dose distributions from Monte Carlo calculations for homogeneous water and heterogeneous slab phantoms. All absorbed doses were normalized to the reference dose at 10 cm depth for a field of radius 5.6 cm in a 30 × 40 × 20 cm3 water test phantom. Agreement to within 7% was found in both the lateral and the depth dose distributions. The deviations could be explained as due to differences in size between the test phantom and that used in deriving the pencil kernel (radius 200 cm, thickness 50 cm). In the heterogeneous phantom, the TMS, with a directly applied neutron pencil kernel, and Monte Carlo calculated absorbed doses agree approximately for muscle but show large deviations for media such as adipose or bone. For the latter media, agreement was substantially improved by correcting the absorbed doses calculated in TMS with the neutron kerma factor ratio and the stopping power ratio between tissue and water. The multipurpose Monte Carlo code FLUKA was used both in calculating the pencil kernel and in direct calculations of absorbed dose in the phantom.
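
    The pencil-kernel dose model underlying such algorithms is a lateral convolution of the incident fluence with a depth-dependent kernel; a minimal FFT-based sketch (the field size, grid, and Gaussian kernel width are invented for illustration):

      import numpy as np

      def dose_plane(fluence, pencil_kernel):
          # Dose in a plane at fixed depth: 2-D convolution of the incident fluence
          # with the pencil kernel at that depth (FFT-based, periodic boundaries).
          return np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(pencil_kernel)))

      n, dx = 128, 0.5                                   # 0.5 cm grid
      x = (np.arange(n) - n // 2) * dx
      X, Y = np.meshgrid(x, x)
      fluence = ((np.abs(X) <= 5) & (np.abs(Y) <= 5)).astype(float)   # 10 x 10 cm field
      sigma = 1.2                                        # lateral kernel spread, cm
      kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
      kernel /= kernel.sum()                             # conserve deposited energy
      dose = dose_plane(fluence, np.fft.ifftshift(kernel))
      print("centre-to-edge dose ratio:", dose[n // 2, n // 2] / dose[n // 2, n // 2 + 10])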

  19. Integrated task plans for the Hanford Environmental Dose Reconstruction Project, June 1992 through May 1994

    International Nuclear Information System (INIS)

    Shipler, D.B.

    1993-09-01

    The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses from Hanford Site operations since 1944 to representative individuals. The primary objective of work to be performed through May 1994 is to determine the project's appropriate scope: space, time, radionuclides, pathways and representative individuals; determine the project's appropriate level of accuracy/level of uncertainty in dose estimates; complete model and data development; and estimate doses for the Hanford Thyroid Disease Study and representative individuals. A major objective of the HEDR Project is to estimate doses to the thyroid of individuals who were exposed to iodine-131. A principal pathway for many of these individuals was milk from cows that ate vegetation contaminated by iodine-131 released into the air from Hanford facilities. The plan for June 1992 through May 1994 has been prepared based on activities and budgets approved by the Technical Steering Panel (TSP) at its meetings on January 7--9, 1993 and February 25--26, 1993. The activities can be divided into three broad categories: (1) computer code and data development activities, (2) calculation of doses, and (3) technical and communication support to the TSP and the TSP Native American Working Group (NAWG). The following activities will be conducted to accomplish project objectives through May 1994

  20. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
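
    Sampling from a mixture of rounded Gaussians is straightforward, which is part of the appeal; a minimal sketch (the weights, means, and scales are fabricated):

      import numpy as np

      def rounded_gaussian_mixture(weights, means, sds, size, rng):
          # Sample a latent value from a Gaussian mixture component, then map it
          # to the non-negative integers by rounding (negatives collapse to zero).
          comp = rng.choice(len(weights), p=weights, size=size)
          latent = rng.normal(np.asarray(means)[comp], np.asarray(sds)[comp])
          return np.maximum(np.rint(latent), 0).astype(int)

      rng = np.random.default_rng(5)
      y = rounded_gaussian_mixture([0.7, 0.3], [1.0, 8.0], [0.4, 2.0], 10000, rng)
      # Narrow components allow under-dispersion (variance < mean), which a
      # mixture of Poissons cannot represent.
      print("mean:", y.mean(), "variance:", y.var())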

  1. SIGMA/B, Doses in Space Vehicle for Multiple Trajectories, Various Radiation Source

    International Nuclear Information System (INIS)

    Jordan, T.M.

    2003-01-01

    1 - Description of problem or function: SIGMA/B calculates radiation dose at arbitrary points inside a space vehicle, taking into account vehicle geometry, heterogeneous placement of equipment and stores, vehicle materials, time-weighted astronaut positions and many radiation sources from mission trajectories, e.g. geomagnetically trapped protons and electrons, solar flare particles, galactic cosmic rays and their secondary radiations. The vehicle geometry, equipment and supplies, and man models are described by quadric surfaces. The irradiating flux field may be anisotropic. The code can be used to perform simultaneous dose calculations for multiple vehicle trajectories, each involving several radiation sources. Results are presented either as dose as a function of shield thickness, or the dose received through designated outer sections of the vehicle. 2 - Method of solution: Automatic sectoring of the vehicle is performed by a Simpson's rule integration over angle; the dose is computed by a numerical angular integration of the dose attenuation kernels about the dose points. The kernels are curve-fit functions constructed from input data tables. 3 - Restrictions on the complexity of the problem: The code uses variable dimensioning techniques to store data. The only restriction on problem size is the available core storage
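
    A minimal sketch of the numerical angular integration described under the method of solution, for an isotropic incident flux and an angle-dependent shield thickness; the thickness function and the exponential attenuation kernel are invented stand-ins for SIGMA/B's curve-fit kernels and quadric-surface ray tracing.

      import numpy as np
      from scipy.integrate import simpson

      mu = 0.5   # attenuation coefficient (1/cm), placeholder

      def thickness(theta, phi):
          """Placeholder shield thickness (cm) seen from the dose point in
          direction (theta, phi); a real code traces quadric surfaces."""
          return 3.0 + 1.5 * np.cos(theta) ** 2

      def kernel(t):
          """Placeholder dose attenuation kernel for slab thickness t."""
          return np.exp(-mu * t)

      theta = np.linspace(0.0, np.pi, 91)
      phi = np.linspace(0.0, 2.0 * np.pi, 181)
      TH, PH = np.meshgrid(theta, phi, indexing="ij")

      # Dose = (1/4 pi) * integral of K(t(Omega)) over solid angle,
      # evaluated with Simpson's rule over both angles.
      integrand = kernel(thickness(TH, PH)) * np.sin(TH)
      dose = simpson(simpson(integrand, x=phi, axis=1), x=theta) / (4.0 * np.pi)
      print(f"angular-averaged transmitted dose fraction: {dose:.4f}")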

  2. Integrated Task Plans for the Hanford Environmental Dose Reconstruction Project, FY 1992 through May 1994

    International Nuclear Information System (INIS)

    Shipler, D.B.

    1992-09-01

    The purpose of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate radiation doses from Hanford Site operations since 1944 to populations and individuals. The primary objective of work to be performed through May 1994 is to (1) determine the project's appropriate scope (space, time, radionuclides, pathways and individuals/population groups), (2) determine the project's appropriate level of accuracy (level of uncertainty in dose estimates) for the project, (3) complete model and data development, and (4) estimate doses for the Hanford Thyroid Disease Study (HTDS), representative individuals, and special populations as described herein. The plan for FY 1992 through May 1994 has been prepared based on activities and budgets approved by the Technical Steering Panel (TSP) at its meetings on August 19-20, 1991, and April 23-25, 1992. The activities can be divided into four broad categories: (1) model and data evaluation activities, (2) additional dose estimates, (3) model and data development activities, and (4) technical and communication support

  3. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  4. Use of cadmium sulphide to measure integrated dose in short-time irradiation

    International Nuclear Information System (INIS)

    Nimnual, S.

    1975-01-01

    An experiment was made to measure the dose from a short burst of X-rays, on the order of 1 second or less, by means of a cadmium sulphide photoconductive cell. If protected from light, the CdS cell has a very high resistance, such that it does not discharge a capacitor appreciably. During irradiation, however, the resistance decreases temporarily and an amount of charge leaks from the capacitor through the CdS cell. The result of this experiment shows that the principle works very well, but it is necessary to add a fixed high resistance of about 10⁷ ohms into the circuit in order to get results independent of the dose rate. The equipment used in this experiment can measure a dose as low as 6 mR
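
    The measurement principle invites a small worked example: during the burst the CdS resistance drops, the capacitor discharges through the cell plus the fixed series resistor, and the charge lost serves as the dose surrogate. All component values below are invented for illustration.

      import numpy as np

      C = 0.1e-6          # storage capacitor (F), illustrative
      V0 = 100.0          # initial capacitor voltage (V), illustrative
      R_fixed = 1.0e7     # fixed series resistor (ohm), as suggested in the abstract
      R_cds = 5.0e6       # CdS resistance while irradiated (ohm), illustrative
      t_burst = 1.0       # burst duration (s)

      # RC discharge through the series combination during the burst.
      R = R_fixed + R_cds
      V_end = V0 * np.exp(-t_burst / (R * C))
      dQ = C * (V0 - V_end)   # charge leaked; proportional to dose
      print(f"voltage drop: {V0 - V_end:.2f} V, charge leaked: {dQ * 1e6:.3f} uC")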

  5. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    Science.gov (United States)

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy through adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify their validity.

  6. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  7. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  8. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
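
    The adaptive-kernel idea in this record and the next can be sketched with Abramson's square-root rule, where each sample receives a bandwidth inversely proportional to the square root of a pilot density estimate (the boosting step itself is omitted here):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 0.5, 100)])

      def gauss(u):
          return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

      def kde(grid, data, h):
          """Fixed-bandwidth Gaussian KDE."""
          return gauss((grid[:, None] - data[None, :]) / h).mean(axis=1) / h

      h0 = 1.06 * x.std() * len(x) ** (-0.2)        # Silverman's rule for the pilot
      pilot = kde(x, x, h0)                          # pilot density at the samples
      lam = np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)   # Abramson local factors
      h_i = h0 * lam                                 # per-sample adaptive bandwidths

      grid = np.linspace(x.min() - 1, x.max() + 1, 400)
      f_adapt = (gauss((grid[:, None] - x[None, :]) / h_i) / h_i).mean(axis=1)
      print("adaptive KDE integrates to ~", round(f_adapt.sum() * (grid[1] - grid[0]), 3))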

  9. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  10. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  11. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  12. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  14. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  15. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then can select what

  16. Effects of indoor residence on radiation doses from routine releases of radionuclides to the atmosphere

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1980-01-01

    Dose reduction factors from indoor residence during routine releases of radionuclides to the atmosphere were studied using models that are suitable for application to arbitrary source terms. Dose reduction factors for internal exposure to inhaled radionuclides account for air ventilation and deposition on inside building surfaces. Estimated internal dose reduction factors are approx. 0.2 to 0.8 for particulates and 0.07 to 0.4 for radioiodine. Dose reduction factors for external photon exposure from airborne and surface-deposited sources are based on the point-kernel integration method. Values for source terms from a fuel reprocessing plant and a hypothetical reactor accident are within a factor of 2 of the value 0.5 adopted by the US Nuclear Regulatory Commission (NRC) for population dose assessments. For the release at Three Mile Island nuclear station, however, the external dose reduction factor may be an order of magnitude less than the value adopted by the NRC

  17. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems. Digital Signal Processing with Kernel Methods reviews the milestones in combining classical digital signal processing models with advanced statistical learning tools based on kernel machines. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  18. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm with kernel extreme learning machine (KELM). Wavelet analysis uses bases localized in time and frequency to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) thereby maximizes its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm incorporates significant wavelet kernel functions iteratively by virtue of the Householder matrix, producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
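
    A condensed sketch of the kernel-ELM machinery the paper builds on, using a common translation-invariant wavelet (Morlet-type) kernel; the regularization constant and kernel width are illustrative, and the parsimonious Householder-based selection step is not reproduced.

      import numpy as np

      def wavelet_kernel(X, Y, a=1.0):
          """Translation-invariant wavelet kernel (Morlet-type mother wavelet):
          k(x, y) = prod_i cos(1.75 (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2))."""
          D = X[:, None, :] - Y[None, :, :]
          return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, (80, 1))
      y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=80)

      C = 100.0                              # regularization constant, illustrative
      K = wavelet_kernel(X, X)
      beta = np.linalg.solve(K + np.eye(len(X)) / C, y)   # KELM output weights

      X_test = np.linspace(-1, 1, 5)[:, None]
      y_hat = wavelet_kernel(X_test, X) @ beta            # f(x) = k(x, X) beta
      print(np.round(y_hat, 3))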

  19. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  20. Clinical and symptomatological study of pigs subjected to a lethal dose of integral gamma irradiation

    International Nuclear Information System (INIS)

    Vaiman, M.; Guenet, J.-L.; Maas, J.; Nizza, P.

    1966-05-01

    Results are reported from a clinical and haematological study of Corsican pigs given whole-body exposure to an approximately lethal dose of γ radiation. The aim of this work was to characterize the course of the irradiation syndrome in pigs, so as to make it possible to devise further experiments, in particular in the therapeutic field. The dose received was 285 rads (measured as the absorption in the vertical antero-posterior medial plane). Data are presented on cyto-haematological changes in the circulating blood, followed from immediately after irradiation up to death, and on changes in the medullary cytology after irradiation. The clinical picture of lethal radiation injury in swine is described. (authors) [fr

  1. Online dose rate monitoring: Better information by using the IRMA concept (Integral Radiological Multidetector Arrays)

    International Nuclear Information System (INIS)

    Genrich, V.

    1989-01-01

    A new GM detector system has been developed for online environmental monitoring. The approach is unorthodox, but simple: A) Take a set of radiological probes and feed all their information to an 'intelligent' front-end. B) Elaborate some algorithms, so that the system will give out just one gamma dose rate (running over more than nine decades). C) If necessary, associate some additional sensors, and the system will exhibit better discriminating qualities for the detection of artificial nuclides in the environment. (orig.)

  2. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead.

  3. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tristructural-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  4. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schrödinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  5. Step-and-Shoot versus Compensator-based IMRT: Calculation and Comparison of Integral Dose in Non-tumoral and Target Organs in Prostate Cancer

    Directory of Open Access Journals (Sweden)

    Kaveh Shirani Tak Abi

    2015-05-01

    Introduction: Intensity-modulated radiotherapy (IMRT) is becoming an increasingly routine treatment method. IMRT can be delivered with conventional multileaf collimators (MLCs) and/or physical compensators. One of the most important factors in selecting an appropriate IMRT technique is integral dose, which is equal to the mean energy deposited in the total irradiated volume of the patient. The aim of the present study was to calculate and compare the integral dose in normal and target organs for two different IMRT procedures: step-and-shoot (SAS) and compensator-based IMRT. Materials and Methods: In this comparative study, five patients with prostate cancer were selected. The Module Integrated Radiotherapy System was applied, using three energy ranges. In both treatment planning methods, the integral dose decreased markedly with increasing energy. Results: Comparison of the two treatment methods showed that, on average, the integral dose to the body in SAS radiation therapy was about 1.62% lower than in compensator-based IMRT. In the planning target volume, rectum, bladder, and left and right femoral heads, the integral doses for the SAS method were 1.01%, 1.02%, 1.11%, 1.47%, and 1.40% lower than for compensator-based IMRT, respectively. Conclusion: Considering the treatment conditions, the definition of dose-volume constraints for healthy tissues, and the equal volume of organs in both treatment methods, SAS radiation therapy, by providing a lower integral dose, appears to be more advantageous and efficient for prostate cancer treatment than compensator-based IMRT.
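
    Integral dose as defined here (the mean energy deposited in the irradiated volume) reduces to a voxel sum over a dose grid; a minimal sketch with invented data, using energy (J) = dose (Gy) × mass (kg):

      import numpy as np

      rng = np.random.default_rng(2)
      dose = rng.uniform(0.0, 2.0, (50, 50, 30))   # dose per voxel (Gy), illustrative
      density = np.full_like(dose, 1.0e3)          # tissue density (kg/m^3), ~water
      voxel_volume = (2.0e-3) ** 3                 # 2 mm voxels (m^3)

      # Integral dose: total energy imparted = sum of dose * voxel mass.
      integral_dose = np.sum(dose * density * voxel_volume)   # joules
      print(f"integral dose: {integral_dose:.3f} J")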

  6. Bioconversions of Palm Kernel Cake and Rice Bran Mixtures by Trichoderma viride Toward Nutritional Contents

    OpenAIRE

    Yana Sukaryana; Umi Atmomarsono; Vitus D. Yunianto; Ejeng Supriyatna

    2010-01-01

    The objective of the research was to examine mixtures of palm kernel cake and rice bran fermented by Trichoderma viride. A completely randomized design in a 4 x 4 factorial pattern was used in this experiment. Factor I was the inoculum dose: D1 = 0%, D2 = 0.1%, D3 = 0.2%, D4 = 0.3%; factor II was the mixture of palm kernel cake and rice bran: T1 = 20:80%; T2 = 40:60%; T3 = 60:40%; T4 = 80:20%. Each treatment had three replicates. Fermentation was conduc...

  7. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values of Hydrogenated Palm Kernel Oil (A) = 0.16 g I2/100 g, Hydrogenated Palm Kernel Oil (B) = 0.20 g I2/100 g, Hydrogenated Palm Kernel Oil (C) = 0.24 g I2/100 g, and Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  8. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743

  9. Sustainability Instruction in High Doses: Results From Incorporation of Multiple InTeGrate Modules Into an Environmental Science Class

    Science.gov (United States)

    Rademacher, L. K.

    2017-12-01

    The Interdisciplinary Teaching about Earth for a Sustainable Future (InTeGrate) community has developed extensive courses and modules designed for broad adoption into geoscience classrooms in diverse environments. I participated in a three-semester research project designed to test the efficacy of incorporating "high doses" (a minimum of 3 modules or 18 class periods) of InTeGrate materials into a course, in my case an introductory environmental science class. InTeGrate materials were developed by groups of instructors from a range of institutions across the US. These materials emphasize systems thinking, interdisciplinary approaches, and sustainability, and those themes are woven throughout the modules. The three semesters included a control semester in which no InTeGrate materials were used, a pilot semester in which InTeGrate materials were tested, and a treatment semester in which the tested materials were modified as needed and fully implemented into the course. Data were collected each semester on student attitudes using the InTeGrate Attitudinal Instrument (pre and post), a subset of Geoscience Literacy Exam questions (pre and post), and a series of assessments and essay exam questions (post only). Although results suggest that learning gains were mixed, changes in attitudes pre- and post-instruction were substantial. Changes in attitudes regarding the importance of sustainable employers, the frequency of self-reported individual sustainable actions, and motivation for creating a sustainable society were observed in both the control and treatment semesters, with the treatment semester showing the greatest gains. Importantly, one of the biggest differences between the control and treatment semesters is the reported impact that the course had on influencing students' sustainable behaviors: the treatment semester course influenced students' sustainable behaviors far more than the control semester.

  10. Intensity-modulated radiotherapy (IMRT) and conventional three-dimensional conformal radiotherapy for high-grade gliomas: Does IMRT increase the integral dose to normal brain?

    International Nuclear Information System (INIS)

    Hermanto, Ulrich; Frija, Erik K.; Lii, MingFwu J.; Chang, Eric L.; Mahajan, Anita; Woo, Shiao Y.

    2007-01-01

    Purpose: To determine whether intensity-modulated radiotherapy (IMRT) treatment increases the total integral dose of nontarget tissue relative to the conventional three-dimensional conformal radiotherapy (3D-CRT) technique for high-grade gliomas. Methods and Materials: Twenty patients treated with 3D-CRT for glioblastoma multiforme were selected for a comparative dosimetric evaluation with IMRT. Original target volumes, organs at risk (OAR), and dose-volume constraints were used for replanning with IMRT. Predicted isodose distributions, cumulative dose-volume histograms of target volumes and OAR, normal tissue integral dose, target coverage, dose conformity, and normal tissue sparing with 3D-CRT and IMRT planning were compared. Statistical analyses were performed to determine differences. Results: In all 20 patients, IMRT maintained equivalent target coverage, improved target conformity (conformity index [CI] 95%: 1.52 vs. 1.38), and reduced the dose to the brainstem (Dmean by 19.8% and Dmax by 10.7%), optic chiasm (Dmean by 25.3% and Dmax by 22.6%), right optic nerve (Dmean by 37.3% and Dmax by 28.5%), and left optic nerve (Dmean by 40.6% and Dmax by 36.7%), p ≤ 0.01. This was achieved without increasing the total nontarget integral dose by more than 0.5%. Overall, total integral dose was reduced by 7-10% with IMRT, p < 0.001, without significantly increasing the 0.5-5 Gy low-dose volume. Conclusions: These results indicate that IMRT treatment for high-grade gliomas allows for improved target conformity and better critical tissue sparing and, importantly, does so without increasing integral dose and the volume of normal tissue exposed to low doses of radiation

  11. Irreducible kernels and nonperturbative expansions in a theory with pure m -> m interaction

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1983-01-01

    Recent results on the structure of the S matrix at the m-particle threshold (m ≥ 2) in a simplified m → m scattering theory with no subchannel interaction are extended to the Green function F on the basis of off-shell unitarity, through an adequate mathematical extension of some results of Fredholm theory: local two-sheeted or infinite-sheeted structure of F around s = (mμ)², depending on the parity of (m-1)(ν-1) (where μ > 0 is the mass and ν is the dimension of space-time), off-shell definition of the irreducible kernel U [which is the analogue of the K matrix in the two different parity cases, (m-1)(ν-1) odd or even], and related local expansion of F, for (m-1)(ν-1) even, in powers of σ^β ln σ (σ = (mμ)² - s). It is shown that each term in this expansion is the dominant contribution to a Feynman-type integral in which each vertex is a kernel U. The links between the kernel U and Bethe-Salpeter type kernels G of the theory are exhibited in both parity cases, as are the links between the above expansion of F and local expansions, in the Bethe-Salpeter type framework, of F_λ in terms of Feynman-type integrals in which each vertex is a kernel G and which include both dominant and subdominant contributions. (orig.)

  12. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
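
    For reference, one standard form of the exact hyperspherical heat kernel series referred to above, written in LaTeX for the unit sphere S^d (d ≥ 2) in terms of Gegenbauer polynomials; normalization conventions vary between authors:

      K_t(x, y) = \frac{1}{\Omega_d} \sum_{l=0}^{\infty} e^{-l(l+d-1)t}\,
                  \frac{2l+d-1}{d-1}\, C_l^{(d-1)/2}(x \cdot y),
      \qquad
      \Omega_d = \frac{2\pi^{(d+1)/2}}{\Gamma\!\left(\tfrac{d+1}{2}\right)},

    where x · y = cos θ is the cosine of the geodesic angle between the two points.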

  13. Population pharmacokinetic model of THC integrates oral, intravenous, and pulmonary dosing and characterizes short- and long-term pharmacokinetics.

    Science.gov (United States)

    Heuberger, Jules A A C; Guan, Zheng; Oyetayo, Olubukayo-Opeyemi; Klumpers, Linda; Morrison, Paul D; Beumer, Tim L; van Gerven, Joop M A; Cohen, Adam F; Freijer, Jan

    2015-02-01

    Δ(9)-Tetrahydrocannabinol (THC), the main psychoactive compound of Cannabis, is known to have a long terminal half-life. However, this characteristic is often ignored in pharmacokinetic (PK) studies of THC, which may affect the accuracy of predictions in different pharmacologic areas. For therapeutic use, for example, it is important to describe the terminal phase of THC accurately in order to describe accumulation of the drug. In early clinical research, the THC challenge test can be optimized through more accurate predictions of the dosing sequence and the wash-out between occasions in a crossover setting, which is mainly determined by the terminal half-life of the compound. The purpose of this study is to better quantify the long-term pharmacokinetics of THC. A population-based PK model for THC was developed describing the profile up to 48 h after an oral, intravenous, and pulmonary dose of THC in humans. In contrast to earlier models, the current model integrates all three major administration routes and covers the long terminal phase of THC. Results show that THC has fast initial and intermediate half-lives, while the apparent terminal half-life is long (21.5 h), with a clearance of 38.8 L/h. Because the current model characterizes the long-term pharmacokinetics, it can be used to assess the accumulation of THC in a multiple-dose setting and to forecast concentration profiles of the drug under many different dosing regimens or administration routes. Additionally, this model could provide helpful insights into the THC challenge test used for the development of (novel) compounds targeting the cannabinoid system for different therapeutic applications and could improve decision making in future clinical trials.
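
    Because the stated use case is predicting accumulation under repeated dosing, a back-of-the-envelope check can be written down from the reported terminal half-life alone, under a deliberately simplified mono-exponential approximation (the full population PK model is multi-phasic):

      import numpy as np

      t_half = 21.5                   # reported terminal half-life (h)
      k = np.log(2) / t_half          # terminal elimination rate constant (1/h)

      for tau in (8.0, 12.0, 24.0):   # dosing intervals (h), illustrative
          R = 1.0 / (1.0 - np.exp(-k * tau))   # steady-state accumulation ratio
          print(f"tau = {tau:4.1f} h -> accumulation ratio ~ {R:.2f}")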

  14. Effect of Acrocomia aculeata Kernel Oil on Adiposity in Type 2 Diabetic Rats.

    Science.gov (United States)

    Nunes, Ângela A; Buccini, Danieli F; Jaques, Jeandre A S; Portugal, Luciane C; Guimarães, Rita C A; Favaro, Simone P; Caldas, Ruy A; Carvalho, Cristiano M E

    2018-03-01

    The macauba palm (Acrocomia aculeata) is native to tropical America and is found mostly in the Cerrado and Pantanal biomes. The fruits provide an oily pulp, rich in long-chain fatty acids, and a kernel that comprises more than 50% lipids rich in medium-chain fatty acids (MCFA). Based on biochemical and nutritional evidence, MCFA are readily catabolized and can reduce body fat accumulation. In this study, an animal model was employed to evaluate the effect of Acrocomia aculeata kernel oil (AKO) on the blood glucose level and the fatty acid deposits in the epididymal adipose tissue. The A. aculeata kernel oil obtained by cold pressing presented suitable quality as an edible oil. Its fatty acid profile indicates a high concentration of MCFA, mainly lauric, capric and caprylic acids. Type 2 diabetic rats fed that kernel oil showed a reduction of blood glucose level in comparison with the diabetic control group, indicating that Acrocomia aculeata kernel oil has a hypoglycemic effect. A small fraction of the total dietary medium-chain fatty acids was accumulated in the epididymal adipose tissue of rats fed AKO at both low and high doses, and caprylic acid did not deposit at all.

  15. Low-dose non-enhanced CT versus full-dose contrast-enhanced CT in integrated PET/CT studies for the diagnosis of uterine cancer recurrence

    Energy Technology Data Exchange (ETDEWEB)

    Kitajima, Kazuhiro [Institute of Biomedical Research and Innovation, Department of PET Diagnosis, Kobe (Japan); Kobe University Graduate School of Medicine, Department of Radiology, Kobe (Japan); Suzuki, Kayo [Institute of Biomedical Research and Innovation, Department of PET Diagnosis, Kobe (Japan); Nakamoto, Yuji [Kyoto University Hospital, Department of Diagnostic Radiology, Kyoto (Japan); Onishi, Yumiko; Sakamoto, Setsu; Sugimura, Kazuro [Kobe University Graduate School of Medicine, Department of Radiology, Kobe (Japan); Senda, Michio [Institute of Biomedical Research and Innovation, Department of Molecular Imaging, Kobe (Japan); Kita, Masato [Kobe City Medical Center General Hospital, Department of Obstetrics and Gynecology, Kobe (Japan)

    2010-08-15

    To evaluate low-dose non-enhanced CT (ldCT) and full-dose contrast-enhanced CT (ceCT) in integrated ¹⁸F-fluorodeoxyglucose (FDG) PET/CT studies for restaging of uterine cancer. A group of 100 women who had undergone treatment for uterine cervical (n=55) or endometrial cancer (n=45) underwent a conventional PET/CT scan with ldCT, followed by a ceCT scan. Two observers retrospectively reviewed and interpreted the PET/ldCT and PET/ceCT images in consensus using a three-point grading scale (negative, equivocal, or positive) per patient and per lesion. Final diagnoses were obtained by histopathological examination or clinical follow-up for at least 6 months. Patient-based analysis showed that the sensitivity, specificity and accuracy of PET/ceCT were 90% (27/30), 97% (68/70) and 95% (95/100), respectively, whereas those of PET/ldCT were 83% (25/30), 94% (66/70) and 91% (91/100), respectively. Sensitivity, specificity and accuracy did not significantly differ between the two methods (McNemar test, p=0.48, p=0.48, and p=0.13, respectively). There were 52 sites of lesion recurrence: 12 pelvic lymph node (LN), 11 local recurrence, 8 peritoneum, 7 abdominal LN, 5 lung, 3 supraclavicular LN, 3 liver, 2 mediastinal LN, and 1 muscle and bone. The grading results for the 52 sites of recurrence were: negative 5, equivocal 0 and positive 47 for PET/ceCT, and negative 5, equivocal 4 and positive 43 for PET/ldCT. Four regions judged equivocal on PET/ldCT (local recurrence, pelvic LN metastasis, liver metastasis and muscle metastasis) were correctly interpreted as positive by PET/ceCT. PET/ceCT is an accurate imaging modality for the assessment of uterine cancer recurrence, and its use reduces the frequency of equivocal interpretations. (orig.)

  16. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  17. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
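
    The kernel-weighted ensemble idea reduces to a few lines: instead of following the single closest historical analog, average the successors of all historical states, weighted by a similarity kernel. This toy version omits the delay-coordinate and vector-field features described above.

      import numpy as np

      def kernel_analog_forecast(history, x0, eps=0.5):
          """Forecast the next state of x0 as a kernel-weighted average of the
          successors of all historical states (toy version; no delay embedding)."""
          past, succ = history[:-1], history[1:]
          d2 = np.sum((past - x0) ** 2, axis=1)
          w = np.exp(-d2 / eps)
          return (w[:, None] * succ).sum(axis=0) / w.sum()

      # Historical record from a noisy circular flow, then a forecast from a new point.
      t = np.linspace(0, 20, 500)
      history = np.c_[np.cos(t), np.sin(t)] + 0.02 * np.random.default_rng(3).normal(size=(500, 2))
      print(kernel_analog_forecast(history, np.array([1.0, 0.0])))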

  18. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    International Nuclear Information System (INIS)

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

    1 - Description of program or function: This code calculates gamma-ray radiation shielding in general geometries. 2 - Method of solution: PKI uses a point-kernel integration technique; it describes the shielding geometry by means of a geometric configuration method and coordinate conversion, and makes use of the calculated results of the reactor primary shielding and of the coolant flow pattern in the loop system
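
    The point-kernel technique itself is compact enough to sketch: each source element contributes an attenuated, buildup-corrected inverse-square kernel, and the contributions are summed over the discretized source. The linear buildup factor and all numeric constants below are placeholders, not PKI's fitted data.

      import numpy as np

      mu = 0.06   # linear attenuation coefficient (1/cm), placeholder

      def point_kernel(r, S=1.0):
          """Uncollided point kernel with a simple linear buildup factor:
          D(r) = S * B(mu r) * exp(-mu r) / (4 pi r^2), with B = 1 + mu r."""
          B = 1.0 + mu * r
          return S * B * np.exp(-mu * r) / (4.0 * np.pi * r**2)

      # Sum kernels over a discretized line source 50-100 cm from the dose point.
      src = np.linspace(50.0, 100.0, 200)   # element distances from the dose point (cm)
      S_per_element = 1.0e6 / len(src)      # total source strength split evenly
      dose_rate = np.sum(point_kernel(src, S_per_element))
      print(f"dose point response: {dose_rate:.4e} (arbitrary units)")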

  19. A Framework for Simplifying the Development of Kernel Schedulers: Design and Performance Evaluation

    DEFF Research Database (Denmark)

    Muller, Gilles; Lawall, Julia Laetitia; Duschene, Hervé

    2005-01-01

    Writing a new scheduler and integrating it into an existing OS is a daunting task, requiring the understanding of multiple low-level kernel mechanisms. Indeed, implementing a new scheduler is outside the expertise of application programmers, even though they are the ones who understand best the s...

  20. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and has linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by an order (one dimension) while achieving excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
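
    The computational advantage of an exponential kernel is that convolution along a ray collapses to a first-order recursion, so each collapsed-cone line is processed in linear time. A single-exponential CPU-side sketch (actual CCKs are more elaborate, and the production code runs on the GPU):

      import numpy as np

      def ray_dose_exponential(terma, dx, A=1.0, a=0.3):
          """Accumulate dose along one ray for kernel k(s) = A * exp(-a s):
          D[i] = D[i-1] * exp(-a dx) + A * terma[i] * dx  (recursive convolution)."""
          decay = np.exp(-a * dx)
          dose = np.zeros_like(terma)
          acc = 0.0
          for i, t in enumerate(terma):
              acc = acc * decay + A * t * dx
              dose[i] = acc
          return dose

      terma = np.exp(-0.05 * np.arange(100) * 0.25)   # toy TERMA falloff along the ray
      print(ray_dose_exponential(terma, dx=0.25)[:5].round(4))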

  1. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  2. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm; it uses kernel learning, which is able to handle data that are not linearly separable, and this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus was also compared with SOM algorithms. The experimental results show that kernel k-means has good performance, considerably better than SOM.
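
    A minimal kernel k-means sketch (RBF kernel, illustrative parameters). The key point is that distances to cluster means are evaluated purely through the kernel matrix, so the means never need to exist in input space:

      import numpy as np

      def kernel_kmeans(K, k=2, iters=20, seed=0):
          """Kernel k-means on a precomputed kernel matrix K, using
          ||phi(x_i) - m_c||^2 = K_ii - 2 mean_j{K_ij : j in c} + mean_jl{K_jl : j,l in c}."""
          n = K.shape[0]
          labels = np.random.default_rng(seed).integers(0, k, n)
          for _ in range(iters):
              dist = np.zeros((n, k))
              for c in range(k):
                  idx = np.where(labels == c)[0]
                  if len(idx) == 0:
                      dist[:, c] = np.inf
                      continue
                  dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                                + K[np.ix_(idx, idx)].mean())
              labels = dist.argmin(axis=1)
          return labels

      rng = np.random.default_rng(4)
      X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
      sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
      K = np.exp(-sq / 2.0)   # RBF kernel, bandwidth illustrative
      print(kernel_kmeans(K))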

  3. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use the spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  4. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.
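
    Schematically, and in the notation standard for conceptual DFT (E the energy, N the electron number, v(r) the external potential), the hierarchy described above reads, in LaTeX:

      \eta(\mathbf{r}) = \int \eta(\mathbf{r},\mathbf{r}')\, d\mathbf{r}', \qquad
      \eta = \int \eta(\mathbf{r})\, d\mathbf{r}, \qquad
      \eta = \left(\frac{\partial^2 E}{\partial N^2}\right)_{v(\mathbf{r})}.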

  5. Effects of building structures on radiation doses from routine releases of radionuclides to the atmosphere

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1978-01-01

    Realistic assessments of radiation doses to the population from routine releases of radionuclides to the atmosphere require consideration of man's largely indoor environment. The effect of a building structure on radiation doses is described quantitatively by a dose reduction factor, which is the ratio of the dose to a reference individual inside a structure to the corresponding dose with no structure present. We have implemented models to estimate dose reduction factors for internal dose from inhaled radionuclides and for external photon dose from airborne and surface-deposited radionuclides. The models are particularly useful in radiological assessment applications, since dose reduction factors may readily be estimated for arbitrary mixtures and concentrations of radionuclides in the atmosphere and on the ground. The model for inhalation dose reduction factors accounts for radioactive decay, air ventilation into and out of the structure, and deposition of radionuclides on inside surfaces of the structure. External dose reduction factors are estimated using the point-kernel integration method including consideration of buildup in air and the walls of the building. The potential importance of deposition of radionuclides on inside surfaces of a structure on both inhalation and external dose reduction factors has been demonstrated. Model formulation and the assumptions used in the calculations are discussed. Results of model-parameter sensitivity studies and estimates of dose reduction factors for radionuclides occurring in routine releases from an LWR fuel reprocessing plant are presented. (author)
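
    For the inhalation pathway, the single-compartment activity balance behind such models yields a compact steady-state dose reduction factor; a sketch under the usual well-mixed assumption, with illustrative rate constants (all in units of 1/h):

      def indoor_reduction_factor(vent_rate, dep_rate, decay_rate):
          """Steady-state indoor/outdoor activity ratio for a well-mixed volume:
          lambda_v * C_out = (lambda_v + lambda_d + lambda_r) * C_in."""
          return vent_rate / (vent_rate + dep_rate + decay_rate)

      # Illustrative values: 1 air change/h, particulate deposition, long-lived nuclide.
      print(indoor_reduction_factor(vent_rate=1.0, dep_rate=1.5, decay_rate=0.01))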

  6. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    Science.gov (United States)

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
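
    For orientation, the pairwise forces at the heart of any DPD kernel, in their textbook (Groot-Warren) form; this is not Jdpd's actual API, and the parameters are illustrative.

      import numpy as np

      def dpd_pair_force(r_vec, v_vec, a=25.0, gamma=4.5, kT=1.0, dt=0.04, rc=1.0,
                         rng=np.random.default_rng(5)):
          """Conservative + dissipative + random DPD forces on particle i from j,
          with r_vec = r_i - r_j and v_vec = v_i - v_j. Uses w_D(r) = w_R(r)^2 and
          sigma^2 = 2 gamma kT (fluctuation-dissipation)."""
          r = np.linalg.norm(r_vec)
          if r >= rc:
              return np.zeros(3)
          e = r_vec / r
          w = 1.0 - r / rc                  # w_R(r); w_D = w^2
          f_c = a * w * e                   # conservative soft repulsion
          f_d = -gamma * w**2 * np.dot(e, v_vec) * e
          sigma = np.sqrt(2.0 * gamma * kT)
          f_r = sigma * w * rng.normal() * e / np.sqrt(dt)
          return f_c + f_d + f_r

      print(dpd_pair_force(np.array([0.5, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])))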

  7. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  8. Bioconversions of Palm Kernel Cake and Rice Bran Mixtures by Trichoderma viride Toward Nutritional Contents

    Directory of Open Access Journals (Sweden)

    Yana Sukaryana

    2010-12-01

    The objective of the research was to examine mixtures of palm kernel cake and rice bran fermented by Trichoderma viride. A completely randomized design in a 4 x 4 factorial pattern was used in this experiment. Factor I was the inoculum dose: D1 = 0%, D2 = 0.1%, D3 = 0.2%, D4 = 0.3%; factor II was the mixture of palm kernel cake and rice bran: T1 = 20:80%; T2 = 40:60%; T3 = 60:40%; T4 = 80:20%. Each treatment had three replicates. Fermentation was conducted at 28 °C for 9 days. The best mixture was determined on the basis of the increase in crude protein and the decrease in crude fibre. The results showed that the best combination was an inoculum dose of 0.3% with a palm kernel cake to rice bran mixture of 80%:20%, which produced 88.12% dry matter, 17.34% crude protein, 5.35% ether extract, 23.67% crude fibre, and 6.43% ash. Compared with the unfermented 80%:20% mixture of palm kernel cake and rice bran, crude protein increased by 29.58% and crude fibre decreased by 22.53%.

  9. Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30° and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake of radioactivity associated with the soluble and starch fractions into the cob, pedicel, and endosperm tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30° and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose

  10. Fluidization calculation on nuclear fuel kernel coating

    International Nuclear Information System (INIS)

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

    The fluidization of a nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the fluidization cylinder was 2 cm, and the diameter of the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities were calculated for various kernel diameters, and the porosity and bed height were calculated for various gas stream velocities. The calculation was done with a BASIC program
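
    The minimum fluidization velocity on which such a calculation hinges can be estimated with a standard correlation (Wen and Yu); this is a generic sketch with illustrative kernel and gas properties, not the program described in the record.

      import numpy as np

      def u_mf_wen_yu(d_p, rho_p, rho_g, mu_g, g=9.81):
          """Minimum fluidization velocity via the Wen & Yu correlation:
          Re_mf = sqrt(33.7^2 + 0.0408 Ar) - 33.7, with Ar the Archimedes number."""
          Ar = rho_g * (rho_p - rho_g) * g * d_p**3 / mu_g**2
          Re_mf = np.sqrt(33.7**2 + 0.0408 * Ar) - 33.7
          return Re_mf * mu_g / (rho_g * d_p)

      # Illustrative values for a ~500 micron dense fuel kernel fluidized in argon.
      print(f"u_mf ~ {u_mf_wen_yu(d_p=500e-6, rho_p=10500.0, rho_g=1.6, mu_g=2.2e-5):.3f} m/s")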

  11. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL has a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM offers an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  12. Extent of radiosensitization by the PARP inhibitor olaparib depends on its dose, the radiation dose and the integrity of the homologous recombination pathway of tumor cells

    NARCIS (Netherlands)

    Verhagen, Caroline V. M.; de Haan, Rosemarie; Hageman, Floor; Oostendorp, Tim P. D.; Carli, Annalisa L. E.; O'Connor, Mark J.; Jonkers, Jos; Verheij, Marcel; van den Brekel, Michiel W.; Vens, Conchita

    2015-01-01

    The PARP inhibitor olaparib is currently tested in clinical phase 1 trials to define safe dose levels in combination with RT. However, certain clinically relevant insights are still lacking. Here we test, while comparing to single agent activity, the olaparib dose and genetic background dependence

  13. Extent of radiosensitization by the PARP inhibitor olaparib depends on its dose, the radiation dose and the integrity of the homologous recombination pathway of tumor cells

    NARCIS (Netherlands)

    Verhagen, C.V.M.; Haan, R. den; Hageman, F.; Oostendorp, T.P.; Carli, A.L.; O'Connor, M.J.; Jonkers, J.; Verheij, M.; Brekel, M.W. van den; Vens, C.

    2015-01-01

    BACKGROUND AND PURPOSE: The PARP inhibitor olaparib is currently tested in clinical phase 1 trials to define safe dose levels in combination with RT. However, certain clinically relevant insights are still lacking. Here we test, while comparing to single agent activity, the olaparib dose and genetic

  14. Extent of radiosensitization by the PARP inhibitor olaparib depends on its dose, the radiation dose and the integrity of the homologous recombination pathway of tumor cells

    NARCIS (Netherlands)

    Verhagen, C.V.M.; de Haan, R.; Hageman, F.; Oostendorp, T.P.D.; Carli, A.L.E.; O'Connor, M.J.; Jonkers, J.; Verheij, M.; van den Brekel, M.W.; Vens, C.

    2015-01-01

    Background and purpose The PARP inhibitor olaparib is currently tested in clinical phase 1 trials to define safe dose levels in combination with RT. However, certain clinically relevant insights are still lacking. Here we test, while comparing to single agent activity, the olaparib dose and genetic

  15. Simulation of worst-case operating conditions for integrated circuits operating in a total dose environment

    International Nuclear Information System (INIS)

    Bhuva, B.L.

    1987-01-01

    Degradations in the circuit performance created by the radiation exposure of integrated circuits are so unique and abnormal that thorough simulation and testing of VLSI circuits is almost impossible, and new ways to estimate the operating performance in a radiation environment must be developed. The principal goal of this work was the development of simulation techniques for radiation effects on semiconductor devices. The mixed-mode simulation approach proved to be the most promising. The switch-level approach is used to identify the failure mechanisms and critical subcircuits responsible for operational failure along with worst-case operating conditions during and after irradiation. For precise simulations of critical subcircuits, SPICE is used. The identification of failure mechanisms enables the circuit designer to improve the circuit's performance and failure-exposure level. Identification of worst-case operating conditions during and after irradiation reduces the complexity of testing VLSI circuits for radiation environments. The results of test circuits for failure simulations using a conventional simulator and the new simulator showed significant time savings using the new simulator. The savings in simulation time proved to be circuit topology-dependent. However, for large circuits, the simulation time proved to be orders of magnitude smaller than simulation time for conventional simulators

  16. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  17. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  18. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...

  19. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru

  20. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
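
    A minimal sketch of the dual (Q-mode) formulation described above: every quantity is computed from the Gram matrix of inner products, so the nonlinear mapping never has to be formed explicitly. The Gaussian kernel and toy data are assumptions.

    ```python
    # Kernel PCA working purely through the Gram matrix (the "kernel trick"):
    # data enter only via inner products of their implicit nonlinear mappings.
    import numpy as np

    def kernel_pca(X, gamma=0.5, n_components=2):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)               # Gram matrix, K_ij = <phi_i, phi_j>
        n = len(K)
        H = np.eye(n) - np.ones((n, n)) / n   # centering in feature space
        Kc = H @ K @ H
        vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
        vals, vecs = vals[::-1], vecs[:, ::-1]
        alphas = vecs[:, :n_components] / np.sqrt(vals[:n_components])
        return Kc @ alphas                    # component scores per sample

    print(kernel_pca(np.random.randn(100, 8)).shape)   # (100, 2)
    ```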

  1. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  2. Research on offense and defense technology for iOS kernel security mechanism

    Science.gov (United States)

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a strong and widely used mobile device system. Its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. It is therefore important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker could only run ROP code and gain kernel read and write permissions after exploiting kernel and user layer vulnerabilities. However, the iOS operating system is still protected by the code signing mechanism, the sandbox mechanism, and the not-writable mechanism of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, breaking these security mechanisms was usually done by modifying the kernel's important data structures and security mechanism code logic. However, after iOS 9, the kernel integrity protection (KPP) mechanism was added to the 64-bit operating system and none of the previous methods were adapted to the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on an analysis of the vulnerability of the KPP security mechanism, this paper implements two possible breakthrough methods against the kernel security mechanism of iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive API call detection to defend against the breakthrough methods mentioned above. Experiments prove that this method can prevent and detect attack attempts or invaders effectively and in a timely manner.

  3. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and of range modifying device thickness and position is implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
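
    As a toy illustration of the pencil-kernel idea (not the Helax-TMS implementation), the sketch below superposes pencils whose lateral Gaussian width grows with depth, in the spirit of the Fermi-Eyges result; the depth-dose curve, the sigma(z) model and the field geometry are invented for illustration.

    ```python
    # Toy pencil-kernel superposition: each pencil carries a depth-dose value
    # spread laterally by a Gaussian whose width grows with depth.
    import numpy as np

    ddd = lambda z: np.interp(z, [0, 5, 10, 12, 13], [30, 35, 60, 100, 5])  # toy Bragg curve
    sigma = lambda z: 0.1 + 0.03 * z**1.5                                   # cm, grows with depth

    def pencil_dose(x, z, x0):
        s = sigma(z)
        return ddd(z) * np.exp(-0.5 * ((x - x0) / s) ** 2) / (np.sqrt(2 * np.pi) * s)

    x = np.linspace(-4, 4, 161)            # lateral positions, cm
    z = np.linspace(0, 13, 131)            # depths, cm
    Z, X = np.meshgrid(z, x, indexing="ij")
    aperture = np.arange(-2.0, 2.01, 0.1)  # pencil positions across an open field
    dose = sum(pencil_dose(X, Z, x0) for x0 in aperture)
    print(dose.shape, float(dose.max()))
    ```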

  4. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  5. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  6. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  7. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
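
    A minimal sketch of the non-negativity-constrained least squares step described above, using synthetic payoffs and prices; the problem sizes and noise level are assumptions.

    ```python
    # Constrained least squares for a non-negative pricing kernel m: given asset
    # payoffs X (states x assets) and observed prices p = X'm + noise, solve
    # min ||X'm - p|| subject to m >= 0.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    n_states, n_assets = 50, 20
    X = rng.uniform(0.5, 1.5, size=(n_states, n_assets))     # payoff matrix
    m_true = rng.uniform(0.0, 2.0, size=n_states) / n_states
    p = X.T @ m_true + rng.normal(0.0, 1e-3, size=n_assets)  # noisy prices

    m_hat, resid = nnls(X.T, p)                 # m_hat >= 0 by construction
    print(resid, bool(m_hat.min() >= 0.0))
    ```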

  8. SU-E-T-166: Evaluation of Integral Dose in Intensity-Modulated Radiotherapy and Volumetric Modulated Arc Therapy for Head and Neck Cancer Patient

    Energy Technology Data Exchange (ETDEWEB)

    Al-Basheer, A; Hunag, J; Kaminski, J; Dasher, B; Howington, J; Stewart, J; Martin, D; Kong, F; Jin, J [Georgia Regents University, Augusta, GA (Georgia)

    2014-06-01

    Purpose: Volumetric Modulated Arc Therapy (VMAT) usually achieves higher conformity of radiation doses to targets and shorter delivery times than Intensity Modulated Radiation Therapy (IMRT). We hypothesized that VMAT increases the integral dose (ID) to patients, which will decrease white blood cell (WBC) lymphocyte counts and consequently have an impact on the immune system. The purpose of this study is to evaluate the ID to patients undergoing IMRT and VMAT for head and neck cancers and its impact on the immune system. Methods: As a pilot study, 30 head and neck patients who received 9-field IMRT or 3-arc RapidArc-based VMAT were included in this study. Ten of these patients who received the VMAT plans were re-planned using IMRT with the same objectives. ID was calculated for all cases. All patients also had a baseline WBC count obtained prior to treatment, and 3 sets of labs drawn during the course of radiation treatment. Results: For the 10 re-planned patients, the mean ID was 13.3 Gy/voxel (range 10.2-17.5 Gy/voxel) for the 9-field IMRT plans, and 15.9 Gy/voxel (range 12.4-20.9 Gy/voxel) for the 3-arc VMAT plans (p=0.01). The integral dose was significantly correlated with reduced WBC count during RT, even when controlling for concurrent chemotherapy (R squared=0.56, p=0.008). Conclusion: Although VMAT can deliver higher radiation dose conformality to targets, this benefit is generally achieved at the cost of greater integral doses to normal tissue outside the planning target volume (PTV). Lower WBC counts during RT were associated with higher integral doses even when controlling for concurrent chemotherapy. This study is ongoing in our institution to examine the impact of integral doses and WBC on overall survival.
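
    For reference, integral dose is a simple functional of the dose grid; the sketch below computes both a total ID in Gy*cm^3 and the per-voxel mean that the abstract reports in Gy/voxel, on a synthetic dose grid (the grid values and voxel size are assumptions).

    ```python
    # Integral dose from a dose grid: the sum over voxels times voxel volume,
    # and the per-voxel mean in the style the abstract reports (Gy/voxel).
    import numpy as np

    dose = np.random.gamma(shape=2.0, scale=1.5, size=(60, 60, 40))  # Gy per voxel, toy
    voxel_volume_cc = 0.2 * 0.2 * 0.3                                # cm^3 per voxel

    integral_dose = dose.sum() * voxel_volume_cc   # Gy*cm^3 over the whole grid
    mean_dose_per_voxel = dose.mean()              # Gy/voxel, the reported style
    print(float(integral_dose), float(mean_dose_per_voxel))
    ```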

  9. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    Science.gov (United States)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier in translating computer aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image of a sharp kernel to that of a standard kernel, and evaluates its impact on variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty low-dose chest CT exams obtained at 120 kVp, 40 mAs, and 1 mm slice thickness with 2 reconstruction kernels (B30f, B50f) were selected from the low dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of sharp kernel to standard kernel, with the mean squared error between the input and target images as the criterion. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has a
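
    A minimal Keras sketch in the spirit of the described network: a fully convolutional stack trained with a pixelwise MSE criterion to map sharp-kernel slices to standard-kernel slices. The layer counts and widths are assumptions, not the authors' architecture, and the training arrays named in the comment are hypothetical.

    ```python
    # Minimal fully convolutional kernel-conversion sketch with an MSE loss.
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        layers.Input(shape=(None, None, 1)),         # full-resolution CT slice
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(1, 3, padding="same"),         # converted-kernel slice out
    ])
    model.compile(optimizer="adam", loss="mse")      # pixelwise MSE criterion
    # model.fit(b50f_slices, b30f_slices, batch_size=4, epochs=50)  # hypothetical arrays
    ```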

  10. Characterization of Brazilian mango kernel fat before and after gamma irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Aquino, Fabiana da Silva; Ramos, Clecio Souza, E-mail: fasiaquino@yahoo.com.br, E-mail: clecio@dcm.ufrpe.br [Universidade Federal Rural de Pernambuco (UFRPE), Recife, PE (Brazil); Aquino, Katia Aparecida da Silva, E-mail: aquino@ufpe.br [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2013-07-01

    Mangifera indica Linn (family Anacardiaceae) is a tree indigenous to India, whose unripe and ripe fruits (mangoes) are widely used by the local population. After consumption or industrial processing of the fruits, considerable amounts of mango seeds are discarded as waste. The kernel inside the seed represents from 45% to 75% of the seed and about 20% of the whole fruit, and the lipid composition of mango seed kernels has attracted the attention of researchers because of its unique physical and chemical characteristics. Our study showed that fat of the mango kernel obtained by Soxhlet extraction with hexane had a solid consistency at ambient temperature (27 deg C) because it is rich in saturated fatty acids. The fat content of the seed of Mangifera indica was calculated at 10%, comparable to that of commercial vegetable oils like soybean (11-25%). One problem found in the storage of fats and oils is attack by microorganisms, so a sterilization process becomes necessary. Samples of kernel fat were irradiated with gamma radiation ({sup 60}Co) at room temperature in an air atmosphere at 5 and 10 kGy (sterilization doses). The GC-MS data revealed the presence of four major fatty acids in the sample of mango kernel examined and showed that the chemical profile of the sample was not altered after irradiation. Moreover, proton nuclear magnetic resonance ({sup 1}H NMR) analysis was used to obtain the mango kernel fat parameters before and after gamma irradiation. Interpretation of the {sup 1}H NMR data indicated that there are significant differences in the acidity and saponification indexes of the fat. However, a 14% increase in the iodine index of the fat was found after irradiation. This result means that some double bonds were formed during the irradiation of the fat. (author)

  11. Characterization of Brazilian mango kernel fat before and after gamma irradiation

    International Nuclear Information System (INIS)

    Aquino, Fabiana da Silva; Ramos, Clecio Souza; Aquino, Katia Aparecida da Silva

    2013-01-01

    Mangifera indica Linn (family Anacardiaceae) is a tree indigenous to India, whose unripe and ripe fruits (mangoes) are widely used by the local population. After consumption or industrial processing of the fruits, considerable amounts of mango seeds are discarded as waste. The kernel inside the seed represents from 45% to 75% of the seed and about 20% of the whole fruit, and the lipid composition of mango seed kernels has attracted the attention of researchers because of its unique physical and chemical characteristics. Our study showed that fat of the mango kernel obtained by Soxhlet extraction with hexane had a solid consistency at ambient temperature (27 deg C) because it is rich in saturated fatty acids. The fat content of the seed of Mangifera indica was calculated at 10%, comparable to that of commercial vegetable oils like soybean (11-25%). One problem found in the storage of fats and oils is attack by microorganisms, so a sterilization process becomes necessary. Samples of kernel fat were irradiated with gamma radiation (60Co) at room temperature in an air atmosphere at 5 and 10 kGy (sterilization doses). The GC-MS data revealed the presence of four major fatty acids in the sample of mango kernel examined and showed that the chemical profile of the sample was not altered after irradiation. Moreover, proton nuclear magnetic resonance (1H NMR) analysis was used to obtain the mango kernel fat parameters before and after gamma irradiation. Interpretation of the 1H NMR data indicated that there are significant differences in the acidity and saponification indexes of the fat. However, a 14% increase in the iodine index of the fat was found after irradiation. This result means that some double bonds were formed during the irradiation of the fat. (author)

  12. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.

    2009-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  13. Quantum logic in dagger kernel categories

    NARCIS (Netherlands)

    Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.

    2011-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  14. Symbol recognition with kernel density matching.

    Science.gov (United States)

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
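
    A minimal sketch of the similarity measure described: estimate a 2D kernel density for each symbol's points and evaluate the Kullback-Leibler divergence on a common grid. The bandwidth (SciPy's default) and the grid resolution are assumptions.

    ```python
    # Symbols as 2D kernel densities compared by KL divergence on a shared grid.
    import numpy as np
    from scipy.stats import gaussian_kde

    def kl_between_point_sets(pts_p, pts_q, grid_n=64, eps=1e-12):
        kde_p, kde_q = gaussian_kde(pts_p.T), gaussian_kde(pts_q.T)
        lo = np.minimum(pts_p.min(0), pts_q.min(0)) - 1.0
        hi = np.maximum(pts_p.max(0), pts_q.max(0)) + 1.0
        gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], grid_n),
                             np.linspace(lo[1], hi[1], grid_n))
        grid = np.vstack([gx.ravel(), gy.ravel()])
        p = kde_p(grid) + eps
        q = kde_q(grid) + eps
        p, q = p / p.sum(), q / q.sum()          # normalize over the grid
        return float(np.sum(p * np.log(p / q)))  # KL(P || Q)

    a = np.random.randn(200, 2)                  # points from one "symbol"
    b = np.random.randn(200, 2) + 0.5            # a shifted second "symbol"
    print(kl_between_point_sets(a, a), kl_between_point_sets(a, b))  # ~0 vs. > 0
    ```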

  15. Flexible Scheduling in Multimedia Kernels: An Overview

    NARCIS (Netherlands)

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

    Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more

  16. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  17. A synthesis of empirical plant dispersal kernels

    Czech Academy of Sciences Publication Activity Database

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  18. Analytic continuation of weighted Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942

  19. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  20. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
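
    A minimal sketch of the kernel temporal-difference idea, shown for the lambda = 0 case: the value function is a growing kernel expansion and each TD error appends a weighted center. The step size, kernel width and toy task are assumptions, and the full KTD(λ) eligibility-trace machinery is omitted.

    ```python
    # Kernel TD learning, lambda = 0 case: value estimates live in a growing
    # kernel expansion; each TD error adds a center weighted by eta * delta.
    import numpy as np

    class KTD0:
        def __init__(self, gamma=0.9, eta=0.3, width=0.3):
            self.gamma, self.eta, self.width = gamma, eta, width
            self.centers, self.coefs = [], []

        def _k(self, s, c):
            return np.exp(-np.sum((s - c) ** 2) / (2 * self.width ** 2))

        def value(self, s):
            return sum(a * self._k(s, c) for a, c in zip(self.coefs, self.centers))

        def update(self, s, r, s_next):
            delta = r + self.gamma * self.value(s_next) - self.value(s)  # TD error
            self.centers.append(np.asarray(s, dtype=float))
            self.coefs.append(self.eta * delta)
            return delta

    ktd, rng = KTD0(), np.random.default_rng(2)
    for _ in range(300):                      # toy chain: reward when x > 0.8
        s = rng.random(2)
        s_next = np.clip(s + 0.1 * rng.standard_normal(2), 0.0, 1.0)
        ktd.update(s, float(s_next[0] > 0.8), s_next)
    print(ktd.value(np.array([0.9, 0.5])), ktd.value(np.array([0.1, 0.5])))
    ```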

  1. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  2. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  3. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ± 0.18 and ± 0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  4. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  5. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  6. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  7. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  8. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $\Omega$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  9. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  10. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  11. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  12. Parameters used in the environmental pathways and radiological dose modules (DESCARTES, CIDER, and CRD codes) of the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC)

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, S.F.; Farris, W.T.; Napier, B.A.; Ikenberry, T.A.; Gilbert, R.O.

    1994-05-01

    This letter report is a description of work performed for the Hanford Environmental Dose Reconstruction (HEDR) Project. The HEDR Project was established to estimate the radiation doses to individuals resulting from releases of radionuclides from the Hanford Site during the period of 1944 to 1992. This work is being done by staff at Battelle, Pacific Northwest Laboratories under a contract with the Centers for Disease Control and Prevention with technical direction provided by an independent Technical Steering Panel (TSP).

  13. Parameters used in the environmental pathways and radiological dose modules (DESCARTES, CIDER, and CRD codes) of the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC)

    International Nuclear Information System (INIS)

    Snyder, S.F.; Farris, W.T.; Napier, B.A.; Ikenberry, T.A.; Gilbert, R.O.

    1994-05-01

    This letter report is a description of work performed for the Hanford Environmental Dose Reconstruction (HEDR) Project. The HEDR Project was established to estimate the radiation doses to individuals resulting from releases of radionuclides from the Hanford Site during the period of 1944 to 1992. This work is being done by staff at Battelle, Pacific Northwest Laboratories under a contract with the Centers for Disease Control and Prevention with technical direction provided by an independent Technical Steering Panel (TSP)

  14. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.

    2018-01-01

    scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory

  15. A scatter model for fast neutron beams using convolution of diffusion kernels

    International Nuclear Information System (INIS)

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
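
    The central computation described, convolving a primary (first-interaction) distribution with a precomputed diffusion kernel via a fast Fourier transform, can be sketched in a few lines; the attenuation coefficient, beam shape and kernel below are synthetic stand-ins, not the paper's Monte Carlo kernels.

    ```python
    # Dose = (primary first-interaction distribution) convolved with a diffusion
    # kernel, done with FFTs for speed as described.
    import numpy as np
    from scipy.signal import fftconvolve

    mu = 0.07                                       # 1/cm, toy attenuation coefficient
    z = np.linspace(0, 30, 301)                     # depth, cm
    x = np.linspace(-15, 15, 301)                   # lateral position, cm
    Z, X = np.meshgrid(z, x, indexing="ij")
    primary = np.exp(-mu * Z) * (np.abs(X) < 5.0)   # first-interaction sites, 10 cm field

    kz, kx = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101),
                         indexing="ij")
    kernel = np.exp(-np.sqrt(kz**2 + kx**2) / 0.8)  # toy energy-diffusion kernel
    kernel /= kernel.sum()                          # deposit all transferred energy

    dose = fftconvolve(primary, kernel, mode="same")
    print(dose.shape, float(dose.max()))
    ```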

  16. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  17. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  18. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  19. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious code, leading to compromise of the kernel.

  20. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  1. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
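
    The core construction is easy to state: a product of per-domain kernels is itself a valid kernel on the product space, which is how spike-train and LFP features can enter one estimator. A minimal sketch, with Gaussian kernels and random vectors standing in for the real multiscale features:

    ```python
    # A product of per-domain kernels is a positive-definite kernel on the
    # product space, letting heterogeneous signals share one estimator.
    import numpy as np

    def gauss(a, b, width):
        return float(np.exp(-np.sum((a - b) ** 2) / (2 * width ** 2)))

    def tensor_product_kernel(spike_a, lfp_a, spike_b, lfp_b,
                              w_spike=1.0, w_lfp=5.0):
        # k((x, y), (x', y')) = k_spike(x, x') * k_lfp(y, y')
        return gauss(spike_a, spike_b, w_spike) * gauss(lfp_a, lfp_b, w_lfp)

    rng = np.random.default_rng(3)
    s1, s2 = rng.random(20), rng.random(20)                    # e.g. binned spike counts
    l1, l2 = rng.standard_normal(64), rng.standard_normal(64)  # e.g. LFP features
    print(tensor_product_kernel(s1, l1, s2, l2))
    ```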

  2. Influence of apricot kernels on blood plasma levels of selected anterior pituitary hormones in male and female rabbits in vivo

    Directory of Open Access Journals (Sweden)

    Katarína Michalcová

    2016-05-01

    Full Text Available Amygdalin is found in the family Rosaceae, more precisely in apricot kernels and almonds. Besides the cyanogenic glycoside, the seeds contain many components such as trace elements, vitamins, carbohydrates, organic acids, esters, phenols and terpenoids. It is known that bioregulators can modulate the activity of specific enzymes and hormones very precisely, at low levels and in a short time. The aim of our study was to examine the effects of selected doses (0, 60, 300, 420 mg/kg b.w.) of apricot kernels in feed on the plasma levels of anterior pituitary hormones in young male and female rabbits in vivo. A sensitive biochemical method, ELISA, was used to determine the hormones prolactin (PRL), luteinizing hormone (LH) and follicle stimulating hormone (FSH). The 28-day application of apricot kernels did not affect the concentrations of PRL, LH and FSH in the blood plasma of males. No significant (P≤0.05) differences in PRL and LH levels in the blood plasma of females were found. On the other hand, a significant (P≤0.05) inhibition of FSH release induced by kernels at the doses of 300 and 420 mg/kg was found. Our results indicate that apricot kernels could affect the secretion of the anterior pituitary hormone FSH in female rabbits.

  3. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ ab /μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)

  4. Integrated analysis of genetic variation and gene expression reveals novel variant for increased warfarin dose requirement in African Americans

    NARCIS (Netherlands)

    Hernandez, W.; Gamazon, E. R.; Aquino-Michaels, K.; Smithberger, E.; O'Brien, T. J.; Harralson, A. F.; Tuck, M.; Barbour, A.; Cavallari, L. H.; Perera, M. A.

    2017-01-01

    Essentials Genetic variants controlling gene regulation have not been explored in pharmacogenomics. We tested liver expression quantitative trait loci for association with warfarin dose response. A novel predictor for increased warfarin dose response in African Americans was identified. Precision

  5. Efficacy of integrated treatment of UV light and low dose gamma irradiation on Escherichia coli O157:H7 and Salmonella enterica on grape tomatoes

    Science.gov (United States)

    The efficacy of an integrated treatment of UVC light and low dose gamma irradiation to inactivate mixed strains of Escherichia coli O157:H7 and Salmonella enterica inoculated on whole grape tomatoes was evaluated. A mixed bacterial cocktail composed of a three strain mixture of E. coli O157:H7 (C9490, E02128 an...

  6. Parameters used in the environmental pathways (DESCARTES) and radiological dose (CIDER) modules of the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC) for the air pathway

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, S.F.; Farris, W.T.; Napier, B.A.; Ikenberry, T.A.; Gilbert, R.O.

    1992-09-01

    This letter report is a description of work performed for the Hanford Environmental Dose Reconstruction (HEDR) Project. The HEDR Project was established to estimate the radiation doses to individuals resulting from releases of radionuclides from the Hanford Site since 1944. This work is being done by staff at Battelle, Pacific Northwest Laboratories (Battelle) under a contract with the Centers for Disease Control (CDC) with technical direction provided by an independent Technical Steering Panel (TSP). The objective of this report is to document the environmental accumulation and dose-assessment parameters that will be used to estimate the impacts of past Hanford Site airborne releases. During 1993, dose estimates made by staff at Battelle will be used by the Fred Hutchinson Cancer Research Center as part of the Hanford Thyroid Disease Study (HTDS). This document contains information on parameters that are specific to the airborne release of the radionuclide iodine-131. Future versions of this document will include parameter information pertinent to other pathways and radionuclides.

  7. Parameters used in the environmental pathways (DESCARTES) and radiological dose (CIDER) modules of the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC) for the air pathway

    International Nuclear Information System (INIS)

    Snyder, S.F.; Farris, W.T.; Napier, B.A.; Ikenberry, T.A.; Gilbert, R.O.

    1992-09-01

    This letter report is a description of work performed for the Hanford Environmental Dose Reconstruction (HEDR) Project. The HEDR Project was established to estimate the radiation doses to individuals resulting from releases of radionuclides from the Hanford Site since 1944. This work is being done by staff at Battelle, Pacific Northwest Laboratories (Battelle) under a contract with the Centers for Disease Control (CDC) with technical direction provided by an independent Technical Steering Panel (TSP). The objective of this report is to document the environmental accumulation and dose-assessment parameters that will be used to estimate the impacts of past Hanford Site airborne releases. During 1993, dose estimates made by staff at Battelle will be used by the Fred Hutchinson Cancer Research Center as part of the Hanford Thyroid Disease Study (HTDS). This document contains information on parameters that are specific to the airborne release of the radionuclide iodine-131. Future versions of this document will include parameter information pertinent to other pathways and radionuclides

  8. Independent dose calculation in IMRT for the Tps Iplan using the Clarkson modified integral; Calculo independiente de dosis en IMRT para el TPS Iplan usando la integral modificada de Clarkson

    Energy Technology Data Exchange (ETDEWEB)

    Adrada, A.; Tello, Z.; Garrigo, E.; Venencia, D., E-mail: jorge.alberto.adrada@gmail.com [Instituto Privado de Radioterapia, Obispo Oro 423, X5000BFI Cordoba (Argentina)

    2014-08-15

    Intensity-Modulated Radiation Therapy (IMRT) treatments require patient-specific quality assurance (QA) before delivery. These controls include experimental verification in phantom of the total plan dose as well as of dose distributions. Independent dose calculation (IDC) is routinely used in 3D-CRT treatments; however, its application in IMRT requires the implementation of an algorithm that can handle a beam of non-uniform intensity. The purpose of this work was to develop IDC software for IMRT with MLC using the algorithm proposed by Kung (Kung et al. 2000). The software was written in Matlab. The modified Clarkson integral was implemented on each fluence map, applying concentric rings for the dose determination. From the integral of each field, the dose at any point was calculated. Once planning is finished, all data are exported to a phantom where a QA plan is generated. On this plan, the mean dose in a volume representative of the ionization chamber and the dose at its center are calculated. So far, 230 IMRT plans carried out in the treatment planning system (TPS) Iplan have been analyzed. For each of them a QA plan was generated, and the doses calculated with the TPS and with the IDC system were compared with ionization chamber measurements. The average difference between measured dose and dose calculated with the IDC system was 0.4% ± 2.2% [-6.8%, 6.4%]. The difference between the measured dose and the dose calculated by the pencil-beam algorithm (Pb) of the TPS was 2.6% ± 1.41% [-2.0%, 5.6%], and with the Monte Carlo algorithm it was 0.4% ± 1.5% [-4.9%, 3.7%]. The differences obtained with the developed software are comparable to those obtained with the ionization chamber and with the TPS in Monte Carlo mode. (author)
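
    For orientation, a generic Clarkson-style sector integration can be sketched as follows: average a radius-dependent scatter table over angular sectors, where each sector's radius is the distance from the calculation point to the field edge. The SAR table and field outline below are illustrative, not the authors' beam data or their fluence-map extension.

    ```python
    # Generic Clarkson sector integration over an irregular polygonal field.
    import numpy as np

    sar_radius = np.array([0.0, 1, 2, 4, 6, 8, 10, 12])             # cm
    sar_value = np.array([0.0, .02, .05, .10, .14, .17, .19, .20])  # toy SAR(r)

    def sector_radius(vertices, point, theta):
        """Distance from `point` to the polygon boundary along direction theta."""
        d = np.array([np.cos(theta), np.sin(theta)])
        best, n = 0.0, len(vertices)
        for i in range(n):
            a = np.asarray(vertices[i], float)
            b = np.asarray(vertices[(i + 1) % n], float)
            M = np.column_stack([d, a - b])      # point + t*d == a + u*(b - a)
            if abs(np.linalg.det(M)) < 1e-12:
                continue                         # ray parallel to this edge
            t, u = np.linalg.solve(M, a - point)
            if t > 0.0 and 0.0 <= u <= 1.0:
                best = max(best, t)
        return best

    def clarkson_scatter(vertices, point, n_sectors=36):
        thetas = np.arange(n_sectors) * 2.0 * np.pi / n_sectors
        radii = [sector_radius(vertices, np.asarray(point, float), t) for t in thetas]
        return float(np.mean(np.interp(radii, sar_radius, sar_value)))

    field = [(-5, -4), (5, -4), (5, 4), (0, 6), (-5, 4)]   # irregular field outline
    print(clarkson_scatter(field, (0.0, 0.0)))
    ```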

  9. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H 2 /D 2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  10. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
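
    A minimal sketch of the quantization rule described: a new sample within epsilon of an existing center merges its scaled error into that center's coefficient instead of growing the network. The step size, quantization radius, kernel width and toy regression task are assumptions.

    ```python
    # QKLMS: quantize the input space instead of sparsifying. A sample close to
    # an existing center updates that center's coefficient; only genuinely new
    # regions of the input space grow the radial basis function network.
    import numpy as np

    class QKLMS:
        def __init__(self, eta=0.5, epsilon=0.3, width=0.5):
            self.eta, self.epsilon, self.width = eta, epsilon, width
            self.centers, self.coefs = [], []

        def predict(self, x):
            if not self.centers:
                return 0.0
            C = np.asarray(self.centers)
            k = np.exp(-np.sum((C - x) ** 2, axis=1) / (2 * self.width ** 2))
            return float(np.dot(self.coefs, k))

        def update(self, x, d):
            x = np.asarray(x, dtype=float)
            e = d - self.predict(x)                  # a priori error
            if self.centers:
                C = np.asarray(self.centers)
                i = int(np.argmin(np.sum((C - x) ** 2, axis=1)))
                if np.linalg.norm(C[i] - x) <= self.epsilon:
                    self.coefs[i] += self.eta * e    # merge into nearest center
                    return e
            self.centers.append(x)                   # otherwise grow the network
            self.coefs.append(self.eta * e)
            return e

    f, rng = QKLMS(), np.random.default_rng(4)
    for _ in range(500):                             # learn y = sin(3x) online
        u = rng.uniform(-2.0, 2.0)
        f.update(np.array([u]), np.sin(3.0 * u))
    print(len(f.centers), f.predict(np.array([0.5])), np.sin(1.5))
    ```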

  11. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test......We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed...... the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only...
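
    A minimal sketch of the (biased, V-statistic) dHSIC estimate built from one Gram matrix per variable; the Gaussian kernels, bandwidths and synthetic data are assumptions. Under joint independence the statistic is near zero.

    ```python
    # dHSIC estimate from one Gram matrix per variable (biased V-statistic form).
    import numpy as np

    def gram(x, sigma=1.0):
        x = np.asarray(x, dtype=float)
        if x.ndim == 1:
            d2 = (x[:, None] - x[None, :]) ** 2
        else:
            d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def dhsic(grams):
        term1 = np.mean(np.prod(grams, axis=0))          # joint embedding term
        term2 = np.prod([K.mean() for K in grams])       # product of marginals
        term3 = np.mean(np.prod([K.mean(axis=1) for K in grams], axis=0))
        return float(term1 + term2 - 2.0 * term3)

    rng = np.random.default_rng(5)
    x = rng.standard_normal(300)
    y = rng.standard_normal(300)            # independent of x
    z = x + 0.1 * rng.standard_normal(300)  # strongly dependent on x
    print(dhsic([gram(x), gram(y)]))        # close to zero
    print(dhsic([gram(x), gram(z)]))        # clearly positive
    ```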

  12. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  13. Evaluation of the Effects of Integrated Management Weed Control on Corn Field by Using Reduced Dose of Foramsulfuron and Nicosulfuron Herbicides

    Directory of Open Access Journals (Sweden)

    M. Matinfar

    2011-09-01

Full Text Available In order to evaluate the effects of integrated weed management on weed control using reduced herbicide doses, a field experiment was conducted in 2010 in Qazvin. The experiment used a randomized complete block design with 24 treatments and 4 replications. The treatments were: planting pattern at three levels (single-row, square double-row and zigzag double-row plantings) and dose of nicosulfuron and foramsulfuron application at four levels (1, 1.5, 2 and 2.5 liters per hectare). The results showed that, among the different planting patterns, zigzag planting reduced weed populations and their dry weights significantly. Foramsulfuron controlled weeds better than nicosulfuron. Among the herbicide doses, the 2.5 liter per hectare dose strongly reduced weed density and dry weight compared with the one liter dose.

  14. Heat kernel expansion in the background field formalism

    CERN Document Server

    Barvinsky, Andrei

    2015-01-01

    Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of respectively short and late proper time parameter allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and...

  15. Recognition of higher order patterns in proteins: immunologic kernels.

    Directory of Open Access Journals (Sweden)

    Robert D Bremel

Full Text Available By applying principal component analysis of amino acid physical properties, we predicted cathepsin cleavage sites, MHC binding affinity, and the probability of B-cell epitope binding of peptides in tetanus toxin and in ten diverse additional proteins. Cross-correlation of these metrics, for peptides at all possible amino acid index positions, each evaluated in the context of a ±25 amino acid flanking region, indicated that there is a strongly repetitive pattern of short peptides of approximately thirty amino acids, each bounded by cathepsin cleavage sites and each comprising B-cell linear epitopes and MHC-I and MHC-II binding peptides. Such "immunologic kernel" peptides comprise all signals necessary for adaptive immunologic cognition, response and recall. The patterns described indicate a higher-order spatial integration that forms a symbolic logic coordinating the adaptive immune system.

  16. A Kernel for Protein Secondary Structure Prediction

    OpenAIRE

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  17. Scalar contribution to the BFKL kernel

    International Nuclear Information System (INIS)

    Gerasimov, R. E.; Fadin, V. S.

    2010-01-01

The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed, and the reason for this cancellation is identified; it has a common nature for particles of any spin. Understanding this reason makes it possible to obtain the total contribution without the complicated calculations that are necessary for finding the separate pieces.

  18. Weighted Bergman Kernels for Logarithmic Weights

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  19. Heat kernels and zeta functions on fractals

    International Nuclear Information System (INIS)

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  20. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, another important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM…

  1. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
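
    As a sketch of the kind of discrimination model described (the data matrix and the four RGBH descriptors below are placeholders, not the study's measurements):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical per-kernel descriptors: mean red, green, blue and hue (RGBH),
# assumed to be extracted beforehand from the segmented kernel area.
rng = np.random.default_rng(0)
X = rng.random((200, 4))               # placeholder feature matrix
y = rng.integers(0, 2, 200)            # 0 = healthy, 1 = Fusarium-damaged

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```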

  2. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
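
    A minimal sketch of the batch NPT idea that INPT builds on: explicit sample coordinates recovered from an eigendecomposition of the kernel matrix (this is not the authors' INPT code, just the underlying trick):

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Nonlinear projection trick (sketch): explicit coordinates Y with
    Y @ Y.T == K, obtained from an eigendecomposition of the positive
    semidefinite kernel matrix K."""
    w, U = np.linalg.eigh(K)
    keep = w > tol                      # drop numerically zero directions
    return U[:, keep] * np.sqrt(w[keep])

# With these coordinates, any linear method (PCA, LDA, SVD, ...) applied
# to Y is implicitly the corresponding kernel method on the original data.
```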

  3. The effects of food irradiation on quality of pine nut kernels

    International Nuclear Information System (INIS)

    Goelge, Evren; Ova, Guelden

    2008-01-01

Pine nut (Pinus pinae) kernels underwent gamma irradiation at doses of 0.5, 1.0, 3.0, and 5.0 kGy. The changes in chemical, physical and sensory attributes were observed over the following 3 months of storage. The data obtained from the experiments showed that the peroxide values of the pine nut kernels increased proportionally to the dose. By contrast, the irradiation process had no effect on physical quality attributes such as texture and color, on fatty acid composition, or on sensory attributes.

  4. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use so-called kernel versions of these methods; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  5. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2017-02-01

Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain better detection performance. However, it still has two limitations that can be addressed. On the one hand, reasonable integration of spatial-spectral information can further improve its detection accuracy. On the other hand, parallel computing can reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes spatial neighborhood resources to reconstruct the test pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. The kernel function is then redesigned as a mapping trick in the KRX detector to implement anomaly detection. In addition, a powerful GPU-based architecture is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.
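
    For orientation, a minimal NumPy sketch of the classical global RX detector that KRX and WSSKRX generalize; the weighting, kernelization and GPU parts of the paper are not reproduced here:

```python
import numpy as np

def rx_detector(cube):
    """Classical (global) RX anomaly detector: Mahalanobis distance of
    every pixel spectrum to the background mean and covariance.
    `cube` has shape (rows, cols, bands)."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(r, c)        # high score = anomalous pixel
```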

  6. Half-dose non-contrast CT in the investigation of urolithiasis: image quality improvement with third-generation integrated circuit CT detectors.

    Science.gov (United States)

    Wang, Jun; Kang, Tony; Arepalli, Chesnal; Barrett, Sarah; O'Connell, Tim; Louis, Luck; Nicolaou, Savvakis; McLaughlin, Patrick

    2015-06-01

The objective of this study is to establish the effect of a third-generation integrated circuit (IC) CT detector on objective image quality in full- and half-dose non-contrast CT of the urinary tract. 51 consecutive patients with acute renal colic underwent non-contrast CT of the urinary tract using a 128-slice dual-source CT before (n = 24) and after (n = 27) the installation of third-generation IC detectors. Half-dose images were generated using projections from detector A of the dual-source RAW data. Objective image noise in the liver, spleen, right renal cortex, and right psoas muscle was compared between the DC and IC cohorts for full-dose and half-dose images reconstructed with FBP and IR algorithms using 1 cm² regions of interest. Presence and size of obstructing ureteric calculi were also compared for full-dose and half-dose reconstructions using DC and IC detectors. No statistical difference in age or lateral body size was found between patients in the IC and DC cohorts. Radiation dose, as measured by size-specific dose estimates, did not differ significantly between the two cohorts (10.02 ± 4.54 mGy IC vs. 12.28 ± 7.03 mGy DC). At full dose, objective image noise was not significantly lower in the IC cohort as compared to the DC cohort for the liver, spleen, and right psoas muscle. At half dose, objective image noise was lower in the IC cohort as compared to the DC cohort at the liver (21.32 IC vs. 24.99 DC, 14.7% decrease, p < 0.05 for all comparisons). Third-generation IC detectors result in lower objective image noise at full- and half-radiation dose levels as compared with traditional DC detectors. The magnitude of noise reduction was greater at half-radiation dose, indicating that the benefits of using novel IC detectors are greater in low and ultra-low-dose CT imaging.

  7. Helical Tomotherapy for Whole-Brain Irradiation With Integrated Boost to Multiple Brain Metastases: Evaluation of Dose Distribution Characteristics and Comparison With Alternative Techniques

    International Nuclear Information System (INIS)

    Levegrün, Sabine; Pöttgen, Christoph; Wittig, Andrea; Lübcke, Wolfgang; Abu Jawad, Jehad; Stuschke, Martin

    2013-01-01

Purpose: To quantitatively evaluate dose distribution characteristics achieved with helical tomotherapy (HT) for whole-brain irradiation (WBRT) with integrated boost (IB) to multiple brain metastases, in comparison with alternative techniques. Methods and Materials: Dose distributions for 23 patients with 81 metastases treated with WBRT (30 Gy/10 fractions) and IB (50 Gy) were analyzed. The median number of metastases per patient (N_mets) was 3 (range, 2-8). Mean values of the composite planning target volume of all metastases per patient (PTV_mets) and of the individual metastasis planning target volume (PTV_ind_met) were 8.7 ± 8.9 cm³ (range, 1.3-35.5 cm³) and 2.5 ± 4.5 cm³ (range, 0.19-24.7 cm³), respectively. Dose distributions in PTV_mets and PTV_ind_met were evaluated with respect to dose conformity (conformation number [CN], RTOG conformity index [PITV]), target coverage (TC), and homogeneity (homogeneity index [HI], ratio of maximum dose to prescription dose [MDPD]). The dependence of dose conformity on target size and N_mets was investigated. The dose distribution characteristics were benchmarked against alternative irradiation techniques identified in a systematic literature review. Results: Mean ± standard deviation of dose distribution characteristics derived for PTV_mets amounted to CN = 0.790 ± 0.101, PITV = 1.161 ± 0.154, TC = 0.95 ± 0.01, HI = 0.142 ± 0.022, and MDPD = 1.147 ± 0.029, respectively, demonstrating high dose conformity with acceptable homogeneity. Corresponding numbers for PTV_ind_met were CN = 0.708 ± 0.128, PITV = 1.174 ± 0.237, TC = 0.90 ± 0.10, HI = 0.140 ± 0.027, and MDPD = 1.129 ± 0.030, respectively. The target size had a statistically significant influence on dose conformity to PTV_mets (CN = 0.737 for PTV_mets ≤ 4.32 cm³ vs CN = 0.848 for PTV_mets > 4.32 cm³, P = .006), in contrast to N_mets. The achieved dose conformity to PTV_mets, assessed by both CN and PITV, was in all investigated volume strata…
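
    For reference, the standard definitions usually behind these indices (assuming van't Riet's conformation number and the usual coverage/RTOG ratios; homogeneity index conventions vary between studies, so HI is omitted here):

```latex
% TV: target volume, PIV: prescription isodose volume,
% TV_PIV: part of the target covered by the prescription isodose.
\mathrm{CN}   = \frac{TV_{PIV}}{TV}\cdot\frac{TV_{PIV}}{PIV}, \qquad
\mathrm{PITV} = \frac{PIV}{TV}, \qquad
\mathrm{TC}   = \frac{TV_{PIV}}{TV}, \qquad
\mathrm{MDPD} = \frac{D_{\max}}{D_{\mathrm{prescribed}}}
```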

  8. Optimized probes for dose rate measurements at local government sites and in emergency planning zones and their integration into measurement networks

    International Nuclear Information System (INIS)

    Kuca, Petr; Helebrant, Jan; Cespirova, Irena; Judas, Libor; Skala, Lukas

    2015-01-01

    The results of a security project aimed at the development of a radiation situation monitoring system using optimized probes for dose rate measurements are described. The system is suitable for use at local government sites as well as at other sites. The system includes dose rate measurement probes with the variable configuration functionality (detection part), equipment for data transfer to a central workplace (communication part) and application for collection, storage and administration of the results and their presentation at a website (presentation part). The dosimetric and other operational properties of the probes were tested and the feasibility of their integration into measurement networks using the IMS central application was examined. (orig.)

  9. Heat kernel estimates for pseudodifferential operators, fractional Laplacians and Dirichlet-to-Neumann operators

    DEFF Research Database (Denmark)

    Gimperlein, Heiko; Grubb, Gerd

    2014-01-01

The purpose of this article is to establish upper and lower estimates for the integral kernel of the semigroup exp(−tP) associated to a classical, strongly elliptic pseudodifferential operator P of positive order on a closed manifold. The Poissonian bounds generalize those obtained for perturbations of fractional powers of the Laplacian. In the selfadjoint case, extensions to t ∈ C₊ are studied. In particular, our results apply to the Dirichlet-to-Neumann semigroup.

  10. Aida-CMK multi-algorithm optimization kernel applied to analog IC sizing

    CERN Document Server

    Lourenço, Ricardo; Horta, Nuno

    2015-01-01

This work addresses the research and development of an innovative optimization kernel applied to analog integrated circuit (IC) design. In particular, it describes the modifications inside the AIDA Framework, an electronic design automation framework fully developed at the Integrated Circuits Group-LX of the Instituto de Telecomunicações, Lisbon. It focuses on AIDA-CMK, which enhances AIDA-C, the circuit optimizer component of AIDA, with a new multi-objective multi-constraint optimization module that provides a base for multiple algorithm implementations. The proposed solution implements three approaches to multi-objective multi-constraint optimization, namely an evolutionary approach with NSGAII, a swarm intelligence approach with MOPSO, and a stochastic hill climbing approach with MOSA. Moreover, the implemented structure allows easy hybridization between kernels, transforming the previously simple NSGAII optimization module into a more evolved and versatile module supporting multiple s…

  11. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited to big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
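
    A toy Python sketch of the LZW code-word extraction on which the kernel is built; the Jaccard similarity used below is a stand-in for the paper's actual kernel function, and the demo sequences are invented:

```python
def lzw_code_words(seq):
    """Parse a sequence with the LZW dictionary-building rule and
    return the set of code words (phrases) it produces."""
    dictionary = {c for c in seq}      # start from single symbols
    words, w = set(), ""
    for c in seq:
        if w + c in dictionary:
            w += c
        else:
            words.add(w)
            dictionary.add(w + c)      # new phrase enters the dictionary
            w = c
    if w:
        words.add(w)
    return words

def lzw_similarity(a, b):
    """Toy Jaccard similarity over shared LZW code words -- a stand-in
    for the paper's kernel, which is built on the same code words."""
    A, B = lzw_code_words(a), lzw_code_words(b)
    return len(A & B) / len(A | B)

print(lzw_similarity("MKVLATAV", "MKVLSTAV"))   # invented toy sequences
```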

  12. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

…methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel-based versions of these transformations. This meant implementing the kernel-based methods and developing new theory, since kernel-based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel-based methods have many useful factors and are able to capture the subtle differences in the images; this is illustrated in Figure 1, and a comparison of the most useful factor of PCA and of kernel-based PCA, respectively, is shown in Figure 2. The factor from the kernel-based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared with the traditional factor. After the orthogonal transformation, a simple thresholding…

  13. Three dimensional implementation of anisotropy corrected fast fourier transform dose calculation around brachytherapy seeds

    International Nuclear Information System (INIS)

    Kyeremeh, P.O.

    2011-01-01

Currently available brachytherapy dose computation algorithms ignore heterogeneities such as tissue-air interfaces, shielded gynaecological colpostats, and tissue-composition variations in source implants, despite dose computation errors as large as 40%. A convolution kernel has been established which takes into consideration the anisotropy of the dose distribution around a brachytherapy source and computes dose in the presence of tissue and applicator heterogeneities. The convolution kernel yields functions with polynomial and exponential terms, and the solution to the convolution integral is represented by the fast Fourier transform (FFT). The FFT has shown enough potency in accounting for errors due to these heterogeneities, and its versatility is evident from its capability of switching between fields; thus successful procedures from external beam therapy could be adopted in brachytherapy to similar effect. A dose deposition kernel was developed for a 64x64x64 matrix with wrap-around ordering and convolved with the distribution of the sources in 3D. With MATLAB's inverse fast Fourier transform, the dose rate distribution for a given array of interstitial sources, typical of brachytherapy, was calculated. The shape of the dose rate distribution peaks appeared comparable with the output expected from computerized treatment planning systems for brachytherapy. The study thus confirmed the speed and accuracy of dose computation using the FFT convolution. Although the dose rate peaks from the FFT convolution and the TPS (TG-43) did not agree quantitatively, mainly because the TPS (TG-43) initiates computations from the origin (0,0,0) whereas the FFT convolution uses sampling points N = 1, 2, 3, ..., there is a strong basis for establishing parity, since the dose rate peaks agreed qualitatively. With both modes compared, the discrepancies in the dose rates ranged between 3.6% and…
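
    A minimal NumPy sketch of the FFT convolution step described above, with wrap-around ordering; the point-source array and the isotropic 1/r² kernel are placeholders for a measured, anisotropic brachytherapy kernel:

```python
import numpy as np

def fft_convolve_dose(source, kernel):
    """Dose = source distribution convolved with a dose-deposition
    kernel, computed with FFTs under wrap-around (circular) ordering.
    Both arrays share the same 3-D grid, e.g. 64x64x64."""
    S = np.fft.fftn(source)
    K = np.fft.fftn(np.fft.ifftshift(kernel))   # kernel centered at the origin
    return np.real(np.fft.ifftn(S * K))

# Illustrative 64^3 example: two point sources and a 1/r^2-like kernel.
n = 64
src = np.zeros((n, n, n)); src[20, 32, 32] = 1.0; src[44, 32, 32] = 1.0
z, y, x = np.indices((n, n, n)) - n // 2
r2 = np.maximum(x**2 + y**2 + z**2, 1.0)        # avoid the singularity at r = 0
dose = fft_convolve_dose(src, 1.0 / r2)
```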

  14. Isocentric integration of intensity-modulated radiotherapy with electron fields improves field junction dose uniformity in postmastectomy radiotherapy.

    Science.gov (United States)

    Wright, Pauliina; Suilamo, Sami; Lindholm, Paula; Kulmala, Jarmo

    2014-08-01

In postmastectomy radiotherapy (PMRT), the dose coverage of the planning target volume (PTV) with additional margins, including the chest wall and the supraclavicular, interpectoral, internal mammary and axillar level I-III lymph nodes, is often compromised. Electron fields may improve the medial dose coverage while maintaining organ-at-risk (OAR) doses at an acceptable level, but at the cost of hot and cold spots at the electron and photon field junction. To improve PMRT dose coverage and uniformity, an isocentric technique combining tangential intensity-modulated radiotherapy (IMRT) fields with one medial electron field was implemented. For 10 postmastectomy patients, isocentric IMRT-with-electron plans were created and compared with a standard electron/photon mix and a standard tangent technique. PTV dose uniformity was evaluated based on the tolerance range (TR), i.e. the ratio of the standard deviation to the mean dose, a dice similarity coefficient (DSC), the 90% isodose coverage and the hot spot volumes. OAR and contralateral breast doses were also recorded. IMRT with electrons significantly improved the PTV dose homogeneity and conformity based on the TR and DSC values when compared with the standard electron/photon and tangent techniques (p < 0.02). The 90% isodose coverage improved to 86%, compared with 82% and 80% for the standard techniques (p < 0.02). Compared with the standard electron/photon mix, IMRT smoothed the dose gradient in the electron and photon field junction, and the volumes receiving a dose of 110% or more were reduced by a third. For all three strategies, the OAR and contralateral breast doses were within clinically tolerable limits. Based on these results, two-field IMRT combined with an electron field is a suitable strategy for PMRT.

  15. Calculation and Verification of a Planar Pencil Beam Kernel Through the Hankel Transform of Measured OARs for a Radiosurgery System with Cones

    International Nuclear Information System (INIS)

    Vargas Verdesoto, Milton X.; Alvarez Romero, Jose T.

    2010-01-01

A planar multienergetic pencil beam kernel with rotational symmetry is calculated for a BrainLAB stereotactic radiosurgery (SRS) system with cones, employing deconvolution of the off-axis ratio (OAR) profile corresponding to the 35 mm diameter cone for a 6 MV photon beam produced by a Varian 2100 C/D linear accelerator. Before the deconvolution, the experimental OAR is corrected for beam divergence and variations of the spectral fluence Φ using a boundary function BF. The function BF and the fluence Φ are transformed to the conjugate space with the zero-order Hankel transform, which is the appropriate transform given the radial symmetry of the circular beams generated by the cones. The kernel in the conjugate space is obtained as the ratio of the transform of BF to the transform of Φ, and the kernel in real space is therefore calculated as the inverse transform of the kernel in the conjugate space. To validate the kernel in real space, it is convolved with the fluence of the cones of 7.5, 12.5, 15, 17.5, 20, 22.5, 25, 30 and 35 mm in diameter. The comparison of calculated and measured OARs shows a maximum difference of 4.5% in zones of high dose gradient, and a difference of less than 2% in regions of low dose gradient. Finally, the expanded uncertainty of the kernel is estimated and reported.
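
    A rough numerical sketch of the deconvolution scheme, assuming trapezoidal quadrature for the zero-order Hankel transform; the profiles below are toy functions rather than measured data, and the denominator floor stands in for proper regularization:

```python
import numpy as np
from scipy.special import j0

def hankel0(f, r, k):
    """Zero-order Hankel transform F(k) = int f(r) J0(kr) r dr,
    approximated with trapezoidal quadrature on the radial grid r."""
    return np.array([np.trapz(f * j0(ki * r) * r, r) for ki in k])

r = np.linspace(0.0, 30.0, 301)            # radius grid, mm
k = np.linspace(0.0, 2.0, 200)             # conjugate-space grid, 1/mm
fluence = np.exp(-(r / 17.5) ** 8)         # toy fluence for a 35 mm cone
bf = np.exp(-(r / 16.0) ** 4)              # toy divergence-corrected OAR profile

# Kernel in the conjugate space: ratio of the two transforms. The naive
# division is ill-conditioned; the small floor is a crude regularizer.
phi_hat = hankel0(fluence, r, k)
kernel_hat = hankel0(bf, r, k) / np.where(np.abs(phi_hat) > 1e-9, phi_hat, 1e-9)

# Back to real space via the (self-inverse) zero-order Hankel transform.
kernel_r = np.array([np.trapz(kernel_hat * j0(ri * k) * k, k) for ri in r])
```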

  16. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels… and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on a production scale.

  17. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  18. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

Embedded systems usually require real-time behaviour. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed rationally among tasks according to their importance and urgency. The design proposed here covers the position, definition, function and principles of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.
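
    Since the original kernel targets an 8051 in C/assembly, the following is only a toy Python simulation of the priority-driven scheduling policy described; the task names, priorities and time quanta are invented:

```python
import heapq

class Task:
    def __init__(self, name, priority, work):
        self.name, self.priority, self.work = name, priority, work

def run(tasks, quantum=1):
    """Toy preemptive priority scheduler: always run the highest-priority
    ready task for one quantum (lower number = more urgent)."""
    ready = [(t.priority, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(ready)
    while ready:
        prio, i, t = heapq.heappop(ready)
        t.work -= quantum
        print(f"running {t.name} (priority {prio})")
        if t.work > 0:
            heapq.heappush(ready, (prio, i, t))   # still ready: requeue

run([Task("sensor", 0, 2), Task("log", 2, 1), Task("ui", 1, 1)])
```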

  19. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and its impact on other environmental variables, such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for forecasting monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared with approaches using either individual kernel function, demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
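
    A minimal scikit-learn sketch of such a hybrid kernel (the weights, kernel parameters and placeholder data are illustrative; any convex combination of positive semidefinite kernels is again a valid kernel, which is what makes the construction sound):

```python
import numpy as np
from sklearn.svm import SVR

def hybrid_kernel(X, Y, gamma=0.1, degree=2, w=0.7):
    """Convex combination of an RBF and a polynomial kernel:
    K = w * K_rbf + (1 - w) * K_poly."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    k_rbf = np.exp(-gamma * sq)
    k_poly = (1.0 + X @ Y.T) ** degree
    return w * k_rbf + (1 - w) * k_poly

X = np.random.rand(120, 3)             # placeholder predictors
y = np.sin(X.sum(axis=1))              # placeholder monthly flowrate proxy
model = SVR(kernel=hybrid_kernel).fit(X, y)
print(model.predict(X[:5]))
```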

  20. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a round-hole 1.0 mm screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined in the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.

  1. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

Breast cancer is one of the leading causes of death for women, so there is a great need to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel), in conjunction with Support Vector Machines (SVMs), to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of the Area Under the ROC Curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. The Hadamard kernel SVM is effective for breast cancer prediction, both for prognosis and for diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  2. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by previous work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
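
    A sketch of the refined grid search idea in one dimension; the scoring function (a hypothetical model_snr), grid sizes and zoom rule are assumptions, and the paper searches jointly over kernel and regularization parameters:

```python
import numpy as np

def refine_search(score, lo, hi, steps=3, n=9):
    """Increasingly refined 1-D grid search: evaluate `score` on a grid,
    then zoom in around the best point a few times."""
    for _ in range(steps):
        grid = np.linspace(lo, hi, n)
        best = grid[int(np.argmax([score(g) for g in grid]))]
        span = (hi - lo) / (n - 1)
        lo, hi = best - span, best + span     # zoom around the maximizer
    return best

# e.g. best_sigma = refine_search(lambda s: model_snr(s), 0.1, 10.0),
# where model_snr is a hypothetical function returning the model SNR.
```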

  3. Analysis of Advanced Fuel Kernel Technology

    International Nuclear Information System (INIS)

    Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung

    2010-03-01

The reference fuel for prismatic reactor concepts is based on the use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO₂ TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on an LEU prismatic design for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO₂) for the fertile particles. The primary reason for the selection of the UCO kernel over UO₂ was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and a reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle in a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of the lower long-term decay heat level of the natural uranium fissile particles. Ongoing economic optimization in combination with the anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts.

  4. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah

    2017-08-11

Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, they suffer from problems that affect performance, such as occlusion, rotation and scale change. This paper tackles the problem of rotation by reformulating the optimization problem for learning the correlation filter. The modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of the HOG feature to estimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy on many of the OTB50 benchmark videos with minimal additional computation.

  5. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a typical workstation running GNU/Linux, whose characteristics are given in the article, with the software necessary for measuring file system performance installed. Based on the results, conclusions are drawn and recommendations for the use of the file systems are proposed; the best ways to store data are identified and recommended.

  6. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
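
    One way to read "fixed kernel regression" here is a least-squares fit of the signal on a fixed grid of kernels, whose coefficients become the features; a hedged sketch under that reading (the centers, widths and toy voltammogram are assumptions, not the paper's settings):

```python
import numpy as np

def fixed_kernel_features(signal, x, n_kernels=10, width=None):
    """Fit the signal as a least-squares combination of Gaussian kernels
    fixed at evenly spaced centers; the coefficients are the features."""
    centers = np.linspace(x.min(), x.max(), n_kernels)
    width = width or (centers[1] - centers[0])
    Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
    return coef            # compact descriptor of the voltammogram

x = np.linspace(-1, 1, 500)
voltammogram = np.exp(-((x - 0.2) / 0.1) ** 2) + 0.05 * np.random.randn(500)
print(fixed_kernel_features(voltammogram, x))
```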

  7. Reciprocity relation for multichannel coupling kernels

    International Nuclear Information System (INIS)

    Cotanch, S.R.; Satchler, G.R.

    1981-01-01

    Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis

  8. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

…yield components, is greatly influenced by kernel dimensions (KD), such as … six linkage gaps, and it covered 3010.70 cM of the whole genome with an … Ersoz E. et al. 2009 The genetic architecture of maize flowering.

  9. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

…canonical correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent…

  10. Total integrated dose testing of solid-state scientific CD4011, CD4013, and CD4060 devices by irradiation with CO-60 gamma rays

    Science.gov (United States)

    Dantas, A. R. V.; Gauthier, M. K.; Coss, J. R.

    1985-01-01

The total integrated dose response of three CMOS devices manufactured by Solid State Scientific has been measured using Co-60 gamma rays. Key parameter measurements were made and compared for each device type. The data show that the CD4011, CD4013, and CD4060 produced by this manufacturer should not be used in any environment where radiation levels might exceed 1,000 rad(Si).

  11. Simultaneous saccharification and cofermentation of lignocellulosic residues from commercial furfural production and corn kernels using different nutrient media

    Directory of Open Access Journals (Sweden)

    Cristhian Carrasco

    2011-07-01

Full Text Available Background: As the supplies of starch grain and sugar cane, currently the main feedstocks for bioethanol production, become limited, lignocelluloses will be sought as alternative materials for bioethanol production. Production of cellulosic ethanol is still cost-inefficient because of the low final ethanol concentration and the addition of nutrients. We report the use of simultaneous saccharification and cofermentation (SSCF) of lignocellulosic residues from commercial furfural production (furfural residue, FR) and corn kernels to compare different nutritional media. The final ethanol concentration, yield, number of live yeast cells, and yeast-cell death ratio were investigated to evaluate the effectiveness of integrating cellulosic and starch ethanol. Results: Both the ethanol yield and the number of live yeast cells increased with increasing corn-kernel concentration, whereas the yeast-cell death ratio decreased in SSCF of FR and corn kernels. An ethanol concentration of 73.1 g/L at 120 h, corresponding to a 101.1% ethanol yield based on FR cellulose and corn starch, was obtained in SSCF of 7.5% FR and 14.5% corn kernels with mineral-salt medium. SSCF could simultaneously convert cellulose into ethanol from both corn kernels and FR, and the SSCF ethanol yield was similar between the organic and mineral-salt media. Conclusions: Starch ethanol promotes cellulosic ethanol by providing important nutrients for fermentative organisms, and in turn cellulosic ethanol promotes starch ethanol by providing cellulosic enzymes that convert the cellulosic polysaccharides in starch materials into additional ethanol. It is feasible to produce ethanol in SSCF of FR and corn kernels with mineral-salt medium. It would be cost-efficient to produce ethanol in SSCF of high concentrations of water-insoluble solids of lignocellulosic materials and corn kernels. Compared with prehydrolysis and a fed-batch strategy using lignocellulosic materials, addition of starch…

  12. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Moriya, S; Sato, M [Komazawa University, Setagaya, Tokyo (Japan); Tachibana, H [National Cancer Center Hospital East, Kashiwa, Chiba (Japan)

    2015-06-15

Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPUs). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into the TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in this process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-threaded computation was performed. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU was compared to that on the CPU, along with the accuracy of the PDD. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the calculation on the GPU was thus 4800 times faster than on the CPU. The PDD curve for the GPU matched that for the CPU perfectly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of time and may be more accurate in inhomogeneous regions. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points; thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  13. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Biological stress response terminology: Integrating the concepts of adaptive response and preconditioning stress within a hormetic dose-response framework

    International Nuclear Information System (INIS)

    Calabrese, Edward J.; Bachmann, Kenneth A.; Bailer, A. John; Bolger, P. Michael; Borak, Jonathan; Cai, Lu; Cedergreen, Nina; Cherian, M. George; Chiueh, Chuang C.; Clarkson, Thomas W.; Cook, Ralph R.; Diamond, David M.; Doolittle, David J.; Dorato, Michael A.; Duke, Stephen O.; Feinendegen, Ludwig; Gardner, Donald E.; Hart, Ronald W.; Hastings, Kenneth L.; Hayes, A. Wallace; Hoffmann, George R.; Ives, John A.; Jaworowski, Zbigniew; Johnson, Thomas E.; Jonas, Wayne B.; Kaminski, Norbert E.; Keller, John G.; Klaunig, James E.; Knudsen, Thomas B.; Kozumbo, Walter J.; Lettieri, Teresa; Liu, Shu-Zheng; Maisseu, Andre; Maynard, Kenneth I.; Masoro, Edward J.; McClellan, Roger O.; Mehendale, Harihara M.; Mothersill, Carmel; Newlin, David B.; Nigg, Herbert N.; Oehme, Frederick W.; Phalen, Robert F.; Philbert, Martin A.; Rattan, Suresh I.S.; Riviere, Jim E.; Rodricks, Joseph; Sapolsky, Robert M.; Scott, Bobby R.; Seymour, Colin; Sinclair, David A.; Smith-Sonneborn, Joan; Snow, Elizabeth T.; Spear, Linda; Stevenson, Donald E.; Thomas, Yolene; Tubiana, Maurice; Williams, Gary M.; Mattson, Mark P.

    2007-01-01

    Many biological subdisciplines that regularly assess dose-response relationships have identified an evolutionarily conserved process in which a low dose of a stressful stimulus activates an adaptive response that increases the resistance of the cell or organism to a moderate to severe level of stress. Due to a lack of frequent interaction among scientists in these many areas, there has emerged a broad range of terms that describe such dose-response relationships. This situation has become problematic because the different terms describe a family of similar biological responses (e.g., adaptive response, preconditioning, hormesis), adversely affecting interdisciplinary communication, and possibly even obscuring generalizable features and central biological concepts. With support from scientists in a broad range of disciplines, this article offers a set of recommendations we believe can achieve greater conceptual harmony in dose-response terminology, as well as better understanding and communication across the broad spectrum of biological disciplines

  15. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

Full Text Available In many fields of biosystems engineering it is common to find works in which the statistical information analysed violates the basic hypotheses necessary for conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow statistical analysis despite those violations. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used instead of an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are stated for the application of non-parametric estimation methods. These statistical rules are the first step toward building a user-method interface for the consistent application of kernel estimation by non-expert users. To this end, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases, the models to be applied in different situations were defined on the basis of simulations. Several applications of kernel estimation in biosystems engineering are also analysed in this review.
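
    A minimal sketch of univariate kernel density estimation, one of the estimators reviewed; Silverman's rule-of-thumb bandwidth and the gamma-distributed placeholder data are assumptions:

```python
import numpy as np

def kde(x_grid, data, h=None):
    """Univariate Gaussian kernel density estimate; Silverman's rule of
    thumb is used for the bandwidth when h is not given."""
    data = np.asarray(data)
    h = h or 1.06 * data.std() * len(data) ** (-1 / 5)
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

samples = np.random.gamma(shape=2.0, scale=1.5, size=300)  # placeholder data
grid = np.linspace(0, 12, 200)
density = kde(grid, samples)
```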

  16. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  17. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  18. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  19. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas; of L'Huillier, Redish and Tandy; and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations, which are generalizations of the Lovelace three-body equations, are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems.

  20. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
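
    As a reference point for one of the benchmarked kernels, here is a plain CSR sparse matrix-vector multiply; a real Cell implementation would be hand-tuned C with SIMD intrinsics and explicit local-store DMA, so this Python version only pins down the computation being measured, and the example data are assumptions.

    ```python
    # Reference CSR sparse matrix-vector multiply (one of the paper's kernels);
    # data here are illustrative, not the paper's inputs.
    import numpy as np

    def spmv_csr(values, col_idx, row_ptr, x):
        """y = A @ x for a CSR matrix with len(row_ptr) - 1 rows."""
        n = len(row_ptr) - 1
        y = np.zeros(n)
        for i in range(n):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

    # 2x2 example: A = [[10, 0], [3, 4]]
    values  = np.array([10.0, 3.0, 4.0])
    col_idx = np.array([0, 0, 1])
    row_ptr = np.array([0, 1, 3])
    print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0])))  # -> [10. 11.]
    ```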

  1. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the areas found by the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
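
    A hedged sketch of the kernel-interpolation step: species centroids are smoothed with a kernel whose bandwidth is each species' centroid-to-farthest-occurrence distance, and the summed surface highlights candidate areas of endemism. The Gaussian kernel choice here is an illustrative assumption, not necessarily the GIE default.

    ```python
    # Sketch of GIE-style kernel interpolation over species centroids.
    import numpy as np

    def gie_surface(centroids, radii, grid_xy):
        """Sum of Gaussian kernels centred on species distribution centroids.

        centroids : (n, 2) centroid coordinates of n species distributions
        radii     : (n,) centroid-to-farthest-occurrence distance per species,
                    used here as the kernel bandwidth
        grid_xy   : (m, 2) evaluation points
        """
        surface = np.zeros(len(grid_xy))
        for (cx, cy), r in zip(centroids, radii):
            d2 = (grid_xy[:, 0] - cx) ** 2 + (grid_xy[:, 1] - cy) ** 2
            surface += np.exp(-d2 / (2.0 * r ** 2))
        # high values = overlap of many species -> candidate area of endemism
        return surface
    ```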

  3. Analysis of a Statistical Relationship Between Dose and Error Tallies in Semiconductor Digital Integrated Circuits for Application to Radiation Monitoring Over a Wireless Sensor Network

    Science.gov (United States)

    Colins, Karen; Li, Liqian; Liu, Yu

    2017-05-01

    Mass production of widely used semiconductor digital integrated circuits (ICs) has lowered unit costs to the level of ordinary daily consumables of a few dollars. It is therefore reasonable to contemplate the idea of an engineered system that consumes unshielded low-cost ICs for the purpose of measuring gamma radiation dose. Underlying the idea is the premise of a measurable correlation between an observable property of ICs and radiation dose. Accumulation of radiation-damage-induced state changes or error events is such a property. If correct, the premise could make possible low-cost wide-area radiation dose measurement systems, instantiated as wireless sensor networks (WSNs) with unshielded consumable ICs as nodes, communicating error events to a remote base station. The premise has been investigated quantitatively for the first time in laboratory experiments and related analyses performed at the Canadian Nuclear Laboratories. State changes or error events were recorded in real time during irradiation of samples of ICs of different types in a 60Co gamma cell. From the error-event sequences, empirical distribution functions of dose were generated. The distribution functions were inverted and probabilities scaled by total error events, to yield plots of the relationship between dose and error tallies. Positive correlation was observed, and discrete functional dependence of dose quantiles on error tallies was measured, demonstrating the correctness of the premise. The idea of an engineered system that consumes unshielded low-cost ICs in a WSN, for the purpose of measuring gamma radiation dose over wide areas, is therefore tenable.
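
    A hedged sketch of the analysis pipeline described above: doses recorded at successive error events define an empirical distribution function, which is inverted to read off the dose quantile associated with an error tally. The event doses below are simulated stand-ins, not the laboratory data.

    ```python
    # Sketch: map an IC error tally to a dose quantile via the empirical
    # distribution of event doses (simulated here, not measured data).
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical cumulative dose (Gy) at which each successive IC error occurred
    event_doses = np.sort(rng.gamma(shape=2.0, scale=50.0, size=100))

    def dose_quantile(error_tally, event_doses):
        """Dose below which `error_tally` of the recorded errors fall."""
        k = min(error_tally, len(event_doses))
        return event_doses[k - 1] if k > 0 else 0.0

    for tally in (10, 50, 90):
        print(tally, "errors ->", round(dose_quantile(tally, event_doses), 1), "Gy")
    ```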

  4. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
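
    As a sketch of the comparison, the snippet below fits the same SVM with linear, polynomial and RBF kernels; synthetic features stand in for the multi-temporal polarimetric features, and all hyperparameters are illustrative assumptions.

    ```python
    # Compare SVM kernels on synthetic stand-in features (not SAR data).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=9, n_classes=3,
                               n_informative=6, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for kernel, params in [("linear", {}), ("poly", {"degree": 3}), ("rbf", {"gamma": "scale"})]:
        clf = SVC(kernel=kernel, C=1.0, **params).fit(X_tr, y_tr)
        print(kernel, "overall accuracy:", round(clf.score(X_te, y_te), 3))
    ```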

  5. Parameters used in the environmental pathways (DESCARTES) and radiological dose (CIDER) modules of the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC) for the air pathway. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, S.F.; Farris, W.T.; Napier, B.A.; Ikenberry, T.A.; Gilbert, R.O.

    1992-09-01

    This letter report is a description of work performed for the Hanford Environmental Dose Reconstruction (HEDR) Project. The HEDR Project was established to estimate the radiation doses to individuals resulting from releases of radionuclides from the Hanford Site since 1944. This work is being done by staff at Battelle, Pacific Northwest Laboratories (Battelle) under a contract with the Centers for Disease Control (CDC), with technical direction provided by an independent Technical Steering Panel (TSP). The objective of this report is to document the environmental accumulation and dose-assessment parameters that will be used to estimate the impacts of past Hanford Site airborne releases. During 1993, dose estimates made by staff at Battelle will be used by the Fred Hutchinson Cancer Research Center as part of the Hanford Thyroid Disease Study (HTDS). This document contains information on parameters that are specific to the airborne release of the radionuclide iodine-131. Future versions of this document will include parameter information pertinent to other pathways and radionuclides.

  6. Commutators of integral operators with variable kernels on Hardy ...

    Indian Academy of Sciences (India)

  7. On an integral transform

    Directory of Open Access Journals (Sweden)

    D. Naylor

    1986-01-01

    Full Text Available This paper establishes properties of a convolution type integral transform whose kernel is a Macdonald type Bessel function of zero order. An inversion formula is developed and the transform is applied to obtain the solution of some related integral equations.
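
    For orientation, a convolution-type transform with a zero-order Macdonald (modified Bessel) kernel can be written as below; the paper's exact kernel argument and normalization are not reproduced in the abstract, so this display is illustrative only.

    ```latex
    % Illustrative convolution-type transform with the zero-order Macdonald
    % kernel K_0; the paper's exact argument and normalization may differ.
    \[
      F(x) \;=\; \int_0^{\infty} K_{0}\bigl(\lvert x - y \rvert\bigr)\, f(y)\,\mathrm{d}y .
    \]
    ```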

  8. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, yet the details of its evolution are not well known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  9. Replacement Value of Palm Kernel Meal for Maize on Carcass ...

    African Journals Online (AJOL)

    This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.

  10. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  11. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is

  12. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
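
    A minimal sketch of the approach on synthetic data: predictors are mapped nonlinearly via an RBF kernel and a ridge penalty is applied; the data-generating process and hyperparameters are assumptions for illustration.

    ```python
    # Kernel ridge regression with many predictors (synthetic illustration).
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 30))            # many predictors
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.05)  # ridge penalty + RBF map
    model.fit(X[:150], y[:150])                   # estimate in the implicit feature space
    print("out-of-sample R^2:", round(model.score(X[150:], y[150:]), 3))
    ```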

  13. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented along with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  14. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  15. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  16. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  17. Design and construction of palm kernel cracking and separation ...

    African Journals Online (AJOL)

    Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...

  18. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  19. Genetic relationship between plant growth, shoot and kernel sizes in ...

    African Journals Online (AJOL)

    Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...

  20. Boundary singularity of Poisson and harmonic Bergman kernels

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2015-01-01

    Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170

  1. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  2. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  3. Resolvent kernel for the Kohn Laplacian on Heisenberg groups

    Directory of Open Access Journals (Sweden)

    Neur Eddine Askour

    2002-07-01

    Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.

  4. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  5. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  6. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but the results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measures of merit, demonstrating the general usefulness of the measures and of the individual kernels. In general, it was concluded that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs.
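
    For concreteness, here is the standard 1-D cubic B-spline kernel, a bell-shaped kernel of the kind the paper favours, with its 1-D normalization 2/(3h); treating it as representative of the 20 kernels analyzed is our assumption.

    ```python
    # Standard 1-D cubic B-spline SPH kernel with smoothing length h.
    import numpy as np

    def cubic_spline_kernel_1d(r, h):
        """W(r, h) for 1-D SPH; compactly supported on |r| <= 2h."""
        q = np.abs(r) / h
        sigma = 2.0 / (3.0 * h)  # 1-D normalization so that integral of W dr = 1
        w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
             np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
        return sigma * w

    r = np.linspace(-3.0, 3.0, 601)
    w = cubic_spline_kernel_1d(r, h=1.0)
    integral = float(np.sum(w) * (r[1] - r[0]))
    print("approximate integral (should be ~1):", round(integral, 4))
    ```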

  7. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  9. Computing an element in the lexicographic kernel of a game

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, J.

    2002-01-01

    The lexicographic kernel of a game lexicographically maximizes the surplusses $s_{ij}$ (rather than the excesses as would the nucleolus). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surplusses $s_{ij}(x)$ corresponding to a

  10. ZZ THERMOS, Multigroup P0 to P5 Thermal Scattering Kernels from ENDF/B Scattering Law Data

    International Nuclear Information System (INIS)

    McCrosson, F.J.; Finch, D.R.

    1975-01-01

    1 - Description of problem or function: Number of groups: 30-group THERMOS thermal scattering kernels. Nuclides: Molecular H 2 O, Molecular D 2 O, Graphite, Polyethylene, Benzene, Zr bound in ZrHx, H bound in ZrHx, Beryllium-9, Beryllium Oxide, Uranium Dioxide. Origin: ENDF/B library. Weighting Spectrum: yes. These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code (NESC Abstract 368). To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows - (Material: ZA/Temperatures (degrees K)): Molecular H 2 O: 100.0/296, 350, 400, 450, 500, 600, Molecular D 2 O: 101.0/296, 350, 400, 450, 500, 600, Graphite: 6000.0/296, 400, 500, 600, 700, 800, Polyethylene: 205.0/296, 350 Benzene: 106.0/296, 350, 400, 450, 500, 600, Zr bound in ZrHx: 203.0/296, 400, 500, 600, 700, 800, H bound in ZrHx: 230.0/296, 400, 500, 600, 700, 800, Beryllium-9: 4009.0/296, 400, 500, 600, 700, 800, Beryllium Oxide: 200.0/296, 400, 500, 600, 700, 800, Uranium Dioxide: 207.0/296, 400, 500, 600, 700, 800 2 - Method of solution: Kernel generation is performed by direct integration of the thermal scattering law data to obtain the differential scattering cross sections for each Legendre order. The integral parameter calculation is done by precision integration of the diffusion length equation for several moderator absorption cross sections followed by a

  11. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing provides also the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves which results in a relevant choice of those parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  12. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
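
    A toy sketch of the kernelized reconstruction idea: the image is parameterized as x = K α, with K a kernel matrix built on anatomical feature vectors, and the ML-EM update is run on the coefficients α. The system matrix, kernel width and data below are illustrative assumptions, not a scanner model.

    ```python
    # Toy kernelized ML-EM for PET: image x = K @ alpha, K from anatomical features.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_bins = 16, 24
    P = rng.random((n_bins, n_pix))            # system (projection) matrix, toy
    feat = rng.random((n_pix, 3))              # anatomical feature vectors per pixel
    d2 = ((feat[:, None, :] - feat[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / 0.2)                      # radial kernel on anatomical features
    K /= K.sum(1, keepdims=True)               # row-normalized kernel matrix

    y = rng.poisson(P @ (K @ np.ones(n_pix)) * 5.0) + 1e-9  # measured counts (toy)

    alpha = np.ones(n_pix)
    sens = K.T @ (P.T @ np.ones(n_bins))       # sensitivity term (P K)^T 1
    for _ in range(50):                        # EM updates on kernel coefficients
        ratio = y / (P @ (K @ alpha) + 1e-12)
        alpha *= (K.T @ (P.T @ ratio)) / sens
    x = K @ alpha                              # reconstructed image
    print(x.round(2))
    ```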

  13. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have a high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.

  14. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
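
    For contrast, the RBF-style wavelet kernel family that the authors set their compactly supported construction against can be sketched as follows (the translation-invariant form popularized by Zhang et al.); the dilation parameter a is an illustrative assumption.

    ```python
    # Translation-invariant wavelet kernel for SVMs (RBF-style family).
    import numpy as np

    def wavelet_kernel(x, y, a=1.0):
        """k(x, y) = prod_i h((x_i - y_i)/a), with h(u) = cos(1.75 u) * exp(-u^2 / 2)."""
        u = (np.asarray(x) - np.asarray(y)) / a
        return float(np.prod(np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0)))

    print(wavelet_kernel([0.0, 1.0], [0.1, 0.9]))  # similarity of two feature vectors
    ```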

  15. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    Science.gov (United States)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomness and fuzziness, the existing support vector kernels do not approximate the random curve of the sales time series well in the quadratic continuous integral space. In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of an application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and other methods is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.

  16. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces the grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the alternatives. The effectiveness and practicability of the decision-making method are further verified by an example: evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and with the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method, after determining the index weights based on grey correlation.

  17. Application of Reproducing Kernel Method for Solving Nonlinear Fredholm-Volterra Integrodifferential Equations

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    Full Text Available This paper investigates the numerical solution of nonlinear Fredholm-Volterra integro-differential equations using the reproducing kernel Hilbert space method. The solution $u(x)$ is represented in the form of a series in the reproducing kernel space. In the meantime, the $n$-term approximate solution $u_n(x)$ is obtained and is proved to converge to the exact solution $u(x)$. Furthermore, the proposed method has the advantage that it is possible to pick any point in the interval of integration, and the approximate solution and its derivative will be applicable there as well. Numerical examples are included to demonstrate the accuracy and applicability of the presented technique. The results reveal that the method is very effective and simple.

  18. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and non-linear classification problems.
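
    A hedged sketch of a clinical-style kernel in the spirit described above: per-variable similarities normalized by each variable's range r, averaged so every variable contributes equally. The exact definition in the paper may differ; the variable names and ranges below are assumptions.

    ```python
    # Clinical-style kernel: range-normalized per-variable similarity.
    import numpy as np

    def clinical_kernel(x, y, ranges, nominal):
        """Average per-variable similarity between patients x and y.

        ranges  : range r of each variable (max - min over the data)
        nominal : boolean mask, True where a variable is nominal/categorical
        """
        sims = np.where(nominal,
                        (np.asarray(x) == np.asarray(y)).astype(float),  # exact match
                        (ranges - np.abs(np.asarray(x) - np.asarray(y))) / ranges)
        return float(sims.mean())

    # Example variables: age, gender (nominal), lesion size (mm)
    ranges  = np.array([60.0, 1.0, 50.0])
    nominal = np.array([False, True, False])
    print(clinical_kernel([45, 1, 12], [50, 1, 20], ranges, nominal))
    ```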

  19. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark and what may be understood by successful integration.

  20. Maize kernel antioxidants and their potential involvement in Fusarium ear rot resistance.

    Science.gov (United States)

    Picot, Adeline; Atanasova-Pénichon, Vessela; Pons, Sebastien; Marchegay, Gisèle; Barreau, Christian; Pinson-Gadais, Laëtitia; Roucolle, Joël; Daveau, Florie; Caron, Daniel; Richard-Forget, Florence

    2013-04-10

    The potential involvement of antioxidants (α-tocopherol, lutein, zeaxanthin, β-carotene, and ferulic acid) in the resistance of maize varieties to Fusarium ear rot was the focus of this study. These antioxidants were present at all maize kernel stages, indicating that the fumonisin-producing fungi (mainly Fusarium verticillioides and Fusarium proliferatum) are likely to encounter them during ear colonization. The effect of these compounds on fumonisin biosynthesis was studied in F. verticillioides liquid cultures. In carotenoid-treated cultures, no inhibitory effect on fumonisin accumulation was observed, while a potent inhibitory activity was obtained for sublethal doses of α-tocopherol (0.1 mM) and ferulic acid (1 mM). In a set of genotypes with moderate to high susceptibility to Fusarium ear rot, ferulic acid was significantly lower in immature kernels of the very susceptible group. Such a relation was nonexistent for tocopherols and carotenoids. Also, ferulic acid in immature kernels ranged from 3 to 8.5 mg/g, i.e., at levels consistent with the in vitro inhibitory concentration. Overall, our data support the view that ferulic acid may contribute to resistance to Fusarium ear rot and/or to limiting fumonisin accumulation.

  1. Dose Evaluation of Fractionated Schema and Distance From Tumor to Spinal Cord for Spinal SBRT with Simultaneous Integrated Boost: A Preliminary Study.

    Science.gov (United States)

    Yang, Hao; Cai, Bo-ning; Wang, Xiao-shen; Cong, Xiao-hu; Xu, Wei; Wang, Jin-yuan; Yang, Jun; Xu, Shou-ping; Ju, Zhong-jian; Ma, Lin

    2016-02-23

    BACKGROUND This study investigated and quantified the dosimetric impact of the distance from the tumor to the spinal cord and of the fractionation scheme for patients who received stereotactic body radiation therapy (SBRT) and a hypofractionated simultaneous integrated boost (HF-SIB). MATERIAL AND METHODS Six modified planning target volumes (PTVs) for 5 patients with spinal metastases were created by artificial uniform extension of the PTV in the region adjacent to the spinal cord, with a specified minimum tumor-to-cord distance (0-5 mm). The prescription dose (biologically effective dose, BED) was 70 Gy delivered in different fractionation schemes (1, 3, 5, and 10 fractions). PTV V100, Dmin, D98, D95, and D1, spinal cord dose, conformity index (CI), and V30 were measured and compared. RESULTS PTV-to-cord distance influenced PTV V100, Dmin, D98, and D95, and the fractionation scheme influenced Dmin and D98, with a significant difference. Distances of ≥2 mm, ≥1 mm, ≥1 mm, and ≥0 mm from PTV to spinal cord meet dose requirements in 1, 3, 5, and 10 fractions, respectively. Spinal cord dose, CI, and V30 were not affected by PTV-to-cord distance or fractionation scheme. CONCLUSIONS Target volume coverage, Dmin, D98, and D95 were directly correlated with distance from the spinal cord for spine SBRT and HF-SIB. Based on our study, ≥2 mm, ≥1 mm, ≥1 mm, and ≥0 mm distance from PTV to spinal cord meets dose requirements in 1, 3, 5 and 10 fractions, respectively.
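
    The study equates the 1-, 3-, 5- and 10-fraction schemes through the BED; for reference, the standard linear-quadratic form is shown below, where n is the number of fractions, d the dose per fraction, and α/β a tissue-specific parameter (the symbols are standard, not values taken from the paper).

    ```latex
    % Standard linear-quadratic biologically effective dose (BED) for n
    % fractions of size d; alpha/beta is tissue-specific and not given here.
    \[
      \mathrm{BED} \;=\; n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
    \]
    ```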

  2. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to so-called high-temperature nuclear reactors [fr]

  3. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
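
    A hedged sketch of a random-walk graph kernel on two adjacency matrices, the type of similarity GRAPE computes for local atomic environments; the geometric-series (resolvent) form and the decay parameter lambda are illustrative assumptions.

    ```python
    # Random-walk graph kernel between two adjacency matrices.
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.1):
        """Counts matching walks in the direct-product graph of A1 and A2:
        k = 1^T (I - lam * (A1 kron A2))^{-1} 1, for lam small enough to converge."""
        Ax = np.kron(A1, A2)                      # direct (tensor) product graph
        n = Ax.shape[0]
        resolvent = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
        return float(np.ones(n) @ resolvent)

    # Two toy 3-atom environments as adjacency matrices
    A1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    A2 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
    print(random_walk_kernel(A1, A1), random_walk_kernel(A1, A2))
    ```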

  4. Lung toxicity determination by in vitro exposure at the air liquid interface with an integrated online dose measurement

    International Nuclear Information System (INIS)

    Muelhopt, Sonja; Paur, H-R; Diabate, S; Weiss, C; Krebs, T

    2009-01-01

    Epidemiological studies show an association between the concentration of ultrafine particles in the atmosphere and the rate of mortality or morbidity due to respiratory and cardiovascular diseases. For the quantitative assessment of the toxicity of airborne nanoparticles, the dose-response relationship is tested in in vitro test systems using bioassays of cell cultures as sensors. For the air-liquid interface exposure of cell cultures to aerosols, the Karlsruhe exposure system was developed. The human lung cell cultures are exposed in VITROCELL (registered) system modules with a constant flow of the conditioned aerosol. After exposure, the cells are analyzed to measure biological responses such as viability, inflammation or oxidative stress. For the determination of the dose-response relationship, accurate knowledge of the deposited particle mass is essential. A new online method was developed in the Karlsruhe exposure system: the sensor of a quartz crystal microbalance is placed in an exposure chamber instead of the membrane insert and exposed to the aerosol in the same way as the cell cultures. The deposited mass per unit area is monitored as a function of exposure time, showing a linear relationship for a constant aerosol flow with a defined particle concentration. A comparison of this new dose signal to a dosimetry method using fluorescein sodium particles shows a very good correlation between the sensor signal of the quartz crystal microbalance and the deposited mass on the membranes as determined by spectroscopy. This system for the first time provides an online dose measurement for in vitro experiments with nanoparticles.

  5. Integrated effects of reduction dose of nitrogen fertilizer and mode of biofertilizer application on soil health under mung bean cropping system

    Directory of Open Access Journals (Sweden)

    Naba Kumar Mondal

    2014-12-01

    Full Text Available To study the integrated effects of a reduced dose of chemical fertilizer with different methods and times of application of Rhizobium biofertilizer on soil health and fertility under mung bean (Vigna radiata) cropping, field experiments were carried out over three years (2009, 2010, and 2011) in West Bengal, India, in a randomized block design. In the first year, varietal screening of mung bean under the recommended dose of chemical fertilizer (20:40:20) was performed with five available varieties adapted to the local climate. Reduced nitrogen fertilizer doses (20%, 30%, 40%, 50%, and 60%) and the recommended dose, as well as the Rhizobium biofertilizer applications (basal, soil, and spray), were applied, and data were recorded for pH, electrical conductivity, organic carbon, total nitrogen, total phosphorus, total potassium, and bacterial population of the soil, both before sowing and after harvesting. The results indicated significant improvement in soil quality, with a gradual buildup of soil macronutrient status after harvesting of the crop. Application of biofertilizer contributed significantly towards higher soil organic matter, nitrogen, phosphorus, and potassium. The use of biofertilizer significantly improved the soil bacterial population count, thereby improving soil health.

  6. An independent dose calculation algorithm for MLC-based stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr

    2007-01-01

    We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC), for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static-MLC-field-based inverse planning. The dose calculation utilizes a pencil-beam kernel. However, an explicit analytical integration results in a closed form for rectangular beamlets defined by single leaf pairs. This approach reduces spatial integration to summation and leads to a simple method of determining the model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which speeds up the calculation of two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS was used as input data for the algorithm. The calculated results were compared with the TPS and with measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm × 0.6 cm (<6%) and some ion chamber measurements in a high-gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments.
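
    The closed form itself is not given in the abstract; as a hedged sketch of how analytical integration over a rectangle removes the spatial integral, the following assumes a single-Gaussian pencil kernel, for which the beamlet dose factorizes into error functions. Real pencil-beam models use a fitted sum of such terms with measured parameters.

    ```python
    # Dose at (x, y) from a rectangular beamlet [x1,x2]x[y1,y2] under an
    # assumed single-Gaussian pencil kernel of width sigma: the 2-D integral
    # collapses to products of error functions (summation, not integration).
    from math import erf, sqrt

    def beamlet_dose(x, y, x1, x2, y1, y2, sigma=0.3, fluence=1.0):
        gx = 0.5 * (erf((x2 - x) / (sqrt(2) * sigma)) - erf((x1 - x) / (sqrt(2) * sigma)))
        gy = 0.5 * (erf((y2 - y) / (sqrt(2) * sigma)) - erf((y1 - y) / (sqrt(2) * sigma)))
        return fluence * gx * gy

    # Total dose for an MLC segment = sum over its leaf-pair rectangles
    segment = [(-0.5, 0.5, -1.0, 1.0), (0.5, 1.5, -0.6, 0.6)]
    print(sum(beamlet_dose(0.0, 0.0, *r) for r in segment))
    ```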

  7. Dose calculation in eye brachytherapy with Ir-192 threads using the Sievert integral corrected for attenuation and scattering with the Meisberger polynomials

    International Nuclear Information System (INIS)

    Vivanco, M.G. Bernui de; Cardenas R, A.

    2006-01-01

    Ocular brachytherapy, often the only alternative for conserving the visual organ in patients with ocular cancer, is carried out at the National Institute of Neoplastic Diseases (INEN) using threads of Iridium-192. These are placed radially on the interior surface of a spherical cap of 18 K gold; the cap remains in the eye until the dose prescribed by the physician is reached. The main objective of this work is to calculate, in a correct and practical way, how long the ocular brachytherapy treatment should last to reach the prescribed dose. To reach this objective, the Sievert integral corrected for attenuation and scattering effects (Meisberger polynomials) is used, evaluated by Simpson's method. The calculation by means of the Sievert integral does not take into account the scattering produced by the gold cap, nor the variation of the exposure rate constant with distance. The calculations by means of the Sievert integral are compared with those obtained using the PENELOPE Monte Carlo simulation code, where it is observed that they agree at distances from the surface of the cap greater than or equal to 2 mm. (Author)
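
    The report evaluates the Sievert integral by Simpson's method; below is a hedged sketch of that quadrature for the classical filtered line-source form of the integral. The filtration value μt and the angular limit are illustrative assumptions.

    ```python
    # Sievert integral S(theta, mu*t) = integral_0^theta exp(-mu*t / cos(phi)) dphi,
    # evaluated with composite Simpson's rule.
    import numpy as np

    def sievert_integral(theta, mu_t, n=200):
        """Simpson's rule on [0, theta]; n must be even."""
        phi = np.linspace(0.0, theta, n + 1)
        f = np.exp(-mu_t / np.cos(phi))
        h = theta / n
        return h / 3.0 * (f[0] + f[-1] + 4.0 * f[1:-1:2].sum() + 2.0 * f[2:-1:2].sum())

    print(sievert_integral(np.pi / 4, mu_t=0.1))  # e.g., 45 degrees, mu*t = 0.1
    ```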

  8. Verification of helical tomotherapy delivery using autoassociative kernel regression

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Garvey, Dustin R.; Wesley Hines, J.; Robison, Ben H.; Outten, Samuel S.

    2007-01-01

    Quality assurance (QA) is a topic of major concern in the field of intensity modulated radiation therapy (IMRT). The standard of practice for IMRT is to perform QA testing for individual patients to verify that the dose distribution will be delivered to the patient. The purpose of this study was to develop a new technique that could eventually be used to automatically evaluate helical tomotherapy treatments during delivery using exit detector data. This technique uses an autoassociative kernel regression (AAKR) model to detect errors in tomotherapy delivery. AAKR is a nonparametric model that predicts a group of correct sensor values when supplied a group of sensor values that is corrupted or contains faults such as machine failure. This modeling scheme is especially suited to monitoring the fluence values found in the exit detector data because it is able to learn the complex detector data relationships. The scheme still applies when detector data are summed over many frames with a low temporal resolution and with variable beam attenuation resulting from patient movement. Delivery sequences from three archived patients (prostate, lung, and head and neck) were used in this study. Each delivery sequence was modified by reducing the opening time of random individual multileaf collimator (MLC) leaves by random amounts. The error and error-free treatments were delivered with different phantoms in the path of the beam. Multiple AAKR models were developed and tested by the investigators using combinations of the stored exit detector data sets from each delivery. The models proved robust and were able to predict the correct or error-free values for a projection in which a single MLC leaf decreased its opening time by less than 10 msec. The model was also able to detect machine output errors. The average uncertainty value for the unfaulted projections ranged from 0.4% to 1.8% of the detector
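
    A minimal sketch of the AAKR idea used here: a (possibly faulted) vector of exit detector values is mapped to a kernel-weighted average of stored fault-free vectors, so the residual exposes the faulted channel. The bandwidth and data are illustrative assumptions.

    ```python
    # Autoassociative kernel regression: correct a query vector towards a
    # kernel-weighted combination of fault-free training vectors.
    import numpy as np

    def aakr_predict(X_train, x_query, bandwidth=1.0):
        """Return the estimated fault-free version of x_query."""
        d = np.linalg.norm(X_train - x_query, axis=1)          # distances to memory
        w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))         # Gaussian kernel weights
        return (w[:, None] * X_train).sum(0) / w.sum()         # weighted average

    X_train = np.random.default_rng(2).random((100, 8))        # fault-free exit data (toy)
    x_fault = X_train[0].copy()
    x_fault[3] -= 0.2                                          # simulate a stuck MLC leaf
    x_hat = aakr_predict(X_train, x_fault, bandwidth=0.5)
    print("residual on faulted channel:", round(x_fault[3] - x_hat[3], 3))
    ```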

  9. Development of Point Kernel Shielding Analysis Computer Program Implementing Recent Nuclear Data and Graphic User Interfaces

    International Nuclear Information System (INIS)

    Kang, Sang Ho; Lee, Seung Gi; Chung, Chan Young; Lee, Choon Sik; Lee, Jai Ki

    2001-01-01

    In order to comply with revised national regulations on radiological protection and to implement recent nuclear data and dose conversion factors, KOPEC developed a new point kernel gamma and beta ray shielding analysis computer program. This new code, named VisualShield, adopted mass attenuation coefficients and buildup factors from recent ANSI/ANS standards and flux-to-dose conversion factors from International Commission on Radiological Protection (ICRP) Publication 74 for the estimation of the effective/equivalent dose recommended in ICRP 60. VisualShield utilizes graphical user interfaces and 3-D visualization of the geometric configuration for preparing input data sets and analyzing results, which leads users to error-free processing with visual effects. Code validation and data analysis were performed by comparing the results of various calculations to the data outputs of previous programs such as MCNP 4B, ISOSHLD-II, QAD-CGGP, etc.
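
    For orientation, the core quantity a point-kernel code of this kind evaluates is the attenuated, buildup-corrected point-source flux; the sketch below uses a simple linear buildup approximation, and all numerical values are illustrative assumptions rather than VisualShield's data.

    ```python
    # Basic point-kernel estimate: flux from an isotropic point source behind
    # a slab shield, with a simple (assumed) linear buildup approximation.
    import numpy as np

    def point_kernel_flux(S, r, mu, t, a=1.0):
        """Flux at distance r (cm) behind shield thickness t (cm).

        S: source strength (photons/s), mu: attenuation coefficient (1/cm),
        B = 1 + a * mu * t: illustrative linear buildup factor.
        """
        B = 1.0 + a * mu * t
        return S * B * np.exp(-mu * t) / (4.0 * np.pi * r ** 2)

    # 1e9 photons/s source, 30 cm away, behind 5 cm of shield with mu = 0.5 /cm
    print(f"{point_kernel_flux(1e9, 30.0, 0.5, 5.0):.3e} photons/cm^2/s")
    ```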

  10. Methods, safety, and early clinical outcomes of dose escalation using simultaneous integrated and sequential boosts in patients with locally advanced gynecologic malignancies.

    Science.gov (United States)

    Boyle, John; Craciunescu, Oana; Steffey, Beverly; Cai, Jing; Chino, Junzo

    2014-11-01

    To evaluate the safety of dose-escalated radiotherapy using a simultaneous integrated boost technique in patients with locally advanced gynecological malignancies. Thirty-nine women with locally advanced gynecological malignancies were treated with intensity modulated radiation therapy utilizing a simultaneous integrated boost (SIB) technique for gross disease in the para-aortic and/or pelvic nodal basins, sidewall extension, or residual primary disease. Women were treated to 45 Gy in 1.8 Gy fractions to elective nodal regions. Gross disease was simultaneously treated to 55 Gy in 2.2 Gy fractions (n=44 sites). An additional sequential boost of 10 Gy in 2 Gy fractions was delivered if deemed appropriate (n=29 sites). Acute and late toxicity, local control in the treated volumes (LC), overall survival (OS), and distant metastases (DM) were assessed. All were treated with a SIB to a dose of 55 Gy. Twenty-four patients were subsequently treated with a sequential boost to a median dose of 65 Gy. Median follow-up was 18 months. Rates of acute > grade 2 gastrointestinal (GI), genitourinary (GU), and hematologic (heme) toxicities were 2.5%, 0%, and 30%, respectively. There were no grade 4 acute toxicities. At one year, grade 1-2 late GI toxicities were 24.5%. There were no grade 3 or 4 late GI toxicities. Rates of grade 1-2 late GU toxicities were 12.7%. There were no grade 3 or 4 late GU toxicities. Dose-escalated radiotherapy using a SIB results in acceptable rates of acute toxicity. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Hyperfractionated accelerated radiotherapy with concomitant integrated boost of 70-75 Gy in 5 weeks for advanced head and neck cancer. A phase I dose escalation study

    Energy Technology Data Exchange (ETDEWEB)

    Cvek, J.; Skacelikova, E.; Otahal, B.; Halamka, M.; Feltl, D. [University Hospital Ostrava (Czech Republic). Dept. of Oncology; Kubes, J. [University Hospital Bulovka, Prague (Czech Republic). Dept. of Radiation Oncology; Kominek, P. [University Hospital Ostrava (Czech Republic). Dept. of Otolaryngology

    2012-08-15

    Background and purpose: The present study was performed to evaluate the feasibility of a new, 5-week regimen of 70-75 Gy hyperfractionated accelerated radiotherapy with concomitant integrated boost (HARTCIB) for locally advanced, inoperable head and neck cancer. Methods and materials: A total of 39 patients with very advanced, stage IV nonmetastatic head and neck squamous cell carcinoma (median gross tumor volume 72 ml) were included in this phase I dose escalation study. A total of 50 fractions of intensity-modulated radiotherapy (IMRT) were administered twice daily over 5 weeks. The prescribed total dose/dose per fraction for the planning target volume (PTV_tumor) were 70 Gy in 1.4 Gy fractions, 72.5 Gy in 1.45 Gy fractions, and 75 Gy in 1.5 Gy fractions for 10, 13, and 16 patients, respectively. Uninvolved lymphatic nodes (PTV_uninvolved) were irradiated with 55 Gy in 1.1 Gy fractions using the concomitant integrated boost. Results: Acute toxicity was evaluated according to the RTOG/EORTC scale; the incidence of grade 3 mucositis was 51% in the oral cavity/pharynx and 0% in skin, and the recovery time was ≤ 9 weeks for all patients. Late toxicity was evaluated in patients in complete remission according to the RTOG/EORTC scale. No grade 3/4 late toxicity was observed. The 1-year locoregional progression-free survival was 50% and overall survival was 55%. Conclusion: HARTCIB (75 Gy in 5 weeks) is feasible for patients deemed unsuitable for chemoradiation. Acute toxicity was lower than predicted from radiobiological models; the duration of dysphagia and confluent mucositis was particularly short. Better conformity of radiotherapy allows the use of more intensive altered fractionation schedules compared with older studies. These results suggest that further dose escalation might be possible when highly conformal techniques (e.g., stereotactic radiotherapy) are used.

  12. [Integrity].

    Science.gov (United States)

    Gómez Rodríguez, Rafael Ángel

    2014-01-01

    To say that someone possesses integrity is to claim that the person is largely predictable in his or her responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: the first refers to the integrity of the person in its bodily, psychosocial, and intellectual elements; in the second sense, integrity is a virtue. Another facet is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of patient autonomy. Scientific research is a very important field in medicine, and it is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interest is replaced by selfishness, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.

  13. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select the subset of SVs used in computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
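
    The kernel expansion itself is just a weighted sum over the retained SVs, so subset selection amounts to choosing which indices enter that sum. A minimal sketch, assuming an RBF kernel and a simple probability-proportional-to-|alpha| sampling rule; the paper's actual stochastic indexing scheme is not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf(x, y, gamma=0.5):
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def kernel_expansion(x, sv, alpha, idx):
        """Evaluate f(x) = sum over i in idx of alpha_i * K(sv_i, x)."""
        return sum(alpha[i] * rbf(sv[i], x) for i in idx)

    # Toy model: 200 stored SVs, evaluated with a stochastic subset of 50
    sv = rng.normal(size=(200, 5))
    alpha = rng.normal(size=200)
    probs = np.abs(alpha) / np.abs(alpha).sum()   # favor influential SVs
    idx = rng.choice(200, size=50, replace=False, p=probs)

    x = rng.normal(size=5)
    print(kernel_expansion(x, sv, alpha, idx))    # approximate decision value
    ```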

  14. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms that has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learn multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy on different data sets.
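
    A Kullback–Leibler kernel compares samples as probability distributions rather than as points. A common symmetric construction in the literature (e.g., Moreno et al.) exponentiates the symmetrized divergence; the sketch below assumes that standard form for discrete histograms, not the exact derivation used by KLMKB:

    ```python
    import numpy as np

    def kl_div(p, q, eps=1e-12):
        """Discrete KL divergence D(p || q), smoothed to avoid log(0)."""
        p = p + eps; q = q + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def kl_kernel(p, q, a=1.0):
        """Symmetric KL kernel: exp(-a * (D(p||q) + D(q||p)))."""
        return np.exp(-a * (kl_div(p, q) + kl_div(q, p)))

    # Two spectral histograms (toy data)
    p = np.array([0.1, 0.4, 0.3, 0.2])
    q = np.array([0.2, 0.3, 0.3, 0.2])
    print(kl_kernel(p, q))   # close to 1 for similar distributions
    ```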

  15. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
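
    The key idea is that the combined kernel weight depends on the training sample, not just on the kernel. A minimal sketch of evaluating such a decision function, with per-sample weights beta[i, m] assumed to be given; the joint learning of weights and classifier in PS-MKL is not shown:

    ```python
    import numpy as np

    def ps_mkl_decision(x, X_train, alpha, beta, kernels):
        """f(x) = sum_i alpha_i * sum_m beta[i, m] * K_m(x_i, x)."""
        score = 0.0
        for i, xi in enumerate(X_train):
            k_combined = sum(beta[i, m] * K(xi, x) for m, K in enumerate(kernels))
            score += alpha[i] * k_combined
        return score

    rng = np.random.default_rng(1)
    kernels = [
        lambda a, b: float(np.exp(-0.5 * np.sum((a - b) ** 2))),  # RBF
        lambda a, b: float(a @ b),                                # linear
    ]
    X_train = rng.normal(size=(10, 4))
    alpha = rng.normal(size=10)            # learned dual coefficients (toy)
    beta = rng.dirichlet([1, 1], size=10)  # per-sample kernel weights (toy)
    print(ps_mkl_decision(rng.normal(size=4), X_train, alpha, beta, kernels))
    ```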

  17. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either by linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee that they generalize well to the test data, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  18. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  19. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, where one alternately solves SVMs in the dual and updates the kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle solves SVMs in the primal by using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method have analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
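
    When the group norms ||w_m|| of the per-kernel weight vectors are available, the lp-norm MKL literature (e.g., Kloft et al.) gives the kernel weights in closed form. A minimal numpy sketch of that update, under the assumption that this is the kind of analytical solution the abstract refers to:

    ```python
    import numpy as np

    def lp_mkl_weights(w_norms, p):
        """Closed-form kernel weights for lp-norm MKL.

        theta_m = ||w_m||^(2/(p+1)) / (sum_k ||w_k||^(2p/(p+1)))^(1/p),
        normalized so that the lp norm of theta equals 1.
        """
        w_norms = np.asarray(w_norms, dtype=float)
        num = w_norms ** (2.0 / (p + 1.0))
        den = np.sum(w_norms ** (2.0 * p / (p + 1.0))) ** (1.0 / p)
        return num / den

    theta = lp_mkl_weights([0.5, 1.0, 2.0], p=2.0)
    print(theta, np.sum(theta ** 2.0) ** 0.5)  # weights and their l2 norm (= 1)
    ```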

  20. WRAITH, Internal and External Doses from Atmospheric Release of Isotopes

    International Nuclear Information System (INIS)

    1984-01-01

    1 - Description of problem or function: WRAITH calculates the atmospheric transport of radioactive material to each of a number of downwind receptor points and the external and internal doses to a reference man at each of the receptor points. 2 - Method of solution: The movement of the released material through the atmosphere is calculated using a bivariate straight-line Gaussian distribution model with Pasquill values for standard deviations. The quantity of material in the released cloud is modified during its transit time to account for radioactive decay and daughter production. External doses due to exposure to the cloud can be calculated using a semi-infinite cloud approximation or a 'finite plume' three-dimensional point-kernel numerical integration technique. Internal doses due to acute inhalation are calculated using the ICRP Task Group Model and a four-segmented gastrointestinal tract model. Translocation of the material between body compartments and retention in the body compartments are calculated using multiple exponential retention functions. Internal doses to each organ are calculated as sums of cross-organ doses, with each target organ irradiated by radioactive material in a number of source organs. All doses are calculated in rads, with separate values determined for high-LET and low-LET radiation. 3 - Restrictions on the complexity of the problem: Doses to only three target organs (total body, red bone marrow, and the lungs) are considered, and acute inhalation is the only pathway for material to enter the body. The dose response model is not valid for high-LET radiation other than alphas. The high-LET calculation ignores the contributions of neutrons, spontaneous fission fragments, and alpha recoil nuclei.
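
    The transport step is a standard straight-line Gaussian plume. A minimal sketch of the ground-level air-concentration calculation, assuming a fixed effective release height and illustrative dispersion parameters in place of the Pasquill sigma curves:

    ```python
    import math

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Air concentration from a continuous point release.

        Q       -- release rate (Bq/s)
        u       -- wind speed (m/s); the plume travels along x
        (y, z)  -- receptor crosswind offset and height (m); sigma_y and
                   sigma_z are evaluated at the downwind distance x
        H       -- effective release height (m); ground reflection included
        """
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Receptor 1 km downwind; sigma values are placeholders, not Pasquill fits
    chi = gaussian_plume(Q=1e10, u=5.0, y=0.0, z=1.0, H=50.0,
                         sigma_y=80.0, sigma_z=40.0)
    print(f"{chi:.3e} Bq/m^3")
    ```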

  1. Identification of molecular candidates and interaction networks via integrative toxicogenomic analysis in a human cell line following low-dose exposure to the carcinogenic metals cadmium and nickel.

    Science.gov (United States)

    Kwon, Jee Young; Weon, Jong-Il; Koedrith, Preeyaporn; Park, Kang-Sik; Kim, Im Soon; Seo, Young Rok

    2013-09-01

    Cadmium and nickel have been classified as carcinogenic to humans by the World Health Organization's International Agency for Research on Cancer. Given their prevalence in the environment, the fact that cadmium and nickel may cause diseases including cancer even at low doses is a cause for concern. However, the exact mechanisms underlying the toxicological effects induced by low-dose exposure to cadmium and nickel remain to be elucidated. Furthermore, it has recently been recognized that integrative analysis of DNA, mRNA and proteins is required to discover biomarkers and signaling networks relevant to human toxicant exposure. In the present study, we examined the deleterious effects of chronic low-dose exposure of either cadmium or nickel on global profiling of DNA copy number variation, mRNA and proteins. Array comparative genomic hybridization, gene expression microarray and functional proteomics were conducted, and a bioinformatics tool, which predicted signaling pathways, was applied to integrate data for each heavy metal separately and together. We found distinctive signaling networks associated with subchronic low-dose exposure to cadmium and nickel, and identified pathways common to both. ACTB, HSP90AA1, HSPA5 and HSPA8, which are key mediators of pathways related to apoptosis, proliferation and neoplastic processes, were key mediators of the same pathways in low-dose nickel and cadmium exposure in particular. CASP-associated signaling pathways involving CASP3, CASP7 and CASP9 were observed in cadmium-exposed cells. We found that HSP90AA1, one of the main modulators, interacted with HIF1A, AR and BCL2 in nickel-exposed cells. Interestingly, we found that HSP90AA1 was involved in the BCL2-associated apoptotic pathway in the nickel-only data, whereas this gene interacted with several genes functioning in CASP-associated apoptotic signaling in the cadmium-only data. Additionally, JUN and FASN were main modulators in nickel-responsive signaling pathways. Our

  2. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
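
    Parameterizing the kernel's metric through the matrix exponential keeps it positive definite for any unconstrained symmetric A, which is what makes gradient-based adaptation straightforward. A minimal sketch of evaluating such a general Gaussian kernel; the gradient computation and the constant-trace constraint from the paper are omitted:

    ```python
    import numpy as np
    from scipy.linalg import expm

    def general_gaussian_kernel(x, z, A):
        """k(x, z) = exp(-(x-z)^T M (x-z)) with M = expm(A).

        A is any symmetric matrix, so M is symmetric positive definite by
        construction; no explicit constraint is needed during adaptation.
        """
        A = 0.5 * (A + A.T)      # enforce symmetry
        M = expm(A)              # point on the SPD manifold
        d = x - z
        return float(np.exp(-d @ M @ d))

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3))
    x, z = rng.normal(size=3), rng.normal(size=3)
    print(general_gaussian_kernel(x, z, A))
    ```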

  3. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli.
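
    For a kernel expansion f(x) = sum_i alpha_i K(x_i, x), the sensitivity map can be computed from the analytic input gradient; with an RBF kernel the gradient has a simple closed form. A minimal sketch, assuming the mean-squared-gradient definition of the sensitivity map averaged over the data (the paper's exact estimator may differ):

    ```python
    import numpy as np

    def rbf_model_gradient(x, X_sv, alpha, gamma):
        """Gradient of f(x) = sum_i alpha_i exp(-gamma ||x_i - x||^2) wrt x."""
        diffs = X_sv - x                                 # (n_sv, d)
        k = np.exp(-gamma * np.sum(diffs**2, axis=1))    # kernel values
        return 2.0 * gamma * (alpha * k) @ diffs         # (d,)

    def sensitivity_map(X, X_sv, alpha, gamma):
        """Per-feature mean squared model gradient over the data in X."""
        grads = np.array([rbf_model_gradient(x, X_sv, alpha, gamma) for x in X])
        return np.mean(grads**2, axis=0)

    rng = np.random.default_rng(0)
    X_sv = rng.normal(size=(30, 8))    # support points of a trained model (toy)
    alpha = rng.normal(size=30)
    X = rng.normal(size=(100, 8))      # evaluation data (e.g., voxel patterns)
    print(sensitivity_map(X, X_sv, alpha, gamma=0.1))
    ```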

  4. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by the friabilin protein and depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  5. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep learning and kernel learning. The model, which has a multilayer multiple-kernel structure, is optimized layer by layer with support vector machine parameters and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  6. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with nonlinear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...

  7. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
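
    The dimension-reduction step is standard kernel PCA. The scikit-learn sketch below illustrates the projection to a low-dimensional feature space together with an approximate pre-image map, which is the kind of reduced space a sampler like LMCMC could explore; the data here are a toy stand-in for the paper's spatial random fields:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)

    # Toy stand-in for high-dimensional, nonlinearly correlated parameters
    base = rng.normal(size=(500, 3))
    X = np.hstack([base, np.sin(base), base**2]) \
        + 0.01 * rng.normal(size=(500, 9))

    # Project to a low-dimensional feature space; keep an inverse map so
    # samples drawn in feature space can be lifted back to parameter space
    kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                     fit_inverse_transform=True)
    Z = kpca.fit_transform(X)           # (500, 3) feature-space coordinates
    X_back = kpca.inverse_transform(Z)  # approximate pre-images

    print(Z.shape, np.mean((X - X_back) ** 2))  # reconstruction error
    ```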

  8. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
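
    The Epanechnikov kernel has compact support, which is what limits boundary bias relative to the Gaussian. A minimal sketch of both weight functions in a plain kernel density smoother; this is illustrative only, as the KE continuization machinery is more involved:

    ```python
    import numpy as np

    def gaussian_kernel(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

    def epanechnikov_kernel(u):
        """K(u) = 0.75 (1 - u^2) on |u| <= 1, zero outside (compact support)."""
        return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

    def kde(x_grid, samples, h, kernel):
        """Kernel density estimate with bandwidth h."""
        u = (x_grid[:, None] - samples[None, :]) / h
        return kernel(u).mean(axis=1) / h

    rng = np.random.default_rng(0)
    scores = rng.binomial(40, 0.7, size=500).astype(float)  # skewed scores
    grid = np.linspace(0, 40, 201)
    dens_g = kde(grid, scores, h=1.5, kernel=gaussian_kernel)
    dens_e = kde(grid, scores, h=1.5, kernel=epanechnikov_kernel)
    print(dens_g[-1], dens_e[-1])  # behavior near the upper score boundary
    ```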

  9. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared to the best kernel for a particular scenario but has much greater power than poor kernel choices.

  10. Comparisons of geoid models over Alaska computed with different Stokes' kernel modifications

    Science.gov (United States)

    Li, X.; Wang, Y.

    2011-01-01

    Various Stokes kernel modification methods have been developed over the years. The goal of this paper is to test the most commonly used Stokes kernel modifications numerically, using Alaska as a test area and EGM08 as a reference model. The tests show that some methods are more sensitive than others to the integration cap sizes. For instance, using the methods of Vaníček and Kleusberg or Featherstone et al. with kernel modification at degree 60, the geoid decreases by 30 cm (on average) when the cap size increases from 1° to 25°. The corresponding changes in the methods of Wong and Gore and of Heck and Grüninger are only at the 1 cm level. At high modification degrees, above 360, the methods of Vaníček and Kleusberg and of Featherstone et al. become unstable because of numerical problems in the modification coefficients; similar conclusions have been reported by Featherstone (2003). In contrast, the methods of Wong and Gore and Heck and Grüninger and the least-squares spectral combination are stable at any modification degree, though they do not provide as good a fit as the best case of the Molodenskii-type methods at the GPS/leveling benchmarks. However, certain tests for choosing the cap size and modification degree have to be performed in advance to avoid abrupt mean geoid changes if the latter methods are applied.
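
    As a concrete example of a spectral modification, the Wong and Gore approach removes the low-degree part of the Stokes kernel up to a chosen degree L. A minimal sketch using the closed-form kernel and its Legendre series, following the standard geodetic formulas rather than any specific implementation from this paper:

    ```python
    import numpy as np
    from scipy.special import eval_legendre

    def stokes_kernel(psi):
        """Closed-form Stokes kernel (Heiskanen & Moritz convention)."""
        s = np.sin(psi / 2.0)
        return (1.0 / s + 1.0 - 6.0 * s - 5.0 * np.cos(psi)
                - 3.0 * np.cos(psi) * np.log(s + s * s))

    def wong_gore_kernel(psi, L):
        """Wong-Gore modification: remove spherical-harmonic degrees 2..L."""
        t = np.cos(psi)
        low = sum((2 * n + 1) / (n - 1) * eval_legendre(n, t)
                  for n in range(2, L + 1))
        return stokes_kernel(psi) - low

    psi = np.radians(1.0)   # spherical distance within the integration cap
    print(stokes_kernel(psi), wong_gore_kernel(psi, L=60))
    ```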

  11. Forecasting Crude Oil Price Using EEMD and RVM with Adaptive PSO-Based Kernels

    Directory of Open Access Journals (Sweden)

    Taiyong Li

    2016-12-01

    Full Text Available Crude oil, as one of the most important energy sources in the world, plays a crucial role in global economic events. Accurate prediction of the crude oil price is an interesting and challenging task for enterprises, governments, investors, and researchers. To cope with this issue, in this paper we propose a method integrating ensemble empirical mode decomposition (EEMD), adaptive particle swarm optimization (APSO), and the relevance vector machine (RVM), namely EEMD-APSO-RVM, to predict the crude oil price based on the "decomposition and ensemble" framework. Specifically, the raw time series of the crude oil price is first decomposed into several intrinsic mode functions (IMFs) and one residue by EEMD. Then, RVM with combined kernels is applied to predict the target value for the residue and each IMF individually. To improve the prediction performance of each component, an extended particle swarm optimization (PSO) is utilized to simultaneously optimize the weights and parameters of the single kernels in the combined kernel of the RVM. Finally, simple addition is used to aggregate all the predicted results of the components into an ensemble result as the final result. Extensive experiments were conducted on the West Texas Intermediate (WTI) crude oil spot price to illustrate and evaluate the proposed method. The experimental results are superior to those of several state-of-the-art benchmark methods in terms of root mean squared error (RMSE), mean absolute percent error (MAPE), and directional statistic (Dstat), showing that the proposed EEMD-APSO-RVM is promising for forecasting the crude oil price.
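
    The "decomposition and ensemble" recipe is: split the series into components, forecast each separately, and add the forecasts back together. A minimal sketch, using a crude two-band split in place of EEMD and scikit-learn's KernelRidge as a stand-in for an RVM with combined kernels; both substitutions are assumptions made purely for illustration:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def make_lagged(series, n_lags=4):
        """Turn a 1-D series into (lagged features, next value) pairs."""
        X = np.array([series[i:i + n_lags]
                      for i in range(len(series) - n_lags)])
        y = series[n_lags:]
        return X, y

    def forecast_by_components(components, n_lags=4):
        """Fit one kernel regressor per component; sum the forecasts."""
        total = 0.0
        for comp in components:
            X, y = make_lagged(comp, n_lags)
            model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)
            total += model.predict(comp[-n_lags:].reshape(1, -1))[0]
        return total

    # The paper uses EEMD here; a moving-average split is a crude stand-in
    rng = np.random.default_rng(0)
    price = np.cumsum(rng.normal(size=300)) + 50.0
    trend = np.convolve(price, np.ones(10) / 10, mode="same")
    components = [trend, price - trend]   # stand-in for IMFs plus residue
    print(forecast_by_components(components))
    ```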

  12. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    Science.gov (United States)

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. Crucial to optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions of future load demand. Both models are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon on data taken from the New England area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean absolute percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
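
    Of the two kernel machines, GPR is the more readily reproduced. A minimal sketch of a Gaussian-kernel GPR forecaster on lagged daily loads using scikit-learn; the synthetic data and hyperparameters are illustrative, not the paper's New England setup:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic daily load with weekly seasonality (stand-in for real data)
    days = np.arange(400)
    load = 100 + 10 * np.sin(2 * np.pi * days / 7) \
           + rng.normal(0, 1, size=400)

    # Features: previous 7 days; target: the next day's load
    n_lags = 7
    X = np.array([load[i:i + n_lags] for i in range(len(load) - n_lags)])
    y = load[n_lags:]

    kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel).fit(X[:-30], y[:-30])

    pred = gpr.predict(X[-30:])                    # 30-day-ahead evaluation
    mape = np.mean(np.abs((y[-30:] - pred) / y[-30:])) * 100
    print(f"MAPE over hold-out month: {mape:.2f}%")
    ```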

  13. Fast scalar data buffering interface in Linux 2.6 kernel

    International Nuclear Information System (INIS)

    Homs, A.

    2012-01-01

    Key instrumentation devices like counter/timers, analog-to-digital converters, and encoders provide scalar data input. Many of them allow fast acquisitions but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the /sys virtual file system and hot-plug device support. (author)

  15. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Calculation of dose-rate conversion factors for external exposure to photons and electrons

    International Nuclear Information System (INIS)

    Kocher, D.C.

    1978-01-01

    Methods are presented for the calculation of dose-rate conversion factors for external exposure to photon and electron radiation from radioactive decay. A dose-rate conversion factor is defined as the dose-equivalent rate per unit radionuclide concentration. Exposure modes considered are immersion in contaminated air, immersion in contaminated water, and irradiation from a contaminated ground surface. For each radiation type and exposure mode, dose-rate conversion factors are derived for tissue-equivalent material at the body surface of an exposed individual. In addition, photon dose-rate conversion factors are estimated for 22 body organs. The calculations are based on the assumption that the exposure medium is infinite in extent and that the radionuclide concentration is uniform. The dose-rate conversion factors for immersion in contaminated air and water then follow from the requirement that all of the energy emitted in the radioactive decay is absorbed in the infinite medium. Dose-rate conversion factors for ground-surface exposure are calculated at a reference location above a smooth, infinite plane using the point-kernel integration method and known specific absorbed fractions for photons and electrons in air
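
    For the ground-surface mode, the point-kernel integration over a smooth infinite plane reduces to a one-dimensional integral in the source-detector distance: integrating exp(-mu r)/(4 pi r^2) over the plane gives a flux of (C_A/2) E1(mu h), with E1 the exponential integral. A minimal sketch for a monoenergetic photon emitter, ignoring buildup and using placeholder coefficients; the report's actual kernels include buildup factors and specific absorbed fractions:

    ```python
    from scipy.special import exp1

    def plane_source_flux(C_A, mu_air, h):
        """Uncollided photon flux at height h above a uniform infinite plane.

        C_A in photons/(m^2 s), mu_air in 1/m, h in m.
        """
        return 0.5 * C_A * exp1(mu_air * h)

    # Reference height 1 m; illustrative attenuation coefficient for air
    phi = plane_source_flux(C_A=1.0, mu_air=9e-3, h=1.0)
    flux_to_dose = 3.8e-12   # placeholder Sv per (photon/m^2), not tabulated
    print(phi, phi * flux_to_dose * 3600,
          "Sv/h per unit surface concentration")
    ```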

  17. Heart dose reduction in breast cancer treatment with simultaneous integrated boost. Comparison of treatment planning and dosimetry for a novel hybrid technique and 3D-CRT

    International Nuclear Information System (INIS)

    Joest, Vincent; Kretschmer, Matthias; Sabatino, Marcello; Wuerschmidt, Florian; Dahle, Joerg; Lorenzen, Joern; Ueberle, Friedrich

    2015-01-01

    The present study compares in silico treatment plans of clinically established three-dimensional conformal radiotherapy (3D-CRT) with a hybrid technique consisting of intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) during normally fractionated radiation of mammary carcinomas with simultaneous integrated boost, on the basis of dose-volume histogram (DVH) parameters. Radiation treatment planning was performed with a hybrid and a 3D-CRT treatment plan for 20 patients. Hybrid plans were implemented with two tangential IMRT fields and a VMAT field in the angular range of the tangents. Verification of the plans was performed with a manufacturer-independent measurement system consisting of a detector array and rotation unit. The mean heart dose for the entire patient collective was 3.6 ± 2.5 Gy for 3D-CRT and 2.9 ± 2.1 Gy for the hybrid technique (p < 0.01). For the left side (n = 10), the mean values for the left anterior descending artery were 21.8 ± 7.4 Gy for 3D-CRT and 17.6 ± 7.4 Gy for the hybrid technique (p < 0.01). The mean values of the ipsilateral lung were 11.9 ± 1.6 Gy for 3D-CRT and 10.5 ± 1.3 Gy for the hybrid technique (p < 0.01). Calculated dose distributions in the hybrid arm were in good accordance with measured doses (on average 95.6 ± 0.5% for γ < 1 at 3%/3 mm). The difference in mean treatment time per fraction was 7 s in favor of 3D-CRT. Compared with the established 3D-CRT technique, the hybrid technique allows for a decrease in dose, particularly of the mean heart and lung dose, with comparable target volume coverage and without a disadvantageous low-dose load to contralateral structures. Uncomplicated implementation of the hybrid technique was demonstrated in this context. The hybrid technique combines the advantages of tangential IMRT with the superior sparing of organs at risk by VMAT. (orig.)

  18. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  19. Natural oils affect the human skin integrity and the percutaneous penetration of benzoic acid dose-dependently

    DEFF Research Database (Denmark)

    Nielsen, Jesper Bo

    2006-01-01

    The study investigated whether three natural oils (eucalyptus oil, tea tree oil, peppermint oil) would affect the skin integrity and the percutaneous penetration of benzoic acid when applied topically in relevant concentrations. An experimental in vitro model using static diffusion cells mounted with human breast or abdominal skin was used.

  20. Some closed form expressions for the generalized secant integrals

    International Nuclear Information System (INIS)

    Nadarajah, Saralees

    2007-01-01

    The generalized secant integrals of the form I_a(ψ,b) = b^a ∫_0^ψ exp(-b sec θ) (sec θ)^a dθ for b > 0 and 0 < ψ ≤ π/2 are considered. No closed form expressions for I_a(ψ,b) have been known except when both a and ψ are fixed. In this note, we provide several closed form expressions for I_a(ψ,b) applicable for a wide range of values of a and b. We establish their numerical accuracy over the methods presented in Michieli [1998. Point kernel calculations of dose fields from line sources using expanded polynomial form of buildup factor data: generalized secant integral series representations. Radiat. Phys. Chem. 51, 121-128; 2001. Some properties of generalized secant integrals: extended definition and recurrence relations. Radiat. Phys. Chem. 60, 551-554]. Finally, a simple computer program is provided for I_a(ψ,b) that could be used widely.
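
    For readers who just need values of I_a(ψ,b), direct numerical quadrature is straightforward and provides a reference against which any closed-form expression can be checked. A minimal sketch; this is not the closed forms or the program from the paper:

    ```python
    import numpy as np
    from scipy.integrate import quad

    def generalized_secant_integral(a, psi, b):
        """I_a(psi, b) = b**a * integral_0^psi exp(-b sec t) sec(t)**a dt."""
        def integrand(t):
            sec = 1.0 / np.cos(t)
            return np.exp(-b * sec) * sec**a
        val, _ = quad(integrand, 0.0, psi)
        return b**a * val

    # Example: a = 2, psi = 60 degrees, b = 1
    print(generalized_secant_integral(2.0, np.pi / 3, 1.0))
    ```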