WorldWideScience

Sample records for convolution superposition calculations

  1. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    International Nuclear Information System (INIS)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-01-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques that use available convolution superposition results and new noise-filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image; multi-scale methods (wavelets or contourlets) were used for denoising the residual, and the iterations are initialized with the CS data. In the second approach, we used a frequency-splitting technique based on quadrature filtering to combine low-frequency components derived from MC simulations with high-frequency components derived from CS calculations. The rationale is to take the scattering tails as well as the dose levels in the high-dose region from the MC calculations, which presumably incorporate scatter more accurately, while high-frequency details are taken from the CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source Dose Planning Method (DPM) MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.

  2. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques that use available convolution superposition results and new noise-filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image; multi-scale methods (wavelets or contourlets) were used for denoising the residual, and the iterations are initialized with the CS data. In the second approach, we used a frequency-splitting technique based on quadrature filtering to combine low-frequency components derived from MC simulations with high-frequency components derived from CS calculations. The rationale is to take the scattering tails as well as the dose levels in the high-dose region from the MC calculations, which presumably incorporate scatter more accurately, while high-frequency details are taken from the CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source Dose Planning Method (DPM) MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
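
    Both records above describe the same frequency-splitting idea, so a single sketch suffices. Below is a minimal NumPy illustration of combining the low-frequency content of a noisy MC dose grid with the high-frequency content of a smooth CS dose grid through a complementary pair of 3D Butterworth filters; the function names, cutoff, and filter order are illustrative assumptions, not values from the paper.

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=4):
    """3D Butterworth low-pass response on the FFT frequency grid.
    cutoff is expressed as a fraction of the Nyquist frequency."""
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    r = np.sqrt(sum(g ** 2 for g in grids)) / 0.5  # radius in Nyquist units
    return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

def frequency_split(mc_dose, cs_dose, cutoff=0.2, order=4):
    """Take low frequencies from the (noisy) MC dose and high frequencies
    from the (smooth) CS dose. The low/high responses sum to unity, so a
    noise-free signal would pass through unchanged."""
    lp = butterworth_lowpass(mc_dose.shape, cutoff, order)
    hp = 1.0 - lp
    combined = np.fft.fftn(mc_dose) * lp + np.fft.fftn(cs_dose) * hp
    return np.real(np.fft.ifftn(combined))
```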

  3. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l'Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)]

    2010-03-15

    Purpose: Graphics processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground up to exploit the graphics card's strengths and avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, and kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported, as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single-threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single-threaded mode. An acceleration factor of 46x was obtained for the total energy released per unit mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions were also obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical for obtaining significant acceleration factors. These results can potentially have a significant impact on complex dose delivery techniques requiring intensive dose calculations, such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy, where dose results must be obtained rapidly.
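
    As a concrete illustration of the TERMA step that this and later GPU papers accelerate, the sketch below traces a single ray through attenuation coefficients and deposits exponentially attenuated energy fluence. It is a deliberately simplified example (monoenergetic beam, no beam hardening or off-axis softening, density folded into mu); none of it is taken from the PlanUNC or GPU source code.

```python
import numpy as np

def terma_along_ray(fluence0, mu, step):
    """TERMA (total energy released per unit mass) along one ray.
    fluence0: incident energy fluence entering the ray.
    mu: per-voxel linear attenuation coefficients (1/cm) along the ray.
    step: geometric step length per voxel (cm).
    Simplification: monoenergetic beam, density folded into mu."""
    optical_depth = np.cumsum(mu * step)
    # Fluence entering each voxel: unattenuated for the first voxel,
    # then attenuated by everything upstream of it.
    psi = fluence0 * np.exp(-np.concatenate(([0.0], optical_depth[:-1])))
    return mu * psi
```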

  4. Fluence-convolution broad-beam (FCBB) dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lu Weiguo; Chen Mingli, E-mail: wlu@tomotherapy.co [TomoTherapy Inc., 1240 Deming Way, Madison, WI 53717 (United States)

    2010-12-07

    IMRT optimization requires a fast yet relatively accurate algorithm to calculate the iteration dose with a small memory demand. In this paper, we present a dose calculation algorithm that approaches these goals. By decomposing the infinitesimal pencil beam (IPB) kernel into a central-axis (CAX) component and a lateral spread function (LSF), and working in the beam's eye view (BEV), we established a non-voxel- and non-beamlet-based dose calculation formula. Both the LSF and the CAX component are determined by a commissioning procedure that uses the collapsed-cone convolution/superposition (CCCS) method as the standard dose engine. The proposed dose calculation involves a 2D convolution of a fluence map with the LSF, followed by ray tracing based on the CAX lookup table with radiological distance and divergence corrections, resulting in O(N^3) complexity in both space and time. This simple algorithm is orders of magnitude faster than the CCCS method. Without pre-calculation of beamlets, its implementation is also orders of magnitude smaller than the conventional voxel-based beamlet-superposition (VBS) approach. We compared the presented algorithm with the CCCS method using simulated and clinical cases. The agreement was generally within 3% for a homogeneous phantom and 5% for heterogeneous and clinical cases. Combined with the 'adaptive full dose correction', the algorithm is well suited to calculating the iteration dose during IMRT optimization.
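
    A rough NumPy/SciPy sketch of the two FCBB ingredients named above, a 2D fluence-map convolution with the LSF followed by a CAX lookup at the radiological depth of each voxel, is given below. The divergence correction is omitted and all names are hypothetical; the commissioned LSF and CAX tables would come from a CCCS engine as described.

```python
import numpy as np
from scipy.signal import fftconvolve

def fcbb_dose(fluence_map, lsf_kernel, cax_table, radiological_depth_mm):
    """Toy fluence-convolution broad-beam (FCBB) dose estimate.
    fluence_map: 2D fluence in the beam's eye view (BEV).
    lsf_kernel: 2D lateral spread function from commissioning.
    cax_table: 1D central-axis dose lookup (numpy array), tabulated per
        mm of radiological depth.
    radiological_depth_mm: 3D array (z, y, x) of radiological depths from
        ray tracing; divergence correction is omitted here."""
    blurred = fftconvolve(fluence_map, lsf_kernel, mode="same")  # 2D BEV convolution
    idx = np.clip(radiological_depth_mm.astype(int), 0, len(cax_table) - 1)
    return cax_table[idx] * blurred[np.newaxis, :, :]  # broadcast over depth
```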

  5. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S can result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed-cone C/S: how to utilize tabulated kernels instead of analytical parametrizations, and how to deal with voxel size effects. Three methods that utilize tabulated kernels are presented; they differ in the effective kernel used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For the simulation tests, we used arbitrary kernels and various voxel sizes with a homogeneous phantom, assuming forward energy transport only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold-standard dose. The real dose calculations used a heterogeneous slab phantom and both the 'broad' (5 x 5 cm^2) and 'narrow' (1.2 x 1.2 cm^2) tomotherapy beams, with various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm). The results show that all three algorithms differ negligibly (0.1%) at the fine resolution (0.5 mm voxels), but the differences become significant as the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose; with the CCK algorithm, the corresponding differences are around 1% of the maximum dose. Among all three methods, the CCK algorithm
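
    The cumulative-kernel idea is easy to state in code: integrate the tabulated differential kernel once to get CK and twice to get CCK, then average the kernel over a whole ray step by differencing CCK. The sketch below assumes simple piecewise-constant tabulation and is only meant to illustrate why CCK suppresses voxel-size effects.

```python
import numpy as np

def cumulative_kernels(radii, dk):
    """CK and CCK from a tabulated differential kernel DK on an
    increasing radius grid (piecewise-constant integration)."""
    dr = np.diff(radii, prepend=0.0)
    ck = np.cumsum(dk * dr)    # CK(r): integral of DK from 0 to r
    cck = np.cumsum(ck * dr)   # CCK(r): integral of CK from 0 to r
    return ck, cck

def mean_ck_over_step(radii, cck, r0, r1):
    """Average CK over a ray step [r0, r1] via CCK differences; this
    voxel-averaged value is what keeps coarse grids accurate."""
    return (np.interp(r1, radii, cck) - np.interp(r0, radii, cck)) / (r1 - r0)
```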

  6. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in the Prowess treatment planning system for calculating symmetric and asymmetric field sizes

    Directory of Open Access Journals (Sweden)

    Tamer Dawod

    2015-01-01

    Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including inplane and crossplane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant errors (up to approximately 7%) in dose calculations if changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

  7. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse of IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write conflicts and linear computational complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs), as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by the order of one dimension while achieving excellent accuracy. Special attention was paid to the unique architecture of the GPU, especially its memory access patterns, which increased performance by more than tenfold. Results: The tabulated-kernel GPU implementation is two to three times faster than other GPU implementations reported in the literature. The CCCS implementation showed significant speedup on the GPU over a single-core CPU: speedups as high as 70x were observed with tabulated CCKs, and as high as 90x with exponential CCKs. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
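
    The complexity reduction from exponential kernels can be seen in a one-ray sketch: because exp(-a(r+dr)) = exp(-ar)exp(-a dr), the sum over all upstream TERMA values collapses into one recursively updated state, so a ray costs O(N) instead of O(N^2). This toy version ignores density scaling and the cumulative-cumulative tabulation used in the actual paper.

```python
import numpy as np

def ray_convolve_exponential(terma, dr, A, a):
    """Dose along one collapsed-cone ray for a kernel k(r) = A*exp(-a*r).
    The running state carries every upstream contribution, already
    attenuated to the current position, in a single number."""
    dose = np.zeros_like(terma)
    state = 0.0
    for i in range(len(terma)):
        state = state * np.exp(-a * dr[i]) + A * terma[i] * dr[i]
        dose[i] = state
    return dose
```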

  8. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for the calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches to dose calculation have been compared: a single-kernel superposition method, a single-kernel superposition method in which the point kernels are approximated as isotropic, and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized as biexponential functions, making them suitable for use with an effective implementation of the collapsed-cone superposition algorithm. (author)
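
    Fitting a biexponential to a Monte Carlo generated scatter kernel is a one-liner with SciPy; the sketch below uses synthetic kernel values purely for illustration (real kernels would come from EGS4-type simulations as in the paper).

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(r, a1, mu1, a2, mu2):
    """k(r) = a1*exp(-mu1*r) + a2*exp(-mu2*r), the parametrization that
    suits collapsed-cone superposition."""
    return a1 * np.exp(-mu1 * r) + a2 * np.exp(-mu2 * r)

r = np.linspace(0.5, 20.0, 40)                             # radii in cm
k_mc = 0.8 * np.exp(-0.3 * r) + 0.05 * np.exp(-0.05 * r)   # synthetic kernel
k_mc *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(r.size)

params, _ = curve_fit(biexponential, r, k_mc, p0=(1.0, 0.5, 0.1, 0.05))
print("fitted (a1, mu1, a2, mu2):", params)
```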

  9. Evaluation of dose prediction errors and optimization convergence errors of deliverable-based head-and-neck IMRT plans computed with a superposition/convolution dose algorithm

    International Nuclear Information System (INIS)

    Mihaylov, I. B.; Siebers, J. V.

    2008-01-01

    The purpose of this study is to evaluate dose prediction errors (DPEs) and optimization convergence errors (OCEs) resulting from use of a superposition/convolution dose calculation algorithm in deliverable intensity-modulated radiation therapy (IMRT) optimization for head-and-neck (HN) patients. Thirteen HN IMRT patient plans were retrospectively reoptimized. The IMRT optimization was performed in three sequential steps: (1) fast optimization in which an initial nondeliverable IMRT solution was achieved and then converted to multileaf collimator (MLC) leaf sequences; (2) mixed deliverable optimization that used a Monte Carlo (MC) algorithm to account for the incident photon fluence modulation by the MLC, whereas a superposition/convolution (SC) dose calculation algorithm was utilized for the patient dose calculations; and (3) MC deliverable-based optimization in which both fluence and patient dose calculations were performed with a MC algorithm. DPEs of the mixed method were quantified by evaluating the differences between the mixed optimization SC dose result and a MC dose recalculation of the mixed optimization solution. OCEs of the mixed method were quantified by evaluating the differences between the MC recalculation of the mixed optimization solution and the final MC optimization solution. The results were analyzed through dose volume indices derived from the cumulative dose-volume histograms for selected anatomic structures. Statistical equivalence tests were used to determine the significance of the DPEs and the OCEs. Furthermore, a correlation analysis between DPEs and OCEs was performed. The evaluated DPEs were within ±2.8% while the OCEs were within 5.5%, indicating that OCEs can be clinically significant even when DPEs are clinically insignificant. The full MC-dose-based optimization reduced normal tissue dose by as much as 8.5% compared with the mixed-method optimization results. The DPEs and the OCEs in the targets had correlation coefficients greater

  10. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A., E-mail: mcewen.24@osu.edu, E-mail: fang.307@osu.edu, E-mail: hirata.10@osu.edu, E-mail: blazek@berkeley.edu [Center for Cosmology and AstroParticle Physics, Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus OH 43210 (United States)

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Markov chain Monte Carlo parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
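
    The core trick, representing a log-sampled power spectrum as a superposition of complex power laws via the FFT, can be demonstrated in a few lines. This is a toy reconstruction check, not the FAST-PT API; for the real code see the repository linked above.

```python
import numpy as np

def powerlaw_decompose(k, P, nu=-2.0):
    """Coefficients c_m and frequencies eta_m such that, at the log-spaced
    sample points, P(k) = Re sum_m c_m * k**(nu + 1j*eta_m)."""
    N = len(k)
    delta = np.log(k[1] / k[0])          # uniform spacing in ln k
    m = np.fft.fftfreq(N) * N            # integer Fourier frequencies
    eta_m = 2.0 * np.pi * m / (N * delta)
    c_m = np.fft.fft(P * k ** (-nu)) / N
    return c_m * k[0] ** (-1j * eta_m), eta_m   # absorb the grid offset

k = np.logspace(-3, 1, 256)
P = k / (1.0 + k ** 2) ** 2              # toy power spectrum
c_m, eta_m = powerlaw_decompose(k, P)
P_rec = np.real(sum(c * k ** (-2.0 + 1j * e) for c, e in zip(c_m, eta_m)))
print(np.max(np.abs(P_rec / P - 1.0)))   # tiny reconstruction error
```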

  11. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    International Nuclear Information System (INIS)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.

    2009-01-01

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25x2.5 cm^2 field in a heterogeneous phantom consisting of two slabs of polystyrene separated by Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry with multiple slabs of various densities. Even in conditions of lateral electronic disequilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low-density medium. Finally, dose distributions were calculated with TomoPen and C/S for various clinical cases, from large bilateral head and neck tumors to small lung tumors less than 3 cm in diameter. To ensure a 'fair' comparison, identical dose calculation grids and dose-volume histogram calculators were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from the two calculations. However, deviations of up to 4% in the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly affect the accuracy for small lung tumors, even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  12. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S. [Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium); Facultat de Fisica (ECM), Universitat de Barcelona, Diagonal 647, 08028 Barcelona (Spain); Tomotherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 and Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Department of Radiotherapy, Saint-Luc University Hospital, Universite Catholique de Louvain, 10 Avenue Hippocrate, 1200 Brussels (Belgium)

    2009-05-15

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25x2.5 cm^2 field in a heterogeneous phantom consisting of two slabs of polystyrene separated by Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry with multiple slabs of various densities. Even in conditions of lateral electronic disequilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low-density medium. Finally, dose distributions were calculated with TomoPen and C/S for various clinical cases, from large bilateral head and neck tumors to small lung tumors less than 3 cm in diameter. To ensure a 'fair' comparison, identical dose calculation grids and dose-volume histogram calculators were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from the two calculations. However, deviations of up to 4% in the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly affect the accuracy for small lung tumors, even though the C/S algorithm of the tomotherapy system shows very good overall behavior.

  13. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Tajaldeen, A [RMIT University, Docklands, Vic (Australia)]; Ramachandran, P [Peter MacCallum Cancer Centre, Bendigo (Australia)]; Geso, M [RMIT University, Bundoora, Melbourne (Australia)]

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the 'same dose' method) and (ii) the same monitor units in all algorithms (the 'same monitor units' method), were used to study the performance of seven dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB and Pencil Beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters, including conformity, heterogeneity and dose fall-off indices. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, Pencil Beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters, while the Clarkson, FFT Convolution and Pencil Beam algorithms showed large differences compared to Superposition. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  14. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    International Nuclear Information System (INIS)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-01-01

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent stereotactic ablative body radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the 'same dose' method) and (ii) the same monitor units in all algorithms (the 'same monitor units' method), were used to study the performance of seven dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB and Pencil Beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters, including conformity, heterogeneity and dose fall-off indices. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, Pencil Beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters, while the Clarkson, FFT Convolution and Pencil Beam algorithms showed large differences compared to Superposition. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  15. Evaluation of dose calculation algorithms using the treatment planning system XiO with tissue heterogeneity correction turned on

    International Nuclear Information System (INIS)

    Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L.

    2011-01-01

    Since the cross-sections for the various radiation interactions depend on the tissue material, the presence of heterogeneities affects the final dose delivered. This paper analyzes how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (at the time of CT as well as irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high-density materials (difference ∼1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)

  16. Superposition of configurations in semiempirical calculation of iron group ion spectra

    International Nuclear Information System (INIS)

    Kantseryavichyus, A.Yu.; Ramonas, A.A.

    1976-01-01

    The energy spectra of iron-group ions in the d^N, d^N s and d^N p configurations are studied. A semiempirical method is used in which the effective Hamiltonian includes configuration superposition. The quasidegenerate configurations s d^(N+1) and p^4 d^(N+2), as well as configurations differing by one electron, are taken as correction configurations. It follows from the calculations that the most important role among the quasidegenerate configurations is played by the s d^(N+1) correction configuration; once it is taken into account, the introduction of the p^4 d^(N+2) correction configuration has practically no effect on the results. Accounting for the d^(N-1) s configuration in second-order perturbation theory is equivalent to accounting for s d^(N+1), in the sense that it results in an identical mean-square deviation. As follows from the comparison of the approximate and complete treatments of configuration superposition, in many cases one can be satisfied with the approximate version. The results are presented as tables of empirical parameters, radial integrals, mean-square errors, etc.

  17. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    Energy Technology Data Exchange (ETDEWEB)

    Neylon, J., E-mail: jneylon@mednet.ucla.edu; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A. [Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California 90095 (United States)]; Chen, Q. [Department of Radiation Oncology, University of Virginia, 1300 Jefferson Park Avenue, Charlottesville, Virginia 22908 (United States)]

    2014-10-15

    , respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cubic homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  18. A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures

    International Nuclear Information System (INIS)

    Neylon, J.; Sheng, K.; Yu, V.; Low, D. A.; Kupelian, P.; Santhanam, A.; Chen, Q.

    2014-01-01

    , respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cubic homogeneous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. Results: The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy, with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. Conclusions: The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.

  19. Evaluation of dose calculation algorithms using the treatment planning system XiO with tissue heterogeneity correction turned on; Validacao dos algoritmos de calculo de dose do sistema de planejamento XiO considerando as correcoes para heterogeneidade dos tecidos

    Energy Technology Data Exchange (ETDEWEB)

    Fairbanks, Leandro R.; Barbi, Gustavo L.; Silva, Wiliam T.; Reis, Eduardo G.F.; Borges, Leandro F.; Bertucci, Edenyse C.; Maciel, Marina F.; Amaral, Leonardo L., E-mail: lefairbanks@yahoo.com.b [Universidade de Sao Paulo (HCRP/USP), Ribeirao Preto, SP (Brazil). Hospital das Clinicas. Servico de Radioterapia

    2011-07-01

    Since the cross-sections for the various radiation interactions depend on the tissue material, the presence of heterogeneities affects the final dose delivered. This paper analyzes how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (at the time of CT as well as irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%. The Convolution method shows better results for high-density materials (difference approximately 1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%). (author)

  20. Engineering mesoscopic superpositions of superfluid flow

    International Nuclear Information System (INIS)

    Hallwood, D. W.; Brand, J.

    2011-01-01

    Modeling of strongly correlated atoms demonstrates the possibility of preparing quantum superpositions that are robust against experimental imperfections and temperature. Such superpositions of vortex states are formed by adiabatic manipulation of interacting ultracold atoms confined to a one-dimensional ring trapping potential when stirred by a barrier. Here, we discuss the influence of nonideal experimental procedures and finite temperature. Adiabaticity conditions for changing the stirring rate reveal that superpositions of many atoms are most easily accessed in the strongly interacting, Tonks-Girardeau, regime, which is also the most robust at finite temperature. NOON-type superpositions of weakly interacting atoms are most easily created by adiabatically decreasing the interaction strength by means of a Feshbach resonance. The quantum dynamics of small numbers of particles is simulated, and the size of the superpositions is calculated based on their ability to make precision measurements. The experimental creation of strongly correlated and NOON-type superpositions with about 100 atoms seems feasible in the near future.

  21. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    International Nuclear Information System (INIS)

    Faddegon, B.A.; Villarreal-Barajas, J.E.

    2005-01-01

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near-instantaneous calculation of the relative output factor (ROF) and central-axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique differs from conventional superposition of dose deposition kernels: the precalculated dose is differential in the position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in the electron applicator) is done by superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10x10, 2.5x2.5, and 2x8 cm^2 inserts. Dose was calculated to 0.5% precision in 0.4x0.4x0.2 cm^3 voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since the simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum
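
    The superposition step described above reduces to summing precalculated per-surface-pixel contributions according to the aperture shape. The sketch below is a schematic reading of that description with hypothetical array layouts, not the authors' implementation; the collimator effect is ignored, as in FAST itself.

```python
import numpy as np

def fast_dose(aperture_mask, d_open, d_shielded):
    """Final Aperture Superposition Technique, schematically.
    aperture_mask: 2D bool array over the insert plane (True = open).
    d_open, d_shielded: arrays of shape (ny, nx, ...) holding precalculated
        dose contributions differential in surface position, for the fully
        open and fully shielded final aperture."""
    open_part = d_open[aperture_mask].sum(axis=0)        # open region
    shut_part = d_shielded[~aperture_mask].sum(axis=0)   # shielded remainder
    return open_part + shut_part
```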

  22. Evaluation of dose calculation algorithms using the treatment planning system XiO with tissue heterogeneity correction turned on; Validacao dos algoritmos de calculo de dose do sistema de planejamento XiO considerando as correcoes para heterogeneidade dos tecidos

    Energy Technology Data Exchange (ETDEWEB)

    Fairbanks, L.R.; Barbi, G.L.; Silva, W.T. da; Reis, E.G.F. dos; Borges, L.F.; Bertucci, E.C.; Maciel, M.F.; Amaral, L.L. do, E-mail: lefairbanks@yahoo.com.b [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Hospital das Clinicas. Servico de Radioterapia

    2010-07-01

    Since the cross-sections for the various radiation interactions depend on the tissue material, the presence of heterogeneities affects the final dose delivered. This paper analyzes how different treatment planning algorithms (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson) perform when heterogeneity corrections are used. To that end, a Farmer-type ionization chamber was positioned reproducibly (at the time of CT as well as irradiation) inside several phantoms made of aluminum, bone, cork and solid water slabs. The percent difference between the dose measured and that calculated by the various algorithms was less than 5%, in accordance with the recommendations of several references. The Convolution method shows better results for high-density materials (difference approximately 1%), whereas the Superposition algorithm is more accurate for low densities (around 1.1%).

  23. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel on the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Moriya, S; Sato, M [Komazawa University, Setagaya, Tokyo (Japan); Tachibana, H [National Cancer Center Hospital East, Kashiwa, Chiba (Japan)

    2015-06-15

    Purpose: Calculation time is the trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPUs). Methods: The calculation was performed on dual AMD FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps, and the dose deposited at each coordinate (x, y, z) was determined in the process. In the dose calculation running on an Intel Xeon E5 central processing unit (CPU), the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and computed with multiple threads. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150^3 voxels (2 mm calculation grid), the calculation speed on the GPU was compared to that on the CPU, and the accuracy of the PDD was evaluated. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively; the GPU was 4800 times faster than the CPU. The PDD curve for the GPU matched that for the CPU perfectly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of calculation time and may be more accurate in inhomogeneous regions. Intensity-modulated arc therapy needs dose calculations for different gantry angles at many control points, so it would be more practical for the kernel to use a coarser spacing if that makes the calculation faster while keeping accuracy similar to a current treatment planning system.

  24. SU-F-T-377: Monte Carlo Re-Evaluation of Volumetric-Modulated Arc Plans of Advanced Stage Nasopharyngeal Cancers Optimized with Convolution-Superposition Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, K; Leung, R; Law, G; Wong, M; Lee, V; Tung, S; Cheung, S; Chan, M [Tuen Mun Hospital, Hong Kong (Hong Kong)

    2016-06-15

    Background: The commercial treatment planning system Pinnacle3 (Philips, Fitchburg, WI, USA) employs a convolution-superposition (CS) algorithm for volumetric-modulated arc radiotherapy (VMAT) optimization and dose calculation. Study of Monte Carlo (MC) dose recalculation of VMAT plans for advanced-stage nasopharyngeal cancers (NPC) is currently limited. Methods: Twenty-nine VMAT plans prescribing 70 Gy, 60 Gy, and 54 Gy to the planning target volumes (PTVs) were included. These clinical plans, achieved with the CS dose engine in Pinnacle3 v9.0, were recalculated with the Monaco TPS v5.0 (Elekta, Maryland Heights, MO, USA) using an XVMC-based MC dose engine. The MC virtual source model was built using the same measured beam dataset as the Pinnacle beam model. All MC recalculations were based on absorbed dose to medium in medium (Dm,m). Differences in the dose constraint parameters of our institutional protocol (Supplementary Table 1) were analyzed. Results: Only the differences in maximum dose to the left brachial plexus, left temporal lobe and PTV54Gy were statistically insignificant (p>0.05). Dosimetric differences for the other tumor targets and normal organs are given in Supplementary Table 1. Generally, doses outside the PTV in the normal organs are lower with MC than with CS. This is also true for the PTV54-70Gy doses, but a higher dose in the nasal cavity near the bone interfaces is consistently predicted by MC, possibly due to increased backscattering of short-range scattered photons and secondary electrons that is not properly modeled by the CS algorithm. The straight shoulders of the PTV dose-volume histograms (DVHs) initially obtained from the CS optimization are merely preserved after MC recalculation. Conclusion: Significant dosimetric differences in VMAT NPC plans were observed between CS and MC calculations. Adjustments of the planning dose constraints to incorporate the physics differences from the conventional CS algorithm should be made when VMAT optimization is carried out directly

  25. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate agreement between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3 mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms, primarily because Monte Carlo algorithms implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  26. Quantification of the influence of the choice of the algorithm and planning system on the calculation of a treatment plan

    International Nuclear Information System (INIS)

    Moral, F. del; Ramos, A.; Salgado, M.; Andrade, B; Munoz, V.

    2010-01-01

    In this work, an analysis of the influence of the choice of algorithm or planning system on the calculation of the same treatment plan is presented. For this purpose, specific software was developed for comparing plans for a series of IMRT cases of prostate and head and neck cancer calculated using the convolution, superposition and fast superposition algorithms implemented in the XiO 4.40 planning system (CMS). It was also used to compare the same treatment plan for a lung pathology calculated in XiO with the aforementioned algorithms and calculated in the iPlan 4.1 planning system (Brainlab) using its pencil beam algorithm. Differences in dose among the treatment plans were quantified using a set of metrics. The recommendation that the dosimetrist choose the algorithm carefully has been numerically confirmed. (Author).

  27. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance codes with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm with large constraint length (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (the number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the number of states. The QVA constructs a superposition of states corresponding to all legal paths through the decoding lattice, with phase a function of the probability of the path being taken given the received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition in which the most probable path has a high probability of being measured.

  28. Optimal simultaneous superpositioning of multiple structures with missing data.

    Science.gov (United States)

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
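
    A schematic NumPy version of the EM idea, impute missing points from the current mean, then re-superpose with an ordinary least-squares (Kabsch) fit, is shown below. THESEUS's maximum-likelihood estimator is considerably more sophisticated, so treat this purely as an illustration of the missing-data strategy.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q."""
    pc, qc = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - pc).T @ (Q - qc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

def em_superpose(structures, masks, n_iter=50):
    """structures: list of (N, 3) arrays; masks: (N,) bool, False = missing.
    E-step imputes missing rows from the current mean structure;
    M-step superposes each structure onto the mean and updates it."""
    X = []
    for s, m in zip(structures, masks):
        x = s.copy()
        x[~m] = s[m].mean(axis=0)   # crude initial imputation
        X.append(x)
    mean = np.mean(X, axis=0)
    for _ in range(n_iter):
        for x, m in zip(X, masks):
            x[~m] = mean[~m]                 # E-step
            R, t = kabsch(x, mean)           # M-step
            x[:] = x @ R.T + t
        mean = np.mean(X, axis=0)
    return X, mean
```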

  29. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes at different scales show different characteristics. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out multi-scale classification experiments, taking the Shangri-La area in the northwestern Yunnan province as the research area and images from SPOT5 HRG and the GF-1 satellite as data sources. First, the authors upscaled the two images by cubic convolution and calculated the optimal scale for the different ground objects shown in the images using variogram functions. Then the authors conducted multi-scale superposition classification by maximum likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is larger than the original one. Specifically, water has the largest optimal scale, around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scale for shadow and flooded land is basically the same as the original, i.e. 8 m and 10 m, respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy for SPOT5 HRG and GF-1 is 12.84% and 14.76% higher, respectively, than that of the original multi-spectral images, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.

  30. Dealiased convolutions for pseudospectral simulations

    International Nuclear Information System (INIS)

    Roberts, Malcolm; Bowman, John C

    2011-01-01

    Efficient algorithms have recently been developed for calculating dealiased linear convolution sums without the expense of conventional zero-padding or phase-shift techniques. For one-dimensional in-place convolutions, the memory requirements are identical to those of the zero-padding technique, with the important distinction that the additional work memory need not be contiguous with the input data. This decoupling of data and work arrays dramatically reduces the memory and computation time required to evaluate higher-dimensional in-place convolutions. The memory savings are achieved by computing the in-place Fourier transform of the data in blocks, rather than all at once. The technique also allows one to dealias the n-ary convolutions that arise on Fourier transforming cubic and higher powers. Implicitly dealiased convolutions can be built on top of state-of-the-art adaptive fast Fourier transform libraries like FFTW. Vectorized multidimensional implementations for the complex and centered Hermitian (pseudospectral) cases have already been implemented in the open-source software FFTW++. With the advent of this library, writing a high-performance dealiased pseudospectral code for solving nonlinear partial differential equations has become a relatively straightforward exercise. New theoretical estimates of computational complexity and memory use are provided, including corrected timing results for 3D pruned convolutions and further consideration of higher-order convolutions.
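
    For reference, the conventional zero-padding technique that implicit dealiasing improves upon looks like this in NumPy: pad both inputs to at least 2N-1 points so the circular convolution equals the linear one. Implicit dealiasing obtains the same result while computing the padded transforms in blocks, without contiguous padded arrays.

```python
import numpy as np

def dealiased_convolution(f, g):
    """Linear convolution of two length-N sequences via explicit
    zero-padding to 2N points, which removes all aliased terms."""
    N = len(f)
    F = np.fft.fft(f, n=2 * N)
    G = np.fft.fft(g, n=2 * N)
    return np.fft.ifft(F * G)[:N]

f = np.random.default_rng(1).standard_normal(8)
g = np.random.default_rng(2).standard_normal(8)
assert np.allclose(dealiased_convolution(f, g).real, np.convolve(f, g)[:8])
```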

  31. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering]; Busch, J S [Kaiser Engineers, Oakland, CA (USA)]

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effect of multiple canisters in a representative storage pattern can be established at selected points of interest in the medium by superposition of the temperature rises calculated for each canister. A mathematical solution for each separate source is given in this article, permitting a slow hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
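
    The superposition itself is straightforward to reproduce numerically. The sketch below integrates the continuous point-source conduction solution along each finite line source and sums the rises over a canister array; the material constants and layout are made up for illustration, and constant source strength is assumed (the decaying nuclear sources treated by FLLSSM would also require superposition in time).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def line_source_rise(r, z, t, L=3.0, q=200.0, lam=2.5, alpha=1.1e-6):
    """Temperature rise at radius r (m) and height z (m) after time t (s)
    from a finite line source of constant strength q (W/m) running from
    z=0 to z=L in an infinite medium with conductivity lam (W/m/K) and
    diffusivity alpha (m^2/s)."""
    def integrand(zp):
        R = np.hypot(r, z - zp)
        return erfc(R / (2.0 * np.sqrt(alpha * t))) / R
    val, _ = quad(integrand, 0.0, L)
    return q / (4.0 * np.pi * lam) * val

def repository_rise(x, y, z, t, canister_xy):
    """Superpose the rises of all canisters at positions canister_xy."""
    return sum(line_source_rise(np.hypot(x - cx, y - cy), z, t)
               for cx, cy in canister_xy)

canisters = [(i * 10.0, j * 10.0) for i in range(3) for j in range(3)]
t = 30 * 365.25 * 24 * 3600.0   # 30 years in seconds
print(repository_rise(5.0, 5.0, 1.5, t, canisters))
```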

  32. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by the Science Challenge Project under Grant No. TZ2016002; the Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing; and the Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182.

  13. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of)]; Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary's Hospital, Seoul (Korea, Republic of)]

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and has irregular contours, it is not easy to calculate the effective dose in the patient accurately. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are too time-consuming for routine planning. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system by comparison with Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for dose calculation.
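
    The core of a pencil beam kernel calculation is a planar convolution of the incident fluence with a kernel. The sketch below is generic: the Gaussian kernel is a stand-in for the measured or Monte Carlo derived kernels used by systems such as Helax-TMS, which are depth- and energy-dependent:

        import numpy as np
        from scipy.signal import fftconvolve

        def pencil_beam_dose(fluence, sigma_mm, pixel_mm=1.0, half_width=25):
            """Dose on one plane as fluence convolved with a pencil kernel (toy model)."""
            ax = np.arange(-half_width, half_width + 1) * pixel_mm
            xx, yy = np.meshgrid(ax, ax)
            kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_mm**2))
            kernel /= kernel.sum()                      # normalise deposited energy
            return fftconvolve(fluence, kernel, mode="same")

        # usage: a 10 x 10 cm open field on a 1 mm grid
        field = np.zeros((256, 256))
        field[78:178, 78:178] = 1.0
        dose = pencil_beam_dose(field, sigma_mm=4.0)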

  14. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    Science.gov (United States)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  15. Superposition and macroscopic observation

    International Nuclear Information System (INIS)

    Cartwright, N.D.

    1976-01-01

    The principle of superposition has long plagued the quantum mechanics of macroscopic bodies. In at least one well-known situation - that of measurement - quantum mechanics predicts a superposition. It is customary to try to reconcile macroscopic reality and quantum mechanics by reducing the superposition to a mixture. To establish consistency with quantum mechanics, values for the apparatus after a measurement are to be distributed in the way predicted by the superposition. The distributions observed, however, are those of the mixture. The statistical predictions of quantum mechanics, it appears, are not borne out by observation in macroscopic situations. It has been shown that, insofar as specific ergodic hypotheses apply to the apparatus after the interaction, the superposition which evolves is experimentally indistinguishable from the corresponding mixture. In this paper an idealized model of the measuring situation is presented in which this consistency can be demonstrated. It includes a simplified version of the measurement solution proposed by Daneri, Loinger, and Prosperi (1962). The model should make clear the kind of statistical evidence required to carry off this approach, and the role of the ergodic hypotheses assumed. (Auth.)

  16. Generating superpositions of higher-order Bessel beams [Conference paper]

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-10-01

    Full Text Available An experimental setup to generate a superposition of higher-order Bessel beams by means of a spatial light modulator and a ring aperture is presented. The experimentally produced fields are in good agreement with those calculated theoretically.
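
    The calculated fields referred to are superpositions of Bessel modes. A minimal computation of the transverse intensity for two beams of opposite azimuthal order, which produces the characteristic petal pattern, can be written as follows (the order and radial wavenumber are arbitrary choices, not the paper's parameters):

        import numpy as np
        from scipy.special import jv

        l, kr = 3, 30.0                          # azimuthal order and radial wavenumber (assumed)
        x = np.linspace(-1.0, 1.0, 512)
        X, Y = np.meshgrid(x, x)
        R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
        field = (jv(l, kr * R) * np.exp(1j * l * PHI)
                 + jv(-l, kr * R) * np.exp(-1j * l * PHI))
        intensity = np.abs(field) ** 2           # shows 2|l| azimuthal petals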

  17. Quantification of the influence of the choice of the algorithm and planning system on the calculation of a treatment plan

    Energy Technology Data Exchange (ETDEWEB)

    Moral, F. del; Ramos, A.; Salgado, M.; Andrade, B.; Munoz, V.

    2010-07-01

    In this work an analysis of the influence of the choice of algorithm and planning system on the calculation of the same treatment plan is presented. For this purpose, specific software has been developed for comparing plans for a series of IMRT cases of prostate and head and neck cancer calculated using the convolution, superposition and fast superposition algorithms implemented in the XiO 4.40 planning system (CMS). It has also been used to compare the same treatment plan for lung pathology calculated in XiO with the mentioned algorithms, and calculated in the Plan 4.1 planning system (Brainlab) using its pencil beam algorithm. Differences in dose among the treatment plans have been quantified using a set of metrics. The recommendation that the algorithm be chosen carefully for clinical dosimetry has been numerically confirmed. (Author).
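
    The study's comparison software is not public, so the metric set below is only a generic illustration of how such dose differences are typically quantified on a common grid, with voxels far below the maximum excluded:

        import numpy as np

        def dose_difference_metrics(d_ref, d_test, threshold=0.1):
            """Percentage differences over voxels above a dose threshold (illustrative)."""
            mask = d_ref > threshold * d_ref.max()      # ignore very-low-dose voxels
            diff = 100.0 * (d_test[mask] - d_ref[mask]) / d_ref.max()
            return {"mean_%": diff.mean(),
                    "max_%": np.abs(diff).max(),
                    "rms_%": np.sqrt((diff ** 2).mean())}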

  18. Fast Convolution Module

    National Research Council Canada - National Science Library

    Bierens, L

    1997-01-01

    This report describes the design and realisation of a real-time range azimuth compression module, the so-called 'Fast Convolution Module', based on the fast convolution algorithm developed at TNO-FEL...
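
    The fast convolution algorithm referred to is, in its basic form, convolution implemented as FFT, pointwise multiplication, and inverse FFT. A generic sketch (not the TNO-FEL module) applied to range compression looks like this:

        import numpy as np

        def fast_convolve(signal, kernel):
            """Linear convolution via the FFT; output length n = len(signal)+len(kernel)-1."""
            n = signal.size + kernel.size - 1
            nfft = 1 << (n - 1).bit_length()            # next power of two
            out = np.fft.ifft(np.fft.fft(signal, nfft) * np.fft.fft(kernel, nfft))
            return out[:n]                              # complex; take .real for real inputs

        # range compression = fast convolution of the echo with the matched filter,
        # i.e. the time-reversed complex conjugate of the transmitted chirp:
        # compressed = fast_convolve(echo, np.conj(chirp[::-1]))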

  19. Efficient forward propagation of time-sequences in convolutional neural networks using Deep Shifting

    NARCIS (Netherlands)

    K.L. Groenland (Koen); S.M. Bohte (Sander)

    2016-01-01

    When a Convolutional Neural Network is used for on-the-fly evaluation of continuously updating time-sequences, many redundant convolution operations are performed. We propose the method of Deep Shifting, which remembers previously calculated results of convolution operations in order to avoid recomputing them.

  20. An Improved Convolutional Neural Network on Crowd Density Estimation

    Directory of Open Access Journals (Sweden)

    Pan Shao-Yun

    2016-01-01

    Full Text Available In this paper, a new method is proposed for crowd density estimation. An improved convolutional neural network is combined with traditional texture features. The data calculated by the convolutional layer can be treated as a new kind of feature, so more useful image information can be extracted by combining different features. In the meantime, the size of the image has little effect on the result of the convolutional neural network. Experimental results indicate that our scheme has adequate performance to allow for its use in real world applications.

  1. Limitations of a convolution method for modeling geometric uncertainties in radiation therapy. I. The effect of shift invariance

    International Nuclear Information System (INIS)

    Craig, Tim; Battista, Jerry; Van Dyk, Jake

    2003-01-01

    Convolution methods have been used to model the effect of geometric uncertainties on dose delivery in radiation therapy. Convolution assumes shift invariance of the dose distribution. Internal inhomogeneities and surface curvature lead to violations of this assumption. The magnitude of the error resulting from violation of shift invariance is not well documented. This issue is addressed by comparing dose distributions calculated using the Convolution method with dose distributions obtained by Direct Simulation. A comparison of conventional Static dose distributions was also made with Direct Simulation. This analysis was performed for phantom geometries and several clinical tumor sites. A modification to the Convolution method to correct for some of the inherent errors is proposed and tested using example phantoms and patients. We refer to this modified method as the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over different beam arrangements in the various phantom examples) was 21% with the Static dose calculation, 9% with Convolution, and 5% with the Corrected Convolution. The average maximum dose error in the calculated volume (averaged over four clinical examples) was 9% for the Static method, 13% for Convolution, and 3% for Corrected Convolution. While Convolution can provide a superior estimate of the dose delivered when geometric uncertainties are present, the violation of shift invariance can result in substantial errors near the surface of the patient. The proposed Corrected Convolution modification reduces errors near the surface to 3% or less.
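
    The Convolution method under discussion amounts to blurring the static dose with a Gaussian whose variance is the sum of the organ-motion and set-up variances; shift invariance is exactly the assumption this makes. A minimal sketch:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def convolved_dose(static_dose, sigma_motion_vox, sigma_setup_vox):
            """Blur a static dose grid with the combined geometric-uncertainty kernel."""
            sigma = np.sqrt(sigma_motion_vox**2 + sigma_setup_vox**2)  # variances add
            return gaussian_filter(static_dose, sigma=sigma)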

  2. Linear superposition solutions to nonlinear wave equations

    International Nuclear Information System (INIS)

    Liu Yu

    2012-01-01

    The solutions to a linear wave equation can satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, trigonometric, and exponential functions, and that suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Olver water wave equation, and the K(n, n) equation are given. The structural characteristic of the nonlinear wave equations having linear superposition solutions is analyzed, and the reason why solutions with the forms of hyperbolic, trigonometric, and exponential functions can form linear superposition solutions is also discussed.
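
    As a reminder of why superposition is ordinarily special to linear equations, consider the one-dimensional linear wave equation; the check below uses only linearity and is a generic illustration, not a result of the article:

        \[
          \partial_t^2 (a u_1 + b u_2) - c^2 \partial_x^2 (a u_1 + b u_2)
          = a\,(\partial_t^2 u_1 - c^2 \partial_x^2 u_1)
          + b\,(\partial_t^2 u_2 - c^2 \partial_x^2 u_2) = 0 .
        \]

    A nonlinear term N(u) generally breaks this, since N(au_1 + bu_2) differs from aN(u_1) + bN(u_2); the article's observation is that for equations of special structure the cross terms can still vanish on particular families of hyperbolic, trigonometric, and exponential solutions.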

  3. Use of the modal superposition technique for piping system blowdown analyses

    International Nuclear Information System (INIS)

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect on the calculated response due to higher modes is generally small, and the method can result in considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well defined cutoff frequency for determining structural modes to be included has not been established. This paper outlines a method for higher mode corrections, and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results
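
    In skeletal form, modal superposition expands the response in a truncated set of mass-normalised mode shapes and integrates each modal equation separately. The sketch below is illustrative only (names and interfaces are ours), and choosing how many modes to retain is precisely the cutoff-frequency issue the paper addresses:

        import numpy as np
        from scipy.integrate import odeint

        def modal_response(phi, omega, zeta, force_fn, t, n_modes):
            """x(t) ~ sum_i phi[:, i] * q_i(t) over the retained modes.
            phi: (ndof, nmodes) mass-normalised mode shapes; omega: rad/s."""
            x = np.zeros((t.size, phi.shape[0]))
            for i in range(n_modes):
                def rhs(y, ti):
                    q, qd = y
                    f = phi[:, i] @ force_fn(ti)        # modal force
                    return [qd, f - 2.0*zeta*omega[i]*qd - omega[i]**2 * q]
                q = odeint(rhs, [0.0, 0.0], t)[:, 0]
                x += np.outer(q, phi[:, i])
            return x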

  4. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
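
    The idea reduces to caching: an accurate engine computes one kernel per beam up front, after which each optimization iteration evaluates dose as a cheap weighted sum, with occasional refreshes to bound the approximation error. The class below is a schematic rendering under those assumptions, not the paper's implementation:

        import numpy as np

        class BeamDoseLookup:
            def __init__(self, accurate_engine, beams):
                # expensive: one approximate dose kernel per beam, computed once
                self.kernels = [accurate_engine(b) for b in beams]   # each a 3-D array

            def dose(self, weights):
                # cheap: called thousands of times inside the IMRT optimization loop
                return sum(w * k for w, k in zip(weights, self.kernels))

            def refresh(self, accurate_engine, beams):
                # scheduled kernel update to limit accumulated inaccuracy
                self.kernels = [accurate_engine(b) for b in beams]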

  5. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field: two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.
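
    The textbook starting point for this material is the rate-1/2 encoder with octal generators (7,5), on which decoders such as Viterbi and BCJR are usually first demonstrated; a compact sketch:

        def conv_encode(bits, g1=0b111, g2=0b101, k=3):
            """Rate-1/2 convolutional encoder: two parity streams from one shift register."""
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)   # shift in the new bit
                out += [bin(state & g1).count("1") & 1,        # parity of taps g1
                        bin(state & g2).count("1") & 1]        # parity of taps g2
            return out

        # conv_encode([1, 0, 1, 1]) -> [1, 1, 1, 0, 0, 0, 0, 1]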

  6. A superposition principle in quantum logics

    International Nuclear Information System (INIS)

    Pulmannova, S.

    1976-01-01

    A new definition of the superposition principle in quantum logics is given which enables us to define the sectors. It is shown that the superposition principle holds only in the irreducible quantum logics. (orig.) [de]

  7. Super-Monte Carlo: a combined approach to x-ray beam planning

    International Nuclear Information System (INIS)

    Keall, P.; Hoban, P.

    1996-01-01

    A new accurate 3-D radiotherapy dose calculation algorithm, Super-Monte Carlo (SMC), has been developed which combines elements of both superposition/convolution and Monte Carlo methods. Currently used clinical dose calculation algorithms (except those based on the superposition method) can have errors of over 10%, especially where significant density inhomogeneities exist, such as in the head and neck and lung regions. Errors of this magnitude can cause significant departures in the tumour control probability of the actual treatment. (author)

  8. Influence on dose calculation of differences between dose calculation algorithms in stereotactic lung irradiation: comparison of pencil beam convolution (inhomogeneity correction: Batho power law) and the analytical anisotropic algorithm

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    In this stereotactic lung irradiation study, the monitor units (MU) were calculated with pencil beam convolution using the Batho power law inhomogeneity correction [PBC (BPL)], a dose calculation algorithm based on measured data. The recalculation was done with the analytical anisotropic algorithm (AAA), a dose calculation algorithm based on theoretical data. The MU calculated by PBC (BPL) and AAA were compared for each field. In a comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. The difference depends on whether the calculation accounts for the lateral transport of secondary electrons, and it is influenced in particular by the X-ray energy. For the same X-ray energy, the difference in MU increases when the irradiation field size is small, the lung path length is long, the percentage of the path through lung is large, and the CT value of the lung is low. (author)

  9. PL-1 program system for generalized Patterson superpositions. [PL1GEN, SYMPL1, and ALSPL1, in PL/1 for IBM 360/65 computer]

    Energy Technology Data Exchange (ETDEWEB)

    Hubbard, C.R.; Babich, M.W.; Jacobson, R.A.

    1977-01-01

    A new system of three programs written in PL/1 can calculate symmetry and Patterson superposition maps for triclinic, monoclinic, and orthorhombic space groups, as well as any space group reducible to one of these three. These programs are based on a system of FORTRAN programs developed at Ames Laboratory, but are more general and have expanded utility, especially with regard to large unit cells. The program PL1GEN calculates a direct access data set, SYMPL1 calculates a direct access symmetry map, and ALSPL1 calculates a superposition map using one or multiple superpositions. A detailed description of the use of these programs, including symbolic program listings, is included. 2 tables.

  10. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    International Nuclear Information System (INIS)

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-01-01

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N are constructed via the Zakharov-Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  11. Theoretical calculation on ICI reduction using digital coherent superposition of optical OFDM subcarrier pairs in the presence of laser phase noise.

    Science.gov (United States)

    Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun

    2014-12-15

    Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation of the ICI reduction using a pure Wiener phase-noise model. By Taylor expansion of the ICI, we show that the ICI power is cancelled to second order by DCS. The fourth-order term is further derived and depends only on the ratio of the laser linewidth to the OFDM subcarrier symbol rate, which can greatly simplify the system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict the system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.

  12. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-01-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10×10, 5×5, and 2×2 cm²) were studied in two phantom configurations with a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2×2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values within

  13. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    Science.gov (United States)

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10×10, 5×5, and 2×2 cm²) were studied in two phantom configurations with a bone-equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2×2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  14. On the superposition principle and its physics content

    International Nuclear Information System (INIS)

    Roos, M.

    1984-01-01

    What is commonly denoted the superposition principle is shown to consist of three different physical assumptions: conservation of probability, completeness, and some phase conditions. The phase conditions constitute the real physical content of the superposition principle. These phase conditions are exemplified by the Kobayashi-Maskawa matrix. Some suggestions for testing the superposition principle are given. (Auth.)

  15. Exclusion of identification by negative superposition

    Directory of Open Access Journals (Sweden)

    Takač, Šandor

    2012-01-01

    Full Text Available The paper represents the first report of negative superposition in our country. A photograph of a randomly selected young living woman was superimposed on a previously discovered female skull. The computer program Adobe Photoshop 7.0 was used. The digitized photographs of the skull and face were uploaded to a computer, superimposed on each other, and displayed on the monitor in order to assess their possible similarities or differences. Special attention was paid to matching the same anthropometric points of the skull and face, as well as to following their contours. The process of fitting the skull and the photograph is usually started by setting the eyes in the correct position relative to the orbits. In this case, the gonions of the lower jaw extend beyond the face contour and the gnathion is placed too high. Positioning the chin, mouth, and nose in their correct anatomical locations cannot be achieved. All the difficulties associated with the superposition were recorded, with special emphasis on critical evaluation of the results of a negative superposition. Negative superposition has greater probative value (exclusion of identification) than positive superposition (possible identification). A 100% negative superposition is easily achieved, but a 100% positive one almost never. 'Each skull is unique and viewed from different perspectives is always a new challenge'. From this point of view, identification can be negative or of high probability.

  16. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    Science.gov (United States)

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
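
    The heart of such a layer is the structured hinge loss. Below is a drastically simplified rendering, with candidate structures enumerated explicitly (real deformable-part-model inference uses dynamic programming) and all names ours:

        import numpy as np

        def ssvm_layer(scores, losses, y_true):
            """scores: network score per candidate structure; losses: task loss of each
            candidate w.r.t. ground truth. Returns hinge loss and its gradient in scores."""
            y_hat = int(np.argmax(scores + losses))        # loss-augmented inference (forward)
            hinge = max(0.0, scores[y_hat] + losses[y_hat] - scores[y_true])
            grad = np.zeros_like(scores)                   # backward: subgradient
            if hinge > 0.0:
                grad[y_hat] += 1.0
                grad[y_true] -= 1.0
            return hinge, grad

    The gradient in the candidate scores is what would then be backpropagated into the convolutional layers beneath.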

  17. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    Full text: In convolution based profile fitting, profiles are generated by convoluting functions together to form the observed profile shape. For a convolution of n functions this process can be written as Y(2θ) = F1(2θ) ∗ F2(2θ) ∗ ... ∗ Fi(2θ) ∗ ... ∗ Fn(2θ). In powder diffractometry the functions Fi(2θ) can be interpreted as the aberration functions of the diffractometer, but in general any combination of appropriate functions for Fi(2θ) may be used in this context. Most direct convolution fitting methods are restricted to combinations of Fi(2θ) that can be convoluted analytically (e.g. GSAS), such as Lorentzians, Gaussians, the hat (impulse) function and the exponential function. However, software such as TOPAS is now available that can accurately convolute and refine a wide variety of profile shapes numerically, including user defined profiles, without the need to convolute analytically. Some of the most important advantages of modern convolution based profile fitting are: 1) virtually any peak shape and angle dependence can normally be described using minimal profile parameters in laboratory and synchrotron X-ray data as well as in CW and TOF neutron data. This is possible because numerical convolution and numerical differentiation are used within the refinement procedure so that a wide range of functions can easily be incorporated into the convolution equation; 2) it can use physically based diffractometer models by convoluting the instrument aberration functions. This can be done for most laboratory based X-ray powder diffractometer configurations including conventional divergent beam instruments, parallel beam instruments, and diffractometers used for asymmetric diffraction. It can also accommodate various optical elements (e.g. multilayers and monochromators) and detector systems (e.g. point and position sensitive detectors) and has already been applied to neutron powder diffraction systems (e.g. ANSTO) as well as synchrotron based
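
    A toy version of the numerical route makes the idea concrete: the observed peak is built by convolving component functions (here one Gaussian and one Lorentzian, giving a Voigt-like shape) rather than by fitting a single analytic profile; a refinement engine would then adjust the component parameters. This is a generic sketch, not TOPAS:

        import numpy as np

        def voigt_like(two_theta, centre, fwhm_g, fwhm_l):
            """Numerical convolution of a Gaussian with a Lorentzian on the data grid."""
            step = two_theta[1] - two_theta[0]
            t = two_theta - centre
            sig = fwhm_g / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            gauss = np.exp(-0.5 * (t / sig) ** 2)
            n = int(5.0 * fwhm_l / step)                  # symmetric, odd-length kernel
            tk = np.arange(-n, n + 1) * step
            lorentz = 1.0 / (1.0 + (2.0 * tk / fwhm_l) ** 2)
            y = np.convolve(gauss, lorentz, mode="same")  # no peak shift for a symmetric kernel
            return y / y.max()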

  18. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  19. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    Energy Technology Data Exchange (ETDEWEB)

    Dubrovsky, V. G.; Topovsky, A. V. [Novosibirsk State Technical University, Karl Marx prosp. 20, Novosibirsk 630092 (Russian Federation)

    2013-03-15

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N are constructed via the Zakharov-Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  20. Validation of a dose-point kernel convolution technique for internal dosimetry

    International Nuclear Information System (INIS)

    Giap, H.B.; Macey, D.J.; Bayouth, J.E.; Boyer, A.L.

    1995-01-01

    The objective of this study was to validate a dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide ¹³¹I. A dose-point kernel for the penetrating radiations was calculated by Monte Carlo simulation and cast in a 3D rectangular matrix. This matrix was convolved with the 3D activity map furnished by quantitative single-photon-emission computed tomography (SPECT) to provide a 3D distribution of absorbed dose. The convolution calculation was performed using a 3D fast Fourier transform (FFT) technique, which takes less than 40 s for a 128 × 128 × 16 matrix on an Intel 486 DX2 (66 MHz) personal computer. The calculated photon absorbed dose was compared with values measured by thermoluminescent dosimeters (TLDs) inserted along the diameter of a 22 cm diameter annular source of ¹³¹I. The mean and standard deviation of the percentage difference between the measurements and the calculations were equal to -1% and 3.6%, respectively. This convolution method was also used to calculate the 3D dose distribution in an Alderson abdominal phantom containing a liver, a spleen, and a spherical tumour volume loaded with various concentrations of ¹³¹I. By averaging the dose calculated throughout the liver, spleen, and tumour, the dose-point kernel approach was compared with values derived using the MIRD formalism, and found to agree to better than 15%. (author)
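
    In outline, the technique is a single 3D FFT convolution of the SPECT activity map with the dose-point kernel. The sketch below assumes both arrays share one grid with the kernel centred in its array; real ¹³¹I kernels come from Monte Carlo simulation, and the activity should be padded away from the volume edges to avoid wrap-around:

        import numpy as np

        def dose_from_activity(activity, kernel):
            """3D absorbed-dose grid as activity convolved with a dose-point kernel."""
            K = np.fft.fftn(np.fft.ifftshift(kernel), s=activity.shape)
            return np.real(np.fft.ifftn(np.fft.fftn(activity) * K))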

  1. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine

  2. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
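
    For contrast, the conventional ordinary least-squares superposition that THESEUS improves upon can be written in a few lines using the Kabsch/SVD algorithm; note that it weights all atoms equally, which is precisely the limitation the ML criterion addresses:

        import numpy as np

        def kabsch_superpose(P, Q):
            """Rotate and translate N x 3 coordinates P onto Q, minimising RMSD."""
            Pc, Qc = P - P.mean(0), Q - Q.mean(0)
            U, S, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against improper rotations
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return Pc @ R.T + Q.mean(0)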

  3. Non-coaxial superposition of vector vortex beams.

    Science.gov (United States)

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon the spatial variation of their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for the ultrahigh security of polarization-encrypted data utilizing vector vortex beams and for multiple optical trapping with the non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  4. Photon beam modelling with Pinnacle3 Treatment Planning System for a Rokus M Co-60 Machine

    International Nuclear Information System (INIS)

    Dulcescu, Mihaela; Murgulet, Cristian

    2008-01-01

    The basic relationships of the convolution/superposition dose calculation technique are reviewed, and a modelling technique that can be used for obtaining a satisfactory beam model for a commercially available convolution/superposition-based treatment planning system is described. A fluence energy spectrum for a Co-60 treatment machine obtained from a Monte Carlo simulation was used for modelling the fluence spectrum for a Rokus M machine. In order to achieve this model we measured the depth dose distribution and the dose profiles with a Wellhofer dosimetry system. The primary fluence was iteratively modelled by comparing the computed depth dose curves and beam profiles with the depth dose curves and crossbeam profiles measured in a water phantom. The objective of beam modelling is to build a model of the primary fluence that the patient is exposed to, which can then be used for the calculation of the dose deposited in the patient. (authors)

  5. Projective measurement onto arbitrary superposition of weak coherent state bases

    DEFF Research Database (Denmark)

    Izumi, Shuro; Takeoka, Masahiro; Wakui, Kentaro

    2018-01-01

    One of the peculiar features of quantum mechanics is that a superposition of macroscopically distinct states can exist. In optical systems, this is highlighted by a superposition of coherent states (SCS), i.e. a superposition of classical states. Recently this highly nontrivial quantum state and i...

  6. Nuclear grade cable thermal life model by time temperature superposition algorithm based on Matlab GUI

    International Nuclear Information System (INIS)

    Lu Yanyun; Gu Shenjie; Lou Tianyang

    2014-01-01

    Background: As nuclear grade cable must endure a harsh environment within its design life, it is critical to predict cable thermal life accurately, since thermal aging is one of the dominant aging mechanisms. Purpose: Using the time temperature superposition (TTS) method, the aim is to construct a nuclear grade cable thermal life model, predict cable residual life, and develop an interactive interface for the life model under the Matlab GUI. Methods: According to TTS, the nuclear grade cable thermal life model can be constructed by shifting data groups at various temperatures to a preset reference temperature with a translation factor determined by nonlinear programming optimization. The interactive interface of the cable thermal life model developed under the Matlab GUI consists of a superposition mode and a standard mode, which include features such as optimization of the translation factor, calculation of the activation energy, construction of the thermal aging curve, and analysis of the aging mechanism. Results: Comparing the calculation results of the superposition and standard methods, the result obtained with TTS has better accuracy than that obtained with the standard method. Furthermore, the confidence level of the nuclear grade cable thermal life predicted with TTS is higher than that predicted with the standard method. Conclusion: The results show that the TTS methodology is applicable to thermal life prediction of nuclear grade cable. The interactive interface under the Matlab GUI achieves the anticipated functionality. (authors)
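
    The shifting step itself is compact. Below is a sketch under an Arrhenius model, with the activation energy Ea taken as given (in the tool described above it would come from the nonlinear-programming optimization), and all names ours:

        import numpy as np

        R_GAS = 8.314  # J/(mol K)

        def equivalent_time_at_ref(t, T, T_ref, Ea):
            """Map an aging time t at temperature T [K] onto the equivalent time at T_ref.
            Aging runs faster at higher temperature, so data from T > T_ref stretch
            to longer times on the reference axis."""
            a_T = np.exp(Ea / R_GAS * (1.0 / T_ref - 1.0 / T))
            return t * a_T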

  7. Thermalization as an Invisibility Cloak for Fragile Quantum Superpositions

    OpenAIRE

    Hahn, Walter; Fine, Boris V.

    2017-01-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time reversal manipulation known as Loschmidt echo. The thermalization dynamics makes t...

  8. Applicability of the Fourier convolution theorem to the analysis of late-type stellar spectra

    International Nuclear Information System (INIS)

    Bruning, D.H.

    1981-01-01

    Solar flux and intensity measurements were obtained at Sacramento Peak Observatory to test the validity of the Fourier convolution method as a means of analyzing the spectral line shapes of late-type stars. Analysis of six iron lines near 6200 Å shows that, in general, the convolution method is not a suitable approximation for the calculation of the flux profile. The convolution method does reasonably reproduce the line shape for some lines which appear not to vary across the disk of the sun, but does not properly calculate the central line depth of these lines. Even if a central depth correction could be found, it is difficult to predict, especially for stars other than the sun, which lines have nearly constant shapes and could be used with the convolution method. Therefore, explicit disk integrations are promoted as the only reliable method of spectral line analysis for late-type stars. Several methods of performing the disk integration are investigated. Although the Abt (1957) prescription appears suitable for the limited case studied, methods using annuli of equal area, equal flux, or equal width (Soderblom, 1980) are considered better models. The model that is the easiest to use and most efficient computationally is the equal area model. Model atmosphere calculations yield values for the microturbulence and macroturbulence similar to those derived by observers. Since the depth dependence of the microturbulence is ignored in the calculations, the intensity profiles at disk center and the limb do not match the observed intensity profiles with only one set of velocity parameters. Use of these incorrectly calculated intensity profiles in the integration procedure to obtain the flux profile leads to incorrect estimates of the solar macroturbulence.
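
    The favoured equal-area disk integration is straightforward to sketch: the flux profile is accumulated over annuli of equal projected area, each contributing a limb-angle-dependent intensity profile Doppler-shifted by the local rotation velocity. The intensity_profile function is assumed given (e.g. from a model atmosphere), and the sampling counts are arbitrary:

        import numpy as np

        C_KMS = 2.998e5  # speed of light, km/s

        def disk_integrated_flux(wl, intensity_profile, v_sini, n_annuli=20, n_phi=16):
            """Flux profile from equal-area annuli with rotational Doppler shifts."""
            flux = np.zeros_like(wl)
            for i in range(n_annuli):
                r = np.sqrt((i + 0.5) / n_annuli)   # equal projected area per annulus
                mu = np.sqrt(1.0 - r * r)           # limb angle cosine
                prof = intensity_profile(wl, mu)
                for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
                    v = v_sini * r * np.sin(phi)    # line-of-sight velocity, km/s
                    flux += np.interp(wl, wl * (1.0 + v / C_KMS), prof)
            return flux / (n_annuli * n_phi)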

  9. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  10. Toward quantum superposition of living organisms

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol; Cirac, J Ignacio; Juan, Mathieu L; Quidant, Romain

    2010-01-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  11. Toward quantum superposition of living organisms

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol; Cirac, J Ignacio [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Strasse 1, D-85748, Garching (Germany); Juan, Mathieu L; Quidant, Romain [ICFO-Institut de Ciencies Fotoniques, Mediterranean Technology Park, Castelldefels, Barcelona 08860 (Spain)], E-mail: oriol.romero-isart@mpq.mpg.de

    2010-03-15

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  12. Dose discrepancies in the buildup region and their impact on dose calculations for IMRT fields

    International Nuclear Information System (INIS)

    Hsu, Shu-Hui; Moran, Jean M.; Chen Yu; Kulasekere, Ravi; Roberson, Peter L.

    2010-01-01

    Purpose: Dose accuracy in the buildup region for radiotherapy treatment planning suffers from challenges in both measurement and calculation. This study investigates the dosimetry in the buildup region at normal and oblique incidences for open and IMRT fields and assesses the quality of the treatment planning calculations. Methods: This study was divided into three parts. First, percent depth doses and profiles (for 5×5, 10×10, 20×20, and 30×30 cm² field sizes at 0°, 45°, and 70° incidences) were measured in the buildup region in Solid Water using an Attix parallel plate chamber and Kodak XV film, respectively. Second, the parameters in the empirical contamination (EC) term of the convolution/superposition (CVSP) calculation algorithm were fitted based on open field measurements. Finally, seven segmental head-and-neck IMRT fields were measured on a flat phantom geometry and compared to calculations using γ and dose-gradient compensation (C) indices to evaluate the impact of residual discrepancies and to assess the adequacy of the contamination term for IMRT fields. Results: Local deviations between measurements and calculations for open fields were within 1% and 4% in the buildup region for normal and oblique incidences, respectively. The C index with 5%/1 mm criteria for IMRT fields ranged from 89% to 99% and from 96% to 98% at 2 mm and 10 cm depths, respectively. The quality of agreement in the buildup region for open and IMRT fields is comparable to that in nonbuildup regions. Conclusions: The added EC term in CVSP was determined to be adequate for both open and IMRT fields. Due to the dependence of calculation accuracy on (1) EC modeling, (2) internal convolution and density grid sizes, (3) implementation details in the algorithm, and (4) the accuracy of measurements used for treatment planning system commissioning, the authors recommend an evaluation of the accuracy of near-surface dose calculations as a part of treatment planning commissioning.
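
    For reference, a bare-bones one-dimensional global γ evaluation of the kind used for such profile comparisons is shown below (the dose-gradient compensation index of the paper is not reproduced here):

        import numpy as np

        def gamma_1d(x, d_ref, d_eval, dta_mm=1.0, dd_frac=0.05):
            """Global gamma: dose criterion as a fraction of the reference maximum."""
            dd = dd_frac * d_ref.max()
            g = np.empty_like(d_ref, dtype=float)
            for i, (xi, di) in enumerate(zip(x, d_ref)):
                g[i] = np.sqrt(((x - xi) / dta_mm) ** 2 +
                               ((d_eval - di) / dd) ** 2).min()
            return g   # pass rate: np.mean(g <= 1.0)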

  13. Fast, large-scale hologram calculation in wavelet domain

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  14. Improving deep convolutional neural networks with mixed maxout units.

    Directory of Open Access Journals (Sweden)

    Hui-Zhen Zhao

    Full Text Available Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mapping subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
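
    Read literally, the description suggests a unit of roughly the following form; this is our reconstruction from the abstract alone, with details such as the mixing probability and train/test behaviour guessed rather than taken from the paper:

        import numpy as np

        def mixout(z, p_max=0.5):
            """z: (k, ...) array of k candidate feature maps for the same input.
            Per unit, take either the max or the softmax-weighted expectation,
            chosen by a Bernoulli draw."""
            w = np.exp(z - z.max(axis=0))               # exponential probabilities (softmax)
            expected = (w * z).sum(axis=0) / w.sum(axis=0)
            maximal = z.max(axis=0)
            keep = np.random.binomial(1, p_max, size=maximal.shape)
            return keep * maximal + (1 - keep) * expected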

  15. Starting SCF Calculations by Superposition of Atomic Densities

    NARCIS (Netherlands)

    van Lenthe, J.H.; Zwaans, R.; van Dam, H.J.J.; Guest, M.F.

    2006-01-01

    We describe the procedure to start an SCF calculation of the general type from a sum of atomic electron densities, as implemented in GAMESS-UK. Although the procedure is well-known for closed-shell calculations and was already suggested when the Direct SCF procedure was proposed, the general

  16. Convolution copula econometrics

    CERN Document Server

    Cherubini, Umberto; Mulinacci, Sabrina

    2016-01-01

    This book presents a novel approach to time series econometrics, which studies the behavior of nonlinear stochastic processes. This approach allows for an arbitrary dependence structure in the increments and provides a generalization with respect to the standard linear independent increments assumption of classical time series models. The book offers a solution to the problem of a general semiparametric approach, which is given by a concept called C-convolution (convolution of dependent variables), and the corresponding theory of convolution-based copulas. Intended for econometrics and statistics scholars with a special interest in time series analysis and copula functions (or other nonparametric approaches), the book is also useful for doctoral students with a basic knowledge of copula functions wanting to learn about the latest research developments in the field.

  17. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  18. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in the Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to the primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)

  19. Thermalization as an invisibility cloak for fragile quantum superpositions

    Science.gov (United States)

    Hahn, Walter; Fine, Boris V.

    2017-07-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spin-1/2 particles.

  20. Intra-cavity generation of superpositions of Laguerre-Gaussian beams

    CSIR Research Space (South Africa)

    Naidoo, Darryl

    2012-01-01

    In this paper we demonstrate experimentally the intra-cavity generation of a coherent superposition of Laguerre–Gaussian modes of zero radial order but opposite azimuthal order. The superposition is created with a simple intra-cavity stop...

  1. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are two commonly used alternatives. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of six patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) across all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
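
    For reference, the MAPE used here reduces to a simple computation; the sketch below (with hypothetical dose values, since the study's raw data are not given) shows it:

        import numpy as np

        # Sketch of the MAPE metric as defined in the abstract (Monte Carlo as the
        # gold standard). The dose values below are hypothetical, not study data.
        def mape(algorithm_dose, mc_dose):
            deviations = 100.0 * (algorithm_dose - mc_dose) / mc_dose
            return np.mean(np.abs(deviations))

        mc  = np.array([2.00, 1.95, 2.10, 1.88])     # reference-point doses [Gy]
        aaa = np.array([2.07, 1.88, 2.19, 1.80])
        mgs = np.array([2.04, 1.92, 2.14, 1.85])
        print(f"AAA MAPE: {mape(aaa, mc):.1f}%, MGS MAPE: {mape(mgs, mc):.1f}%")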

  2. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the...

  3. QCDNUM: Fast QCD evolution and convolution

    Science.gov (United States)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Unpolarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in unpolarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: typically 3 Mbytes. Classification: 11.5. Nature of problem: evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD; computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline...
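
    The convolution such an engine evaluates has the standard x-space form $(C \otimes f)(x)=\int_x^1 \frac{dz}{z}\,C(z)\,f(x/z)$; a direct numerical sketch (with made-up toy functions, not QCDNUM's spline machinery) is:

        import numpy as np

        # Toy x-space convolution (C ⊗ f)(x) = ∫_x^1 dz/z C(z) f(x/z), evaluated by
        # simple quadrature. The density and coefficient function are made-up stand-ins.
        def toy_pdf(x):
            return x ** -0.5 * (1.0 - x) ** 3        # hypothetical parton density

        def toy_coefficient(z):
            return 1.0 + (1.0 - z)                   # hypothetical coefficient function

        def x_space_convolution(x, n=2000):
            z = np.linspace(x, 1.0, n)
            integrand = toy_coefficient(z) * toy_pdf(x / z) / z
            return np.trapz(integrand, z)

        print(x_space_convolution(0.1))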

  4. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  5. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    Institute of Scientific and Technical Information of China (English)

    ZHENG ShiBiao

    2001-01-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. It provides, for the first time, a possibility of teleporting macroscopic superposition states of many atoms.

  6. Testing the quantum superposition principle: matter waves and beyond

    Science.gov (United States)

    Ulbricht, Hendrik

    2015-05-01

    New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures growing larger the more macroscopic the system. Testing the superposition principle therefore also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices, and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, working towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  7. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-01-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane-parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10×10, 5×5, 2×2, and 1×1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195 and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system, and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values, with a 2% average inside all media, were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2×2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water equivalent part of the phantom every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2×2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal differences (0

  8. Quantum State Engineering Via Coherent-State Superpositions

    Science.gov (United States)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrodinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate the given quantum states to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed by discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.

  9. Experimental superposition of orders of quantum gates

    Science.gov (United States)

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to 'superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107

  10. Calculation methods for dissolution rate of multicomponent alloys during electrochemical machining

    International Nuclear Information System (INIS)

    Dikusar, A.I.; Petrenko, V.I.; Dikusar, G.K.; Ehngel'gardt, G.R.; Michukova, N.Yu.

    1981-01-01

    The possibility of theoretical calculation of the metal dissolution rate during electrochemical machining is considered. Two calculation techniques, 'charge superposition' and 'weight percents', are compared using two-component W-Re, Ni-W and Mo-Re alloys as examples. It is concluded that the 'charge superposition' technique is the only well-founded technique for calculating the specific dissolution rates of alloys.

  11. Dosimetric validation of the anisotropic analytical algorithm for photon dose calculation: fundamental characterization in water

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Vanetti, Eugenio; Clivio, Alessandro; Cozzi, Luca

    2006-01-01

    In July 2005 a new algorithm was released by Varian Medical Systems for the Eclipse planning system and installed in our institute. It is the anisotropic analytical algorithm (AAA) for photon dose calculations, a convolution/superposition model implemented for the first time in a Varian planning system. It was therefore necessary to perform validation studies at different levels with a wide investigation approach. To validate the basic performance of the AAA, a detailed analysis of data computed by the AAA configuration algorithm was carried out and data were compared against measurements. To better appraise the performance of the AAA and the capability of its configuration to tailor machine-specific characteristics, data obtained from the pencil beam convolution (PBC) algorithm implemented in Eclipse were also added to the comparison. Since the purpose of the paper is to address the basic performance of the AAA and of its configuration procedures, only data relative to measurements in water will be reported. Validation was carried out for three beams: 6 MV and 15 MV from a Clinac 2100C/D and 6 MV from a Clinac 6EX. Generally, AAA calculations reproduced measured data very well, and small deviations were observed, on average, for all the quantities investigated for open and wedged fields. In particular, percentage depth-dose curves showed on average differences between calculation and measurement smaller than 1% or 1 mm, and computed profiles in the flattened region matched measurements with deviations smaller than 1% for all beams, field sizes, depths and wedges. Percentage differences in output factors were as small as 1% on average (with a range smaller than ±2%) for all conditions. Additional tests were carried out for enhanced dynamic wedges, with results comparable to the previous ones. The basic dosimetric validation of the AAA was therefore considered satisfactory.

  12. Robust mesoscopic superposition of strongly correlated ultracold atoms

    International Nuclear Information System (INIS)

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-01-01

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  13. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16 based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  14. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  15. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method

    International Nuclear Information System (INIS)

    Li, Haisen S.; Chetty, Indrin J.; Solberg, Timothy D.

    2008-01-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of the population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) were used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (>30 fractions) regimen, the discrepancy in total dose due to the interplay effect was negligible.
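
    A minimal 1-D sketch of the distinction (our own toy dose profiles and Gaussian motion PDFs, not the Calypso data):

        import numpy as np

        # Minimal 1-D illustration of segment-based vs average-based convolution.
        # Dose profiles and Gaussian motion PDFs are assumed toy inputs.
        x = np.linspace(-30.0, 30.0, 601)            # position [mm]
        dx = x[1] - x[0]

        def motion_pdf(mean, sigma):
            p = np.exp(-0.5 * ((x - mean) / sigma) ** 2)
            return p / (p.sum() * dx)

        def static_dose(center, width):
            return (np.abs(x - center) < width / 2).astype(float)

        # Two hypothetical segments, each with its own motion PDF during delivery.
        segments = [(static_dose(0.0, 20.0), motion_pdf(+2.0, 1.5)),
                    (static_dose(5.0, 10.0), motion_pdf(-3.0, 2.5))]

        # Segment-based: convolve each segment dose with its own PDF.
        segment_based = sum(np.convolve(d, p, mode="same") * dx for d, p in segments)

        # Average-based: convolve the summed static dose with one fraction-wide PDF.
        fraction_pdf = motion_pdf(-0.5, 2.0)
        average_based = np.convolve(sum(d for d, _ in segments), fraction_pdf,
                                    mode="same") * dx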

  16. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    We present a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  17. Superposition Attacks on Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus

    2011-01-01

    Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as compute a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security of several primitives in this model. We show that a secret-sharing scheme that is secure with threshold $t$ in the standard model is secure against superposition attacks if and only if the threshold is lowered to $t/2$. We use this result to give zero-knowledge proofs for all of NP in the common reference...

  18. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    Science.gov (United States)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  19. Method for assessing the probability of accumulated doses from an intermittent source using the convolution technique

    International Nuclear Information System (INIS)

    Coleman, J.H.

    1980-10-01

    A technique is discussed for computing the probability distribution of the accumulated dose received by an arbitrary receptor resulting from several single releases from an intermittent source. The probability density of the accumulated dose is the convolution of the probability densities of the doses from the individual releases. Emissions are not assumed to be constant over the brief release period. The fast Fourier transform is used in the calculation of the convolution.
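
    A short sketch of the idea (hypothetical single-release dose densities; the FFT realizes the repeated convolution):

        import numpy as np

        # Toy example: pdf of the accumulated dose from three independent releases,
        # obtained by convolving the single-release pdfs via the FFT. The Gaussian
        # release pdfs are assumed placeholders.
        dose = np.linspace(0.0, 4.0, 512)            # single-release dose grid
        dd = dose[1] - dose[0]

        def release_pdf(mean, sigma):
            p = np.exp(-0.5 * ((dose - mean) / sigma) ** 2)
            return p / (p.sum() * dd)                # normalize so sum(p)*dd = 1

        pdfs = [release_pdf(1.0, 0.3), release_pdf(2.0, 0.5), release_pdf(0.5, 0.2)]

        n_out = len(dose) * len(pdfs) - (len(pdfs) - 1)   # support of the full convolution
        spectrum = np.ones(n_out, dtype=complex)
        for p in pdfs:
            spectrum *= np.fft.fft(p, n_out)
        accumulated = np.real(np.fft.ifft(spectrum)) * dd ** (len(pdfs) - 1)
        # 'accumulated' is sampled at spacing dd on a grid extending to 3*dose[-1]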

  20. Superposition in quantum and relativity physics: an interaction interpretation of special relativity theory. III

    International Nuclear Information System (INIS)

    Schlegel, R.

    1975-01-01

    With the interaction interpretation, the Lorentz transformation of a system arises with selection from a superposition of its states in an observation-interaction. Integration of momentum states of a mass over all possible velocities gives the rest-mass energy. Static electrical and magnetic fields are not found to form such a superposition and are to be taken as irreducible elements. The external superposition consists of those states that are reached only by change of state of motion, whereas the internal superposition contains all the states available to an observer in a single inertial coordinate system. The conjecture is advanced that states of superposition may only be those related by space-time transformations (Lorentz transformations plus space inversion and charge conjugation). The continuum of external and internal superpositions is examined for various masses, and an argument for the unity of the superpositions is presented

  1. On the L-characteristic of nonlinear superposition operators in lp-spaces

    International Nuclear Information System (INIS)

    Dedagic, F.

    1995-04-01

    In this paper we describe the L-characteristic of the nonlinear superposition operator Fx(s) = f(s, x(s)) between two Banach spaces of functions x from N to R. It was shown that the L-characteristic of the nonlinear superposition operator which acts between two Lebesgue spaces has the so-called Σ-convexity property. In this paper we show that the L-characteristic of the operator F (between two Banach spaces) has the convexity property. This means that the classical interpolation theorem of Riesz-Thorin for a linear operator holds for the nonlinear superposition operator acting between two Banach spaces of sequences. Moreover, we consider the growth function of the superposition operator in the mentioned spaces and show that it has the logarithmic convexity property. (author). 7 refs

  2. Entanglement and quantum superposition induced by a single photon

    Science.gov (United States)

    Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying

    2018-03-01

    We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model, introducing an optomechanical coupling into the Rabi model. Originally, it comes from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A² term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulate entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broaden the regime of cavity QED.

  3. Convolution of Distribution-Valued Functions. Applications.

    OpenAIRE

    BARGETZ, CHRISTIAN

    2011-01-01

    In this article we examine products and convolutions of vector-valued functions. For nuclear normal spaces of distributions, Proposition 25 in [31, p. 120] yields a vector-valued product or convolution if there is a continuous product or convolution mapping in the range of the vector-valued functions. For specific spaces, we generalize this result to hypocontinuous bilinear maps at the expense of generality with respect to the function space. We consider holomorphic, meromorphic and differentia...

  4. Generation of optical coherent state superpositions for quantum information processing

    DEFF Research Database (Denmark)

    Tipsmark, Anders

    2012-01-01

    In this project, entitled "Generation of optical coherent state superpositions for quantum information processing", the goal has been to generate optical cat states. These are quantum mechanical superposition states of two coherent states with large amplitude. Such a state is...

  5. Macroscopic superposition states and decoherence by quantum telegraph noise

    Energy Technology Data Exchange (ETDEWEB)

    Abel, Benjamin Simon

    2008-12-19

    In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the "size" of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise, which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  6. Macroscopic superposition states and decoherence by quantum telegraph noise

    International Nuclear Information System (INIS)

    Abel, Benjamin Simon

    2008-01-01

    In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the "size" of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise, which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  7. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    Science.gov (United States)

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs of 65 children enrolled in a prospective randomized clinical trial were taken at baseline, after 1 year, and after 2 years. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  8. Single-Atom Gating of Quantum State Superpositions

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  9. Digital image correlation based on a fast convolution strategy

    Science.gov (United States)

    Yuan, Yuan; Zhan, Qin; Xiong, Chunyang; Huang, Jianyong

    2017-10-01

    In recent years, the efficiency of digital image correlation (DIC) methods has attracted increasing attention because of its increasing importance for many engineering applications. Based on the classical affine optical flow (AOF) algorithm and the well-established inverse compositional Gauss-Newton algorithm, which is essentially a natural extension of the AOF algorithm under a nonlinear iterative framework, this paper develops a set of fast convolution-based DIC algorithms for high-efficiency subpixel image registration. Using a well-developed fast convolution technique, the set of algorithms establishes a series of global data tables (GDTs) over the digital images, which allows the reduction of the computational complexity of DIC significantly. Using the pre-calculated GDTs, the subpixel registration calculations can be implemented efficiently in a look-up-table fashion. Both numerical simulation and experimental verification indicate that the set of algorithms significantly enhances the computational efficiency of DIC, especially in the case of a dense data sampling for the digital images. Because the GDTs need to be computed only once, the algorithms are also suitable for efficiently coping with image sequences that record the time-varying dynamics of specimen deformations.
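
    The flavor of the speed-up can be seen in a toy frequency-domain matcher (a generic FFT cross-correlation, not the authors' global-data-table construction):

        import numpy as np

        # Generic FFT-based cross-correlation: all template/subset products are
        # obtained in one pass instead of per-candidate spatial loops. This is a
        # toy stand-in, not the authors' global-data-table construction.
        def cross_correlate(image, template):
            f_image = np.fft.rfft2(image)
            f_template = np.fft.rfft2(template, s=image.shape)
            return np.fft.irfft2(f_image * np.conj(f_template), s=image.shape)

        rng = np.random.default_rng(0)
        reference = rng.random((256, 256))
        template = reference[40:72, 100:132]         # 32x32 subset to locate
        score = cross_correlate(reference, template)
        print(np.unravel_index(np.argmax(score), score.shape))  # near (40, 100)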

  10. Experimental Demonstration of Capacity-Achieving Phase-Shifted Superposition Modulation

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Caballero Jambrina, Antonio

    2013-01-01

    We report on the first experimental demonstration of phase-shifted superposition modulation (PSM) for optical links. Successful demodulation and decoding is obtained after 240 km transmission for 16-, 32- and 64-PSM.

  11. Edgeworth Expansion Based Model for the Convolutional Noise pdf

    Directory of Open Access Journals (Sweden)

    Yonatan Rivlin

    2014-01-01

    Recently, the Edgeworth expansion up to order 4 was used to represent the convolutional noise probability density function (pdf) in the conditional expectation calculations where the source pdf was modeled with the maximum entropy density approximation technique. However, the applied Lagrange multipliers were not the appropriate ones for the chosen model for the convolutional noise pdf. In this paper we use the Edgeworth expansion up to order 4 and up to order 6 to model the convolutional noise pdf. We derive the appropriate Lagrange multipliers, thus obtaining new closed-form approximated expressions for the conditional expectation and mean square error (MSE) as a byproduct. Simulation results indicate hardly any equalization improvement with Edgeworth expansion up to order 4 when using optimal Lagrange multipliers over a nonoptimal set. In addition, there is no justification for using the Edgeworth expansion up to order 6 over the Edgeworth expansion up to order 4 for the 16QAM and easy channel case. However, Edgeworth expansion up to order 6 leads to improved equalization performance compared to the Edgeworth expansion up to order 4 for the 16QAM and hard channel case as well as for the case where the 64QAM is sent via an easy channel.
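
    For orientation, the fourth-order expansion referred to here is conventionally written as follows (our rendering of the standard formula; the paper's exact normalization may differ):

        $$p(x)\approx\frac{1}{\sigma}\,\phi\!\left(\frac{x}{\sigma}\right)\left[1+\frac{\kappa_3}{6\sigma^3}\,He_3\!\left(\frac{x}{\sigma}\right)+\frac{\kappa_4}{24\sigma^4}\,He_4\!\left(\frac{x}{\sigma}\right)\right]$$

    where $\phi$ is the standard normal density, $He_k$ are the probabilists' Hermite polynomials, and $\kappa_3$, $\kappa_4$ are the third and fourth cumulants of the convolutional noise; higher-order variants add $He_5$ and $He_6$ terms with the corresponding cumulants.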

  12. Feedback equivalence of convolutional codes over finite rings

    Directory of Open Access Journals (Sweden)

    DeCastro-García Noemí

    2017-12-01

    The approach to convolutional codes from the linear systems point of view provides us with effective tools in order to construct convolutional codes with adequate properties that let us use them in many applications. In this work, we have generalized feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.

  13. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
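
    The core trick is that the convolutional quadratic subproblem diagonalizes in the Fourier domain; a single-filter sketch (ours, with an assumed filter and penalty weight) is:

        import numpy as np

        # Single-filter sketch of the frequency-domain linear solve inside FFT-based
        # convolutional sparse coding: the ADMM subproblem
        #     min_z ||x - d*z||^2 + rho*||z - v||^2
        # diagonalizes per frequency. Filter, signal and rho are assumed toy values.
        rng = np.random.default_rng(1)
        n = 256
        d = np.zeros(n)
        d[:8] = rng.standard_normal(8)               # short filter, zero-padded
        z_true = np.where(rng.random(n) < 0.05, rng.standard_normal(n), 0.0)
        x = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(z_true)))  # x = d * z

        rho = 0.1
        v = np.zeros(n)                              # ADMM auxiliary variable (one step)
        D, X, V = np.fft.fft(d), np.fft.fft(x), np.fft.fft(v)
        Z = (np.conj(D) * X + rho * V) / (np.abs(D) ** 2 + rho)
        z = np.real(np.fft.ifft(Z))                  # closed-form update in O(N log N)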

  14. Multithreaded implicitly dealiased convolutions

    Science.gov (United States)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
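
    For contrast, explicit zero-padding (shown in a small sketch of ours below) is the conventional route whose result implicit dealiasing reproduces without allocating the padded buffers:

        import numpy as np

        # Conventional explicit zero-padding for a dealiased (linear) convolution.
        # Implicit dealiasing produces the same result without allocating the
        # padded buffers; this sketch shows only the explicit baseline.
        def circular_convolution(a, b):
            return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

        def dealiased_convolution(a, b):
            n = len(a) + len(b) - 1                  # full linear-convolution length
            return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))

        a, b = np.arange(8.0), np.ones(8)
        print(np.allclose(dealiased_convolution(a, b), np.convolve(a, b)))  # True
        print(np.allclose(circular_convolution(a, b), np.convolve(a, b)))   # False: aliased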

  15. SU-F-T-620: Development of a Convolution/Superposition Dose Engine for CyberKnife System

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Liu, B; Liang, B; Xu, X; Guo, B; Wei, R; Zhou, F [Beihang University, Beijing, Beijing (China); Song, T [Southern Medical University, Guangzhou, Guangdong (China); Xu, S [PLA General Hospital, Beijing, Beijing (China); Piao, J [302 Military Hospital, Beijing, Beijing (China)

    2016-06-15

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields made possible by the multi-leaf collimator system recently introduced with the CyberKnife M6. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a GPU-based fast C/S dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's eye view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energy photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The difference between measured and calculated TMR is less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; the results showed better dose calculation accuracy than the Ray-tracing algorithm compared with the Monte Carlo method for heterogeneous cases. As for calculation time, one beam takes about several seconds, depending on collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system and was shown to be efficient and accurate.
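
    A deliberately simplified 1-D sketch of the two stages described above (assumed attenuation coefficient and kernel shape; the real engine works in 3-D along 192 cone directions):

        import numpy as np

        # Highly simplified 1-D sketch: TERMA from exponential attenuation of the
        # primary fluence, followed by superposition of a pre-computed energy
        # deposition kernel (a stand-in for the collapsed-cone collection step).
        depth = np.linspace(0.0, 30.0, 301)          # depth [cm]
        dz = depth[1] - depth[0]
        mu = 0.05                                    # effective attenuation [1/cm] (assumed)

        terma = mu * np.exp(-mu * depth)             # total energy released per unit mass

        # Toy point-spread kernel: forward-peaked energy deposition (assumed shape).
        offsets = np.arange(-50, 51) * dz
        kernel = np.where(offsets >= 0,
                          np.exp(-offsets / 1.5),    # downstream lobe
                          0.2 * np.exp(offsets / 0.5))  # small backscatter lobe
        kernel /= kernel.sum()

        dose = np.convolve(terma, kernel, mode="same")   # kernel superposed with TERMA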

  16. Discrete convolution-operators and radioactive disintegration. [Numerical solution

    Energy Technology Data Exchange (ETDEWEB)

    Kalla, S.L.; Valentinuzzi, M.E. [Universidad Nacional de Tucuman (Argentina). Facultad de Ciencias Exactas y Tecnologia]

    1975-08-01

    The basic concepts of discrete convolution and discrete convolution-operators are briefly described. Then, using the discrete convolution-operators, the differential equations associated with the process of radioactive disintegration are numerically solved. The importance of the method for the numerical solution of differential and integral equations is emphasized.
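
    A short sketch of the idea for a two-member chain (decay constants assumed), checked against the closed-form Bateman solution:

        import numpy as np

        # The daughter population in a decay chain is a convolution of the parent
        # decay curve with the daughter's own exponential response, which a
        # discrete convolution operator evaluates directly.
        t = np.linspace(0.0, 50.0, 2001)
        dt = t[1] - t[0]
        lam1, lam2 = 0.30, 0.05                      # decay constants [1/h] (assumed)

        parent = np.exp(-lam1 * t)                   # N1(t)/N1(0)
        response = np.exp(-lam2 * t)                 # daughter impulse response

        # N2(t) = lam1 * ∫_0^t exp(-lam2*(t-s)) N1(s) ds, discretized as a convolution
        daughter = lam1 * np.convolve(parent, response)[: len(t)] * dt

        # Closed-form Bateman solution for comparison
        bateman = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
        print(np.max(np.abs(daughter - bateman)))    # small discretization error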

  17. A convolutional approach to reflection symmetry

    DEFF Research Database (Denmark)

    Cicconet, Marcelo; Birodkar, Vighnesh; Lund, Mads

    2017-01-01

    We present a convolutional approach to reflection symmetry detection in 2D. Our model, built on the products of complex-valued wavelet convolutions, simplifies previous edge-based pairwise methods. Being parameter-centered, as opposed to feature-centered, it has certain computational advantages w...

  18. Spherical convolutions and their application in molecular modelling

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Frellsen, Jes

    2017-01-01

    Convolutional neural networks are increasingly used outside the domain of image analysis, in particular in various areas of the natural sciences concerned with spatial data. Such networks often work out-of-the-box, and in some cases entire model architectures from image analysis can be carried over to other problem domains almost unaltered. Unfortunately, this convenience does not trivially extend to data in non-Euclidean spaces, such as spherical data. In this paper, we introduce two strategies for conducting convolutions on the sphere, using either a spherical-polar grid or a grid based... of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions...

  19. Collapsing a perfect superposition to a chosen quantum state without measurement.

    Directory of Open Access Journals (Sweden)

    Ahmed Younes

    Given a perfect superposition of all 2^n basis states of an n-qubit quantum system, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state without applying any measurements. The basic idea is to use a phase destruction mechanism. Two operators are used: the first operator applies a phase shift and a temporary entanglement to mark the chosen state in the superposition, and the second operator applies selective phase shifts on the states in the superposition according to their Hamming distance from the chosen state. The generated state can be used as an excellent input state for testing quantum memories and linear optics quantum computers. We make no assumptions about the used operators and applied quantum gates, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  20. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    Science.gov (United States)

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), confirmed by histopathological examination, were analyzed in this study. To perform 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the dermatologist's and a non-expert's evaluation. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.8 and 0.84 and Youden's index values of 0.6795 and 0.6073, similar to the scores of the expert. Although further data analysis is necessary to improve accuracy, convolutional neural networks would be helpful for detecting acral melanoma from dermoscopy images of the hands and feet.
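
    The reported summary statistics follow directly from a confusion matrix; a small sketch with hypothetical counts:

        # Accuracy and Youden's index from a confusion matrix. The counts below
        # are hypothetical, not the study's data.
        def accuracy(tp, tn, fp, fn):
            return (tp + tn) / (tp + tn + fp + fn)

        def youdens_index(tp, tn, fp, fn):
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return sensitivity + specificity - 1

        tp, tn, fp, fn = 150, 152, 35, 25            # hypothetical counts
        print(accuracy(tp, tn, fp, fn), youdens_index(tp, tn, fp, fn))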

  1. Dosimetric evaluation of multi-pattern spatially fractionated radiation therapy using a multi-leaf collimator and collapsed cone convolution superposition dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Stathakis, Sotirios [Department of Radiation Oncology, University of Texas Health Science Center San Antonio, 7979 Wurzbach Rd, San Antonio, TX 78229 (United States)], E-mail: stathakis@uthscsa.edu; Esquivel, Carlos; Gutierrez, Alonso N.; Shi, ChengYu; Papanikolaou, Niko [Department of Radiation Oncology, University of Texas Health Science Center San Antonio, 7979 Wurzbach Rd, San Antonio, TX 78229 (United States)

    2009-10-15

    Purpose: In this paper, we present an alternative to the originally proposed technique for the delivery of spatially fractionated radiation therapy (GRID) using multi-leaf collimator (MLC) shaped fields. We employ the MLC to deliver various pattern GRID treatments to large solid tumors and dosimetrically characterize the GRID fields. Methods and materials: The GRID fields were created with different open-to-blocked area ratios and with variable separation between the openings using an MLC. GRID designs were introduced into the Pinnacle³ treatment planning system, and the dose was calculated in a water phantom. Ionization chamber and film measurements using both Kodak EDR2 and Gafchromic EBT film were performed in a SolidWater phantom to determine the relative output of each GRID design as well as its spatial dosimetric characteristics. Results: Agreement within 5.0% was observed between the Pinnacle³ predicted dose distributions and the measurements for the majority of experiments performed. A higher magnitude of discrepancy (15%) was observed using a high photon beam energy (18 MV) and a small GRID opening. Skin dose at the GRID openings was higher than for the corresponding open field by a factor as high as three for both photon energies and was found to be independent of the open-to-blocked area ratio. Conclusion: In summary, we reaffirm that the MLC can be used to deliver spatially fractionated GRID therapy and show that various GRID patterns may be generated. The Pinnacle³ TPS can accurately calculate the dose of the different GRID patterns in our study to within 5% for the majority of the cases based on film and ion chamber measurements. Disadvantages of MLC-based GRID therapy are longer treatment times and higher surface doses.

  2. Experimental verification of lung dose with radiochromic film: comparison with Monte Carlo simulations and commercially available treatment planning systems

    International Nuclear Information System (INIS)

    Paelinck, L; Reynaert, N; Thierens, H; Neve, W De; Wagter, C de

    2005-01-01

    The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 × 12 × 12 cm³ containing a central cavity of 6 × 6 × 6 cm³ filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 × 10 cm² field and a larger 10 × 10 cm² field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between

  3. Experimental verification of lung dose with radiochromic film: comparison with Monte Carlo simulations and commercially available treatment planning systems

    Science.gov (United States)

    Paelinck, L.; Reynaert, N.; Thierens, H.; DeNeve, W.; DeWagter, C.

    2005-05-01

    The purpose of this study was to assess the absorbed dose in and around lung tissue by performing radiochromic film measurements, Monte Carlo simulations and calculations with superposition convolution algorithms. We considered a layered polystyrene phantom of 12 × 12 × 12 cm³ containing a central cavity of 6 × 6 × 6 cm³ filled with Gammex RMI lung-equivalent material. Two field configurations were investigated, a small 1 × 10 cm² field and a larger 10 × 10 cm² field. First, we performed Monte Carlo simulations to investigate the influence of radiochromic film itself on the measured dose distribution when the film intersects a lung-equivalent region and is oriented parallel to the central beam axis. To that end, the film and the lung-equivalent materials were modelled in detail, taking into account their specific composition. Next, measurements were performed with the film oriented both parallel and perpendicular to the central beam axis to verify the results of our Monte Carlo simulations. Finally, we digitized the phantom in two commercially available treatment planning systems, Helax-TMS version 6.1A and Pinnacle version 6.2b, and calculated the absorbed dose in the phantom with their incorporated superposition convolution algorithms to compare with the Monte Carlo simulations. Comparing Monte Carlo simulations with measurements reveals that radiochromic film is a reliable dosimeter in and around lung-equivalent regions when the film is positioned perpendicular to the central beam axis. Radiochromic film is also able to predict the absorbed dose accurately when the film is positioned parallel to the central beam axis through the lung-equivalent region. However, attention must be paid when the film is not positioned along the central beam axis, in which case the film gradually attenuates the beam and decreases the dose measured behind the cavity. This underdosage disappears by offsetting the film a few centimetres. We find deviations of about 3.6% between

  4. EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid

    Science.gov (United States)

    Kripal, Ram; Singh, Manju

    2015-12-01

An electron paramagnetic resonance (EPR) study of Mn2+ doped L+ glutamic acid single crystal is performed at room temperature. Four interstitial sites are observed and the spin Hamiltonian parameters are calculated with the help of a large number of resonant lines for various angular positions of the external magnetic field. An optical absorption study is also performed at room temperature. The energy values for different orbital levels are calculated, and the observed bands are assigned as transitions from the 6A1g(s) ground state to various excited states. With the help of these assigned bands, the Racah inter-electronic repulsion parameters B = 869 cm-1 and C = 2080 cm-1 and the cubic crystal field splitting parameter Dq = 730 cm-1 are calculated. The zero field splitting (ZFS) parameters D and E are calculated by the perturbation formulae, with the crystal field parameters obtained using the superposition model. The calculated values of the ZFS parameters are in good agreement with the experimental values obtained by EPR.

  5. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. Before the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two dose engines. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurements with an ion chamber and with film measurements for the phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in a heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.
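
    The Γ(1%, 1 mm) criterion can be made concrete with a minimal one-dimensional gamma-index sketch in Python, following Low et al.'s definition. This is purely illustrative: the grids, profiles and tolerances below are invented, and it is not the evaluation code used in the study.

        import numpy as np

        def gamma_1d(ref, ref_x, eval_dose, eval_x, dd=0.01, dta=1.0):
            """dd: dose tolerance as a fraction of max reference dose; dta: mm."""
            norm = dd * ref.max()
            gam = np.empty_like(ref)
            for i, (d_r, x_r) in enumerate(zip(ref, ref_x)):
                # Generalized distance in combined dose/space for every eval point
                g2 = ((eval_x - x_r) / dta) ** 2 + ((eval_dose - d_r) / norm) ** 2
                gam[i] = np.sqrt(g2.min())
            return gam  # gamma <= 1 means the point passes

        x = np.linspace(0, 100, 501)                      # mm
        d_ref = np.exp(-((x - 50) / 15) ** 2)             # toy reference profile
        d_eval = np.exp(-((x - 50.5) / 15) ** 2) * 1.005  # slightly shifted/scaled
        g = gamma_1d(d_ref, x, d_eval, x, dd=0.01, dta=1.0)
        print("pass rate:", np.mean(g <= 1.0))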

  6. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    Science.gov (United States)

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
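
    The atrous idea is easy to state in code: space the kernel taps 'rate' samples apart, enlarging the field of view with no extra parameters. Below is a one-dimensional numpy sketch of our own making (DeepLab applies the 2-D analogue inside a network).

        import numpy as np

        def atrous_conv1d(signal, kernel, rate=1):
            k = len(kernel)
            span = (k - 1) * rate + 1              # effective field of view
            out = np.zeros(len(signal) - span + 1)
            for i in range(len(out)):
                out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
            return out

        x = np.arange(10, dtype=float)
        w = np.array([1.0, 0.0, -1.0])
        print(atrous_conv1d(x, w, rate=1))  # ordinary correlation, span 3
        print(atrous_conv1d(x, w, rate=2))  # same 3 taps, span 5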

  7. JaSTA-2: Second version of the Java Superposition T-matrix Application

    Science.gov (United States)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2) to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version provides two options for the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis and can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  8. Enhanced online convolutional neural networks for object tracking

    Science.gov (United States)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
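
    A hedged sketch of the initialization idea: cluster randomly sampled image patches with k-means++ seeding and use the centroids as first-layer filters. The patch size, filter count and sklearn usage below are our assumptions, not the authors' exact pipeline.

        import numpy as np
        from sklearn.cluster import KMeans

        def init_filters_kmeanspp(image, patch=5, n_filters=16, n_samples=2000, seed=0):
            rng = np.random.default_rng(seed)
            h, w = image.shape
            ys = rng.integers(0, h - patch, n_samples)
            xs = rng.integers(0, w - patch, n_samples)
            patches = np.stack([image[y:y + patch, x:x + patch].ravel()
                                for y, x in zip(ys, xs)])
            patches -= patches.mean(axis=1, keepdims=True)   # zero-mean patches
            km = KMeans(n_clusters=n_filters, init="k-means++", n_init=1,
                        random_state=seed).fit(patches)
            return km.cluster_centers_.reshape(n_filters, patch, patch)

        filters = init_filters_kmeanspp(np.random.rand(64, 64))
        print(filters.shape)  # (16, 5, 5)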

  9. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for quark-gluon discrimination using calorimeter data, but unfortunately I didn't manage to get the calorimeter data and I just used the jet data from miniaodsim (ak4 chs). The jet data was not good enough for a Convolutional Neural Network, which is designed for 'image' recognition. This report is made of two main parts: part one is mainly about implementing a Convolutional Neural Network on non-physics data such as MNIST digits and the CIFAR-10 dataset, and part two is about the jet data.

  10. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    Science.gov (United States)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain spectral-spatial joint information. In the following feature extraction process, each patch is transformed into a number of different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is about the same as a general CNN and processes the 2D matrix constituted by all the 1D data. Thus DVCNN not only extracts more accurate and richer features than a CNN, but also fuses spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement achieved by DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to noise on water-absorption bands was demonstrated.

  11. The superposition of the states and the logic approach to quantum mechanics

    International Nuclear Information System (INIS)

    Zecca, A.

    1981-01-01

An axiomatic approach to quantum mechanics is proposed in terms of a 'logic' scheme satisfying a suitable set of axioms. In this context the notions of pure, maximal, and characteristic states as well as the superposition relation and the superposition principle for the states are studied. The role the superposition relation plays in the reversible and in the irreversible dynamics is investigated and its connection with the tensor product is studied. Throughout the paper, the W*-algebra model is used to exemplify results and properties of the general scheme. (author)

  12. Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states

    International Nuclear Information System (INIS)

    Parashar, Preeti; Rana, Swapan

    2011-01-01

We calculate the analytic expression for the geometric measure of entanglement for an arbitrary superposition of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states, and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for the quantum discord (via relative entropy) for the former class of states has been presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log₂ N.

  13. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transitions for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition-free runs occur.
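
    The effect is easy to demonstrate with a toy Python illustration (our own construction, not the paper's bound): flipping every other coded symbol breaks up long constant runs so the synchronizer sees transitions.

        def longest_transition_free_run(symbols):
            best = run = 1
            for a, b in zip(symbols, symbols[1:]):
                run = run + 1 if a == b else 1
                best = max(best, run)
            return best

        coded = [0] * 12 + [1, 0, 1]                 # a long all-zero stretch
        inverted = [s ^ (i & 1) for i, s in enumerate(coded)]  # flip odd positions
        print(longest_transition_free_run(coded))    # 12
        print(longest_transition_free_run(inverted)) # much shorter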

  14. Superposition of helical beams by using a Michelson interferometer.

    Science.gov (United States)

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

The orbital angular momentum (OAM) of a helical beam is of great interest for high-density optical communication due to its infinite number of eigen-states. In this paper, an experimental setup is realized for information encoding and decoding on the OAM eigen-states. A hologram designed by the iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and their OAM eigen-state detection are presented.

  15. FPGA-based digital convolution for wireless applications

    CERN Document Server

    Guan, Lei

    2017-01-01

    This book presents essential perspectives on digital convolutions in wireless communications systems and illustrates their corresponding efficient real-time field-programmable gate array (FPGA) implementations. Covering these digital convolutions from basic concept to vivid simulation/illustration, the book is also supplemented with MS PowerPoint presentations to aid in comprehension. FPGAs or generic all programmable devices will soon become widespread, serving as the “brains” of all types of real-time smart signal processing systems, like smart networks, smart homes and smart cities. The book examines digital convolution by bringing together the following main elements: the fundamental theory behind the mathematical formulae together with corresponding physical phenomena; virtualized algorithm simulation together with benchmark real-time FPGA implementations; and detailed, state-of-the-art case studies on wireless applications, including popular linear convolution in digital front ends (DFEs); nonlinear...

  16. Incomplete convolutions in production and inventory models

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    1997-01-01

    In this paper, we study incomplete convolutions of continuous distribution functions, as they appear in the analysis of (multi-stage) production and inventory systems. Three example systems are discussed where these incomplete convolutions naturally arise. We derive explicit, nonrecursive formulae

  17. Fourier transforms and convolutions for the experimentalist

    CERN Document Server

    Jennison, RC

    1961-01-01

Fourier Transforms and Convolutions for the Experimentalist provides the experimentalist with a guide to the principles and practical uses of the Fourier transformation. It aims to bridge the gap between the more abstract account of a purely mathematical approach and the rule-of-thumb calculation and intuition of the practical worker. The monograph springs from a lecture course which the author has given in recent years and for which he has drawn upon a number of sources, including a set of notes compiled by the late Dr. I. C. Browne from a series of lectures given by Mr. J. A. Ratcliffe of t

  18. The Urbanik generalized convolutions in the non-commutative ...

    Indian Academy of Sciences (India)

∫ x^{−s} ν(dx) < ∞. Now we apply this construction to the Kendall convolution case, starting with the weakly stable measure δ1. Example 1. Let △ be the Kendall convolution, i.e. the generalized convolution with the probability kernel δ1 △ δa = (1 − a)δ1 + aπ2 for a ∈ [0, 1], where π2 is the Pareto distribution with density π2(dx) =.

  19. Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection

    International Nuclear Information System (INIS)

    Suzuki, Shigenari; Takeoka, Masahiro; Sasaki, Masahide; Andersen, Ulrik L.; Kannari, Fumihiko

    2006-01-01

We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme consists of only a single beam splitter and a homodyne detector, and thus is experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.

  20. Decoherence of superposition states in trapped ions

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

This paper investigates the decoherence of superpositions of hyperfine states of 9Be+ ions due to spontaneous scattering of off-resonant light. It was found that, contrary to conventional wisdom, elastic Rayleigh scattering can have major...

  1. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2014-01-01

An O(N²) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N²) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  2. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.

  3. The general theory of convolutional codes

    Science.gov (United States)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  4. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    International Nuclear Information System (INIS)

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-01-01

Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning, new methods to refine treatment delivery have been introduced steadily. Imaging continues to be an integral part of the planning, and also the delivery, of modern radiotherapy. However, all the efforts of image-guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else we pride ourselves on having in the armamentarium can fall short unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy, since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and may one day be the algorithm of choice for all particle treatment planning in radiation therapy.

  5. One weird trick for parallelizing convolutional neural networks

    OpenAIRE

    Krizhevsky, Alex

    2014-01-01

    I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.

  6. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function, but it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using a CNN (convolutional neural network), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  7. Towards quantum superposition of a levitated nanodiamond with a NV center

    Science.gov (United States)

    Li, Tongcang

    2015-05-01

Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step toward this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. Generating spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and to generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.

  8. Deep multi-scale convolutional neural network for hyperspectral image classification

    Science.gov (United States)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer which contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly sleeps neurons and modestly improves the classification accuracy. In addition, recent deep learning techniques such as ReLU are also utilized. We conduct experiments on the University of Pavia and Salinas datasets, and obtain better classification accuracy compared with other methods.
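
    A minimal PyTorch sketch of such a multi-scale convolution layer with three kernel sizes follows. The channel counts, the kernel sizes (3, 5, 7) and the dropout placement are assumptions for illustration, not the authors' exact architecture.

        import torch
        import torch.nn as nn

        class MultiScaleConv(nn.Module):
            def __init__(self, in_ch=1, ch_per_branch=16):
                super().__init__()
                # Same-padding branches so outputs concatenate along channels
                self.branches = nn.ModuleList([
                    nn.Conv2d(in_ch, ch_per_branch, k, padding=k // 2)
                    for k in (3, 5, 7)
                ])
                self.act = nn.ReLU()
                self.drop = nn.Dropout(p=0.5)  # overfitting control, as in the paper

            def forward(self, x):
                y = torch.cat([b(x) for b in self.branches], dim=1)
                return self.drop(self.act(y))

        x = torch.randn(2, 1, 32, 32)          # batch of single-band patches
        print(MultiScaleConv()(x).shape)       # torch.Size([2, 48, 32, 32])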

  9. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.

  10. Radial Structure Scaffolds Convolution Patterns of Developing Cerebral Cortex

    Directory of Open Access Journals (Sweden)

    Mir Jalil Razavi

    2017-08-01

Commonly-preserved radial convolution is a prominent characteristic of the mammalian cerebral cortex. Endeavors from multiple disciplines have been devoted for decades to explore the causes of this enigmatic structure. However, the underlying mechanisms that lead to consistent cortical convolution patterns still remain poorly understood. In this work, inspired by prior studies, we propose and evaluate a plausible theory that radial convolution during the early development of the brain is sculptured by radial structures consisting of radial glial cells (RGCs) and maturing axons. Specifically, the regionally heterogeneous development and distribution of RGCs controlled by Trnp1 regulate the convex and concave convolution patterns (gyri and sulci) in the radial direction, while the interplay of RGCs' effects on convolution and axons regulates the convex (gyral) convolution patterns. This theory is assessed against observations and measurements in the literature from multiple disciplines, such as neurobiology, genetics and biomechanics, at multiple scales. In particular, the theory is further validated by multimodal imaging data analysis and computational simulations in this study. We offer a versatile and descriptive study model that can provide reasonable explanations of observations, experiments, and simulations of the characteristic mammalian cortical folding.

  11. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.

  12. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

This paper presents error minimization in an OFDM system. Conventional systems usually employ channel coding such as a BCH code or a convolutional code, but the performance of these codes in an OFDM system implementation is poor. The error bit rate of the OFDM system without channel coding is 5.77%, and a convolutional code with code rate 1/2 reduces the error bits only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters, decreasing the error bits to 0%. The performance gain of the Modified Convolutional Code is about 1 dB at a BER of 10-4 over the BCH code and the convolutional code, so the Modified Convolutional Code outperforms both. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP
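
    The abstract does not specify the "Modified Convolutional Code" itself, so as a reference point the sketch below implements the standard rate-1/2, constraint-length-3 encoder (generators 7 and 5 octal) that such systems typically start from; the message bits are invented.

        def conv_encode(bits, g1=0b111, g2=0b101, k=3):
            """Rate-1/2 convolutional encoder: two parity bits per input bit."""
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)   # shift in newest bit
                out.append(bin(state & g1).count("1") % 2)    # parity, generator 1
                out.append(bin(state & g2).count("1") % 2)    # parity, generator 2
            return out

        msg = [1, 0, 1, 1, 0, 0]          # tail bits would flush the encoder
        print(conv_encode(msg))           # two coded bits per message bit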

  13. Motion-encoded dose calculation through fluence/sinogram modification

    International Nuclear Information System (INIS)

    Lu, Weiguo; Olivera, Gustavo H.; Mackie, Thomas R.

    2005-01-01

Conventional radiotherapy treatment planning systems rely on a static computed tomography (CT) image for planning and evaluation. Intra/inter-fraction patient motions may result in significant differences between the planned and the delivered dose. In this paper, we develop a method to incorporate knowledge of intra/inter-fraction patient motion directly into the dose calculation. By decomposing the motion into a component parallel to the beam direction and a component perpendicular to it, we show that the motion effects can be accounted for by simply modifying the fluence distribution (sinogram). After such modification, dose calculation is the same as that based on a static planning image. This method is superior to the 'dose-convolution' method because it is not based on the 'shift-invariant' assumption; therefore, it deals with material heterogeneity and surface curvature very well. We test our method using extensive simulations, which include four phantoms, four motion patterns, and three plan beams. We compare our method with the 'dose-convolution' and the 'stochastic simulation' methods (the gold standard). For the homogeneous flat-surface phantom, our method has accuracy similar to the 'dose-convolution' method; for all other phantoms, our method outperforms 'dose-convolution'. The maximum motion-encoded dose calculation error using our method is within 4% of the gold standard. It is shown that a treatment planning system based on motion-encoded dose calculation can incorporate random and systematic motion errors in a very simple fashion. Under this approximation, in principle, a planning target volume definition is not required, since the method already accounts for intra/inter-fraction motion variations and automatically optimizes the cumulative dose rather than the single-fraction dose.
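
    For contrast, the baseline 'dose-convolution' method that the paper improves upon can be sketched in a few lines of numpy: blur the static dose with the motion probability density. The field size and motion spread below are invented, and the shift-invariance assumption baked into this sketch is exactly what the fluence-modification method avoids.

        import numpy as np

        x = np.linspace(-30, 30, 601)                     # mm, 0.1 mm grid
        static_dose = (np.abs(x) < 10).astype(float)      # idealized 20 mm field

        sigma = 3.0                                       # mm, random motion spread
        pdf = np.exp(-x**2 / (2 * sigma**2))
        pdf /= pdf.sum()                                  # discrete motion PDF

        motion_dose = np.convolve(static_dose, pdf, mode="same")
        print(static_dose.sum() * 0.1, motion_dose.sum() * 0.1)  # integral preserved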

  14. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

Convolutional neural networks have shown great promise in both general image segmentation problems and bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  15. The general use of the time-temperature-pressure superposition principle

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

This note is a supplement to Dynamics of Polymeric Liquids (DPL), section 3.6(a). DPL concerns only material functions and only the effect of temperature on them. This note is a short introduction to the general use of the time-temperature-pressure superposition principle.

  16. Improved superposition schemes for approximate multi-caloron configurations

    International Nuclear Information System (INIS)

    Gerhold, P.; Ilgenfritz, E.-M.; Mueller-Preussker, M.

    2007-01-01

    Two improved superposition schemes for the construction of approximate multi-caloron-anti-caloron configurations, using exact single (anti-)caloron gauge fields as underlying building blocks, are introduced in this paper. The first improvement deals with possible monopole-Dirac string interactions between different calorons with non-trivial holonomy. The second one, based on the ADHM formalism, improves the (anti-)selfduality in the case of small caloron separations. It conforms with Shuryak's well-known ratio-ansatz when applied to instantons. Both superposition techniques provide a higher degree of (anti-)selfduality than the widely used sum-ansatz, which simply adds the (anti)caloron vector potentials in an appropriate gauge. Furthermore, the improved configurations (when discretized onto a lattice) are characterized by a higher stability when they are exposed to lattice cooling techniques

  17. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  18. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng; Zhou, Xiaofeng; Gu, Aihua; Li, Zonghua; Liang, Ru-Ze

    2016-01-01

, named Pos@Top. Our proposed classification model has a convolutional structure composed of four layers, i.e., the convolutional layer, the activation layer, the max-pooling layer and the fully connected layer. In this paper, we propose

  19. A multidimensional superposition principle and wave switching in integrable and nonintegrable soliton models

    Energy Technology Data Exchange (ETDEWEB)

    Alexeyev, Alexander A [Laboratory of Computer Physics and Mathematical Simulation, Research Division, Room 247, Faculty of Phys.-Math. and Natural Sciences, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya street, Moscow 117198 (Russian Federation) and Department of Mathematics 1, Faculty of Cybernetics, Moscow State Institute of Radio Engineering, Electronics and Automatics, 78 Vernadskogo Avenue, Moscow 117454 (Russian Federation)

    2004-11-26

In the framework of a multidimensional superposition principle, a series of computer experiments with integrable and nonintegrable models is carried out with the goal of verifying the existence of a switching effect and superposition in soliton-perturbation interactions for a wide class of nonlinear PDEs. (letter to the editor)

  20. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  1. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    DEFF Research Database (Denmark)

    Hoff, Ulrich Busk; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas Schou

    2016-01-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction...

  2. Convolutive ICA for Spatio-Temporal Analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2007-01-01

    in the convolutive model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving an EEG ICA subspace. Initial results suggest that in some cases convolutive mixing may be a more realistic model for EEG signals than the instantaneous ICA model....

  3. Generation of picosecond pulsed coherent state superpositions

    DEFF Research Database (Denmark)

    Dong, Ruifang; Tipsmark, Anders; Laghaout, Amine

    2014-01-01

We present the generation of approximated coherent state superpositions, referred to as Schrödinger cat states, by the process of subtracting single photons from picosecond pulsed squeezed states of light. The squeezed vacuum states are produced by spontaneous parametric down-conversion (SPDC... which exhibit non-Gaussian behavior. (C) 2014 Optical Society of America...

  4. CMOS Compressed Imaging by Random Convolution

    OpenAIRE

    Jacques, Laurent; Vandergheynst, Pierre; Bibet, Alexandre; Majidzadeh, Vahid; Schmid, Alexandre; Leblebici, Yusuf

    2009-01-01

    We present a CMOS imager with built-in capability to perform Compressed Sensing. The adopted sensing strategy is the random Convolution due to J. Romberg. It is achieved by a shift register set in a pseudo-random configuration. It acts as a convolutive filter on the imager focal plane, the current issued from each CMOS pixel undergoing a pseudo-random redirection controlled by each component of the filter sequence. A pseudo-random triggering of the ADC reading is finally applied to comp...

  5. Towards dropout training for convolutional neural networks.

    Science.gov (United States)

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
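
    A numpy sketch of one reading of the proposed test-time pooling: each activation in a pooling region is weighted by the probability that it would be the surviving maximum under max-pooling dropout. The weight formula follows the multinomial picture in the abstract; the names and values are ours, not the authors' code.

        import numpy as np

        def prob_weighted_pool(region, retain_p=0.5):
            a = np.sort(np.asarray(region, dtype=float))   # ascending
            n = len(a)
            drop_q = 1.0 - retain_p
            # a[i] is the output iff all larger units are dropped and a[i] survives
            weights = retain_p * drop_q ** (n - 1 - np.arange(n))
            return np.dot(weights, a)          # remaining q**n mass maps to output 0

        region = [0.2, 1.5, 0.7, 0.9]
        print(max(region), prob_weighted_pool(region, retain_p=0.5))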

  6. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    Science.gov (United States)

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of the neural network results in a high average accuracy of 92% for the classification. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole-slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.

  7. The principle of superposition in human prehension.

    Science.gov (United States)

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.

  8. A scatter model for fast neutron beams using convolution of diffusion kernels

    International Nuclear Information System (INIS)

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
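
    The convolution step itself is compact. Below is a toy numpy sketch of the paper's approach, dose = primary attenuation distribution convolved with a diffusion kernel via FFT, where a Gaussian stands in for the Monte Carlo-derived kernel and the beam/attenuation parameters are invented.

        import numpy as np

        nx, nz = 64, 64
        mu = 0.05                                   # 1/cm, toy attenuation coefficient
        z = np.arange(nz) * 0.5                     # depth grid, cm
        primary = np.zeros((nx, nz))
        primary[24:40, :] = np.exp(-mu * z)         # first-interaction sites in field

        xx, zz = np.meshgrid(np.arange(nx) - nx // 2,
                             np.arange(nz) - nz // 2, indexing="ij")
        kernel = np.exp(-(xx**2 + zz**2) / (2 * 3.0**2))
        kernel /= kernel.sum()                      # energy-conserving spread kernel

        dose = np.real(np.fft.ifft2(np.fft.fft2(primary)
                                    * np.fft.fft2(np.fft.ifftshift(kernel))))
        print(dose.shape, dose.max())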

  9. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau; Shihada, Basem; Pin-Han Ho

    2013-01-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However

  10. Gradient Flow Convolutive Blind Source Separation

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Nielsen, Chinton Møller

    2004-01-01

Experiments have shown that the performance of instantaneous gradient flow beamforming by Cauwenberghs et al. is reduced significantly in reverberant conditions. By expanding the gradient flow principle to convolutive mixtures, separation in a reverberant environment is possible. By use of a circular four-microphone array with a radius of 5 mm, and applying convolutive gradient flow instead of just instantaneous gradient flow, experimental results show that an improvement of up to around 14 dB can be achieved for simulated impulse responses and up to around 10 dB for a hearing aid...

  11. On the Reduction of Computational Complexity of Deep Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Partha Maji

    2018-04-01

Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy.
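
    As a flavor of the Toom–Cook/Winograd family referenced above, the sketch below computes F(2,3), that is, two outputs of a 3-tap sliding dot product with 4 multiplies instead of 6, using the standard minimal-filtering matrices. It illustrates the technique, not the paper's implementation.

        import numpy as np

        BT = np.array([[1, 0, -1, 0],
                       [0, 1,  1, 0],
                       [0, -1, 1, 0],
                       [0, 1,  0, -1]], dtype=float)
        G  = np.array([[1.0, 0.0, 0.0],
                       [0.5, 0.5, 0.5],
                       [0.5, -0.5, 0.5],
                       [0.0, 0.0, 1.0]])
        AT = np.array([[1, 1, 1, 0],
                       [0, 1, -1, -1]], dtype=float)

        def winograd_f23(d, g):
            """d: 4 input samples, g: 3 taps -> 2 sliding dot-product outputs."""
            return AT @ ((G @ g) * (BT @ d))       # only 4 elementwise multiplies

        d = np.array([1.0, 2.0, 3.0, 4.0])
        g = np.array([0.5, 1.0, -1.0])
        direct = np.array([d[0:3] @ g, d[1:4] @ g])
        print(winograd_f23(d, g), direct)          # identical results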

  12. Coherent inflation for large quantum superpositions of levitated microspheres

    Science.gov (United States)

    Romero-Isart, Oriol

    2017-12-01

    We show that coherent inflation (CI), namely quantum dynamics generated by inverted conservative potentials acting on the center of mass of a massive object, is an enabling tool to prepare large spatial quantum superpositions in a double-slit experiment. Combined with cryogenic, extreme high vacuum, and low-vibration environments, we argue that it is experimentally feasible to exploit CI to prepare the center of mass of a micrometer-sized object in a spatial quantum superposition comparable to its size. In such a hitherto unexplored parameter regime gravitationally-induced decoherence could be unambiguously falsified. We present a protocol to implement CI in a double-slit experiment by letting a levitated microsphere traverse a static potential landscape. Such a protocol could be experimentally implemented with an all-magnetic scheme using superconducting microspheres.

  13. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Cates, J; Drzymala, R [Washington Univ, Saint Louis, MO (United States)

    2015-06-15

Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7–12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement exists, and in what clinical and/or research cases this would be relevant.

  14. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    Science.gov (United States)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.

  15. Detecting atrial fibrillation by deep convolutional neural networks.

    Science.gov (United States)

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we propose a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to the STFT output and the SWT output were developed. Our new method does not require detection of P or R peaks, nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and is therefore a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
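
    The STFT front end can be sketched with scipy; the sampling rate, window parameters and toy signal below are our assumptions, not the paper's settings.

        import numpy as np
        from scipy.signal import stft

        fs = 300                                   # Hz, assumed ECG sampling rate
        t = np.arange(0, 5.0, 1.0 / fs)            # 5 s segment, as in the paper
        ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # toy signal

        f, seg_t, Z = stft(ecg, fs=fs, nperseg=128, noverlap=96)
        spectrogram = np.abs(Z)                    # 2-D matrix: frequency x time
        print(spectrogram.shape)                   # CNN input, e.g. (65, n_frames)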

  16. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    Science.gov (United States)

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.

  17. Maximum coherent superposition state achievement using a non-resonant pulse train in non-degenerate three-level atoms

    International Nuclear Information System (INIS)

    Deng, Li; Niu, Yueping; Jin, Luling; Gong, Shangqing

    2010-01-01

    The coherent superposition state of the lower two levels in non-degenerate three-level Λ atoms is investigated using the accumulative effects of non-resonant pulse trains when the repetition period is smaller than the decay time of the upper level. First, using a rectangular pulse train, the accumulative effects are re-examined in the non-resonant two-level atoms and the modified constructive accumulation equation is analytically given. The equation shows that the relative phase and the repetition period are important in the accumulative effect. Next, under the modified equation in the non-degenerate three-level Λ atoms, we show that besides the constructive accumulation effect, the use of the partial constructive accumulation effect can also achieve the steady state of the maximum coherent superposition state of the lower two levels and the latter condition is relatively easier to manipulate. The analysis is verified by numerical calculations. The influence of the external levels in such a case is also considered and we find that it can be avoided effectively. The above analysis is also applicable to pulse trains with arbitrary envelopes.

  18. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.

  19. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC / D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation.
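
    A minimal numpy sketch of the ratio method described above, under the assumption that per-beam SC and PB dose grids are available as arrays; the epsilon guard for near-zero PB voxels is an added assumption, not part of the paper.

    import numpy as np

    def dose_ratio(d_sc, d_pb, eps=1e-6):
        """Voxel-by-voxel R = D_SC / D_PB, guarded against near-zero PB dose."""
        return np.where(d_pb > eps, d_sc / np.maximum(d_pb, eps), 1.0)

    # Occasionally: one expensive SC calculation per beam to refresh R.
    d_sc = np.random.rand(32, 32, 32)        # stand-in SC dose for one beam
    d_pb = np.random.rand(32, 32, 32) + 0.1  # stand-in PB dose for the same beam
    R = dose_ratio(d_sc, d_pb)

    # Every optimization iteration: cheap PB dose, corrected toward SC accuracy.
    d_used = d_pb * R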

  20. On the Fresnel sine integral and the convolution

    Directory of Open Access Journals (Sweden)

    Adem Kılıçman

    2003-01-01

    Full Text Available The Fresnel sine integral S(x), the Fresnel cosine integral C(x), and the associated functions S+(x), S−(x), C+(x), and C−(x) are defined as locally summable functions on the real line. Some convolutions and neutrix convolutions of the Fresnel sine integral and its associated functions with x_+^r and x^r are evaluated.

  1. Logarithmic superposition of force response with rapid length changes in relaxed porcine airway smooth muscle.

    Science.gov (United States)

    Ijpma, G; Al-Jumaily, A M; Cairns, S P; Sieck, G C

    2010-12-01

    We present a systematic quantitative analysis of power-law force relaxation and investigate logarithmic superposition of force response in relaxed porcine airway smooth muscle (ASM) strips in vitro. The term logarithmic superposition describes linear superposition on a logarithmic scale, which is equivalent to multiplication on a linear scale. Additionally, we examine whether the dynamic response of contracted and relaxed muscles is dominated by cross-bridge cycling or passive dynamics. The study shows the following main findings. For relaxed ASM, the force response to length steps of varying amplitude (0.25-4% of reference length, both lengthening and shortening) is well fitted with power-law functions over several decades of time (10⁻² to 10³ s), and the force response after consecutive length changes is more accurately fitted assuming logarithmic superposition rather than linear superposition. Furthermore, for sinusoidal length oscillations in contracted and relaxed muscles, increasing the oscillation amplitude induces greater hysteresivity and asymmetry of force-length relationships, whereas increasing the frequency dampens hysteresivity but increases asymmetry. We conclude that logarithmic superposition is an important feature of relaxed ASM, which may facilitate a more accurate prediction of force responses in the continuous dynamic environment of the respiratory system. In addition, the single power-function response to length changes shows that the dynamics of cross-bridge cycling can be ignored in relaxed muscle. The similarity in response between relaxed and contracted states implies that the investigated passive dynamics play an important role in both states and should be taken into account.

  2. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolution neural network framework, this paper removes the constraints of the original convolution neural network framework, which requires large numbers of training samples, all of the same size. The input images are shifted and cropped to generate sub-graphs of the same size, and dropout is applied to the generated sub-graphs, increasing the diversity of samples and preventing overfitting. Proper subsets of the sub-graph set are then selected at random, such that every subset contains the same number of elements but no two subsets are identical. These proper subsets are used as input layers for the convolution neural network. Through the convolution layer, the pooling layer, the fully connected layer and the output layer, the classification loss rates of the test and training sets are obtained. In a classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.

  3. Object Detection Based on Fast/Faster RCNN Employing Fully Convolutional Architectures

    Directory of Open Access Journals (Sweden)

    Yun Ren

    2018-01-01

    Full Text Available Modern object detectors comprise two major parts, a feature extractor and a feature classifier, just as traditional object detectors do. Deeper and wider convolutional architectures are now commonly adopted as the feature extractor. However, many notable object detection systems such as Fast/Faster RCNN use only simple fully connected layers as the feature classifier. In this paper, we argue that it is beneficial for detection performance to elaborately design deep convolutional networks (ConvNets) of various depths for feature classification, especially using fully convolutional architectures. In addition, this paper demonstrates how to employ fully convolutional architectures in Fast/Faster RCNN. Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers, and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.

  4. Study of dose calculation and beam parameters optimization with genetic algorithm in IMRT

    International Nuclear Information System (INIS)

    Chen Chaomin; Tang Mutao; Zhou Linghong; Lv Qingwen; Wang Zhuoyu; Chen Guangjie

    2006-01-01

    Objective: To study the construction of a dose calculation model and a method for automatic beam parameter selection in IMRT. Methods: A three-dimensional photon convolution dose calculation model was constructed using fast Fourier transform methods. An objective function based on dose constraints was used to evaluate the fitness of individuals, and the beam weights were optimized with a genetic algorithm. Results: After 100 iterations, the treatment planning system produced highly conformal and homogeneous dose distributions. Conclusion: The three-dimensional photon convolution dose calculation model gives more accurate results than conventional models; the genetic algorithm is valid and efficient for IMRT beam parameter optimization. (authors)
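
    A minimal sketch of an FFT-based convolution dose calculation of the kind the abstract describes: dose is modeled as the convolution of the energy released per voxel (TERMA) with a deposition kernel, evaluated via the convolution theorem. The Gaussian kernel, grid sizes and beam shape are illustrative assumptions, not the authors' model; the genetic-algorithm weight optimization is omitted.

    import numpy as np

    shape = (64, 64, 64)
    terma = np.zeros(shape)
    terma[24:40, 24:40, 8:56] = 1.0   # stand-in for energy released by one beam

    # Illustrative isotropic Gaussian deposition kernel, centered in the grid.
    z, y, x = np.indices(shape)
    c = np.array(shape) // 2
    kernel = np.exp(-((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2) / 20.0)
    kernel /= kernel.sum()

    # Convolution theorem: dose = IFFT( FFT(terma) * FFT(kernel) ),
    # with the kernel shifted so its center sits at the origin.
    dose = np.real(np.fft.ifftn(np.fft.fftn(terma)
                                * np.fft.fftn(np.fft.ifftshift(kernel))))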

  5. A Revised Piecewise Linear Recursive Convolution FDTD Method for Magnetized Plasmas

    International Nuclear Information System (INIS)

    Liu Song; Zhong Shuangying; Liu Shaobin

    2005-01-01

    The piecewise linear recursive convolution (PLRC) finite-difference time-domain (FDTD) method improves accuracy over the original recursive convolution (RC) FDTD approach and the current density convolution (JEC) approach, while retaining their advantages in speed and efficiency. This paper describes a revised piecewise linear recursive convolution PLRC-FDTD formulation for magnetized plasma which incorporates both anisotropy and frequency dispersion simultaneously, enabling the transient analysis of magnetized plasma media. The technique is illustrated by numerical simulations of the reflection and transmission coefficients through a magnetized plasma layer. The results show that the revised PLRC-FDTD method improves accuracy over the original RC-FDTD and JEC-FDTD methods.

  6. Some calculations of the failure statistics of coated fuel particles

    International Nuclear Information System (INIS)

    Martin, D.G.; Hobbs, J.E.

    1977-03-01

    Statistical variations of coated fuel particle parameters were considered in stress model calculations and the resulting particle failure fraction versus burn-up was evaluated. Variations in the following parameters were considered simultaneously: kernel diameter and porosity, and the thicknesses of the buffer, seal, silicon carbide and inner and outer pyrocarbon layers, all of which were assumed to be normally distributed, together with the silicon carbide fracture stress, which was assumed to follow a Weibull distribution. Two methods, based respectively on random sampling and on convolution of the variations, were employed and applied to particles manufactured by the Dragon Project and RFL Springfields. The convolution calculations proved the more satisfactory. In the present calculations, variations in the silicon carbide fracture stress caused the greatest spread in burn-up for a given change in failure fraction; kernel porosity is the next most important parameter. (author)

  7. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    Full Text Available In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes whose girth can be made arbitrarily large as a multiple of 8. We then derive a convolutional form for these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  8. Convolution of large 3D images on GPU and its decomposition

    Science.gov (United States)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem, and the algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
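
    A minimal CPU-side illustration, using SciPy rather than the authors' CUDA code, of why decomposing a large frequency-domain convolution helps: overlap-add convolution processes the volume in blocks, capping the FFT size and peak memory while matching the single-big-FFT result. Array sizes are illustrative assumptions.

    import numpy as np
    from scipy.signal import fftconvolve, oaconvolve

    image = np.random.rand(96, 96, 48)   # stand-in for a large 3D image
    psf = np.random.rand(9, 9, 9)        # convolution kernel

    one_big_fft = fftconvolve(image, psf, mode='same')   # single large FFT
    decomposed = oaconvolve(image, psf, mode='same')     # block-wise overlap-add

    print(np.allclose(one_big_fft, decomposed))          # True: same result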

  9. Modified Stieltjes Transform and Generalized Convolutions of Probability Distributions

    Directory of Open Access Journals (Sweden)

    Lev B. Klebanov

    2018-01-01

    Full Text Available The classical Stieltjes transform is modified in such a way as to generalize both Stieltjes and Fourier transforms. This transform allows the introduction of new classes of commutative and non-commutative generalized convolutions. A particular case of such a convolution for degenerate distributions appears to be the Wigner semicircle distribution.

  10. Linear Plasma Oscillation Described by Superposition of Normal Modes

    DEFF Research Database (Denmark)

    Pécseli, Hans

    1974-01-01

    The existence of steady‐state solutions to the linearized ion and electron Vlasov equation is demonstrated for longitudinal waves in an initially stable plasma. The evolution of an arbitrary initial perturbation can be described by superposition of these solutions. Some common approximations...

  11. Generating superpositions of higher-order Bessel beams [Journal article]

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-12-01

    Full Text Available The authors report the first experimental generation of the superposition of higher-order Bessel beams, by means of a spatial light modulator (SLM) and a ring slit aperture. They present illuminating a ring slit aperture with light which has...

  12. Spectral properties of superpositions of Ornstein-Uhlenbeck type processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.

    2005-01-01

    Stationary processes with prescribed one-dimensional marginal laws and long-range dependence are constructed. The asymptotic properties of the spectral densities are studied. The possibility of Mittag-Leffler decay in the autocorrelation function of superpositions of Ornstein-Uhlenbeck type processes is proved.

  13. Prediction of Electricity Usage Using Convolutional Neural Networks

    OpenAIRE

    Hansen, Martin

    2017-01-01

    Master's thesis, Information and Communication Technology IKT590 - University of Agder 2017. Convolutional Neural Networks are overwhelmingly accurate when attempting to predict numbers using the famous MNIST dataset. In this paper, we attempt to transcend these results for time-series forecasting, and compare them with several regression models. The Convolutional Neural Network model predicted the same value through the entire time lapse in contrast with the other ...

  14. Single image super-resolution based on convolutional neural networks

    Science.gov (United States)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, with kernel sizes of 5×5, 3×3 and 1×1. In the proposed network, we use residual learning and combine convolution kernels of different sizes at the same layer. The experimental results show that our proposed method outperforms existing methods on benchmark images, both in reconstruction quality metrics and in human visual assessment.

  15. Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior

    Science.gov (United States)

    Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran

    2018-03-01

    Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of two thermal component states for amplitude decay can be viewed as the same integral form as a displaced thermal state ρ(V , d) , but governed by the combined action of photon loss and thermal noise. In addition, the larger values of the displacement d and noise V lead to faster decoherence for thermal-state superpositions.

  16. The impact of dose calculation algorithms on partial and whole breast radiation treatment plans

    International Nuclear Information System (INIS)

    Basran, Parminder S; Zavgorodni, Sergei; Berrang, Tanya; Olivotto, Ivo A; Beckham, Wayne

    2010-01-01

    This paper compares the calculated dose to target and normal tissues when using pencil beam (PBC), superposition/convolution (AAA) and Monte Carlo (MC) algorithms for whole breast (WBI) and accelerated partial breast irradiation (APBI) treatment plans. Plans for 10 patients who met all dosimetry constraints on a prospective APBI protocol when using PBC calculations were recomputed with AAA and MC, keeping the monitor units and beam angles fixed. Similar calculations were performed for WBI plans on the same patients. Doses to target and normal tissue volumes were tested for significance using the paired Student's t-test. For WBI plans the average dose to target volumes when using PBC calculations was not significantly different from AAA calculations, the average PBC dose to the ipsilateral breast was 10.5% higher than the AAA calculations, and the average MC dose to the ipsilateral breast was 11.8% lower than the PBC calculations. For APBI plans there were no differences in dose to the planning target volume, ipsilateral breast, heart, ipsilateral lung, or contra-lateral lung. Although not significant, the maximum PBC dose to the contra-lateral breast was 1.9% higher than AAA and the PBC dose to the clinical target volume was 2.1% higher than AAA. When the WBI technique is switched to APBI, there was a significant reduction in dose to the ipsilateral breast when using PBC, a significant reduction in dose to the ipsilateral lung when using AAA, and a significant reduction in dose to the ipsilateral breast and lung and contra-lateral lung when using MC. There is very good agreement between PBC, AAA and MC for all target and most normal tissues when treating with APBI and WBI, and most of the differences in doses to target and normal tissues are not clinically significant. However, a commonly used dosimetry constraint, as recommended by the ASTRO consensus document for APBI, that no point in the contra-lateral breast volume should receive >3% of the prescribed dose, needs to be interpreted with the choice of dose calculation algorithm in mind.

  17. Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2007-01-01

    We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may ...

  18. Phylogenetic convolutional neural networks in metagenomics.

    Science.gov (United States)

    Fioravanti, Diego; Giarratano, Ylenia; Maggio, Valerio; Agostinelli, Claudio; Chierici, Marco; Jurman, Giuseppe; Furlanello, Cesare

    2018-03-08

    Convolutional Neural Networks can be effectively used only when data are endowed with an intrinsic concept of neighbourhood in the input space, as is the case of pixels in images. We introduce here Ph-CNN, a novel deep learning architecture for the classification of metagenomics data based on Convolutional Neural Networks, with the patristic distance defined on the phylogenetic tree used as the proximity measure. The patristic distance between variables is used together with a sparsified version of MultiDimensional Scaling to embed the phylogenetic tree in a Euclidean space. Ph-CNN is tested with a domain adaptation approach on synthetic data and on a metagenomics collection of gut microbiota of 38 healthy subjects and 222 Inflammatory Bowel Disease patients, divided into 6 subclasses. Classification performance is promising when compared to classical algorithms such as Support Vector Machines and Random Forest, and to a baseline fully connected neural network, e.g. the Multi-Layer Perceptron. Ph-CNN represents a novel deep learning approach for the classification of metagenomics data. In practice, the algorithm has been implemented as a custom Keras layer that passes to the following convolutional layer not only the data but also the ranked neighbourhood list of each sample, thus mimicking the case of image data, transparently to the user.
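
    A minimal sketch of the embedding step described above, with a synthetic distance matrix standing in for real patristic distances: metric MDS with precomputed dissimilarities embeds the taxa in a Euclidean space, and ranked neighbour lists are derived from the embedding. Matrix size and dimensionality are illustrative assumptions.

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    n_taxa = 30
    D = rng.random((n_taxa, n_taxa))
    D = (D + D.T) / 2.0              # symmetrize the stand-in distances
    np.fill_diagonal(D, 0.0)         # zero self-distance

    # Embed the (stand-in) patristic distances into Euclidean space.
    emb = MDS(n_components=2, dissimilarity='precomputed',
              random_state=0).fit_transform(D)

    # Ranked neighbour lists: the extra input a phylogeny-aware convolution
    # layer would consume alongside the abundance data.
    pairwise = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(axis=-1)
    neighbours = np.argsort(pairwise, axis=1)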

  19. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  20. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  1. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in the use of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting shape information suitable for hierarchical feature learning for image analysis and classification. Furthermore, the results showed an outstanding performance of Zernike-moment-based kernels in terms of computation time and classification accuracy.
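
    A minimal sketch of what "kernels derived from Zernike moments" can mean in practice: the Zernike radial polynomial R_nm is sampled on the unit disc to produce structured initial 5×5 kernels instead of random ones. The chosen orders (n, m) and the kernel size are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial polynomial R_nm(rho); requires n - |m| even and non-negative."""
        m = abs(m)
        out = np.zeros_like(rho)
        for k in range((n - m) // 2 + 1):
            out += ((-1)**k * factorial(n - k)
                    / (factorial(k) * factorial((n + m) // 2 - k)
                       * factorial((n - m) // 2 - k))) * rho**(n - 2 * k)
        return out

    def zernike_kernel(n, m, size=5):
        ax = np.linspace(-1.0, 1.0, size)
        xx, yy = np.meshgrid(ax, ax)
        rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
        kern = zernike_radial(n, m, rho) * np.cos(m * theta)
        kern[rho > 1.0] = 0.0   # Zernike polynomials live on the unit disc
        return kern

    # A small bank of structured initial kernels for the first conv layer.
    init_kernels = np.stack([zernike_kernel(n, m)
                             for n, m in [(2, 0), (2, 2), (3, 1), (4, 0)]])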

  2. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
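
    A minimal sketch of the operator being discussed, before any source coding: a direct space-varying blur in which every output pixel has its own kernel (here a Gaussian whose width grows slowly across the field, standing in for a measured stray-light PSF). Its per-pixel cost is exactly what motivates approximately factoring the dense operator into sparse transforms; the kernel model and sizes are illustrative assumptions.

    import numpy as np

    def space_varying_blur(img, max_sigma=3.0, radius=4):
        """Direct space-varying convolution: one kernel per output pixel."""
        h, w = img.shape
        out = np.zeros_like(img)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        for i in range(h):
            for j in range(w):
                sigma = 0.5 + max_sigma * j / w      # kernel varies with position
                k = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
                k /= k.sum()
                # clip kernel and patch at the image borders
                i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
                j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
                kk = k[i0 - (i - radius):k.shape[0] - ((i + radius + 1) - i1),
                       j0 - (j - radius):k.shape[1] - ((j + radius + 1) - j1)]
                out[i, j] = (img[i0:i1, j0:j1] * kk).sum()
        return out

    blurred = space_varying_blur(np.random.rand(32, 32))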

  3. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  4. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  5. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  6. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    Science.gov (United States)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-05-01

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911], about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have “on/off” states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This is a similar situation to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.
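
    A minimal toy sketch of the core readout idea, with independent random ±1 sequences standing in for the orthogonal reference noises of the paper (real noise-based logic uses hardware noise processes): a superposition is the sum of the switched-on basis noises in a single wire, and each component is recovered by time-averaged correlation against its reference. Sequence length and threshold are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    N, L = 8, 100_000
    basis = rng.choice([-1.0, 1.0], size=(N, L))   # reference noises, ~orthogonal

    on_bits = [0, 3, 5]                            # logic values switched "on"
    wire = basis[on_bits].sum(axis=0)              # superposition in one wire

    # Readout: correlate the wire against each reference noise.
    correl = basis @ wire / L                      # ~1 for "on", ~0 for "off"
    decoded = [i for i, c in enumerate(correl) if c > 0.5]
    print(decoded)                                 # [0, 3, 5] with high probability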

  7. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University which contains a high degree of variability in expression, pose, and facial details.

  8. Development and application of deep convolutional neural network in target detection

    Science.gov (United States)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with many hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some open problems in current research, and finally offers an outlook on the future development of deep convolutional neural networks.

  9. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  10. Traffic sign recognition with deep convolutional neural networks

    OpenAIRE

    Karamatić, Boris

    2016-01-01

    The problem of detection and recognition of traffic signs is becoming an important problem when it comes to the development of self driving cars and advanced driver assistance systems. In this thesis we will develop a system for detection and recognition of traffic signs. For the problem of detection we will use aggregate channel features and for the problem of recognition we will use a deep convolutional neural network. We will describe how convolutional neural networks work, how they are co...

  11. Analysis of magnetic damping problem by the coupled mode superposition method

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Niho, Tomoya

    1997-01-01

    In this paper we describe the coupled mode superposition method for the magnetic damping problem, which is produced by the coupled effect between the deformation and the induced eddy current of the structures for future fusion reactors and magnetically levitated vehicles. The formulation of the coupled mode superposition method is based on the matrix equation for the eddy current and the structure using the coupled mode vectors. Symmetric form of the coupled matrix equation is obtained. Coupled problems of a thin plate are solved to verify the formulation and the computer code. These problems are solved efficiently by this method using only a few coupled modes. Consideration of the coupled mode vectors shows that the coupled effects are included completely in each coupled mode. (author)

  12. Superposition as a logical glue

    Directory of Open Access Journals (Sweden)

    Andrea Asperti

    2011-03-01

    Full Text Available The typical mathematical language systematically exploits notational and logical abuses whose resolution requires not just knowledge of domain-specific notation and conventions, but also non-trivial skills in the given mathematical discipline. A large part of this background knowledge is expressed in the form of equalities and isomorphisms, allowing mathematicians to move freely between different incarnations of the same entity without even mentioning the transformation. Providing ITP systems with similar capabilities seems to be a major way to improve their intelligence and to ease the communication between the user and the machine. The present paper discusses our experience of integrating a superposition calculus within the Matita interactive prover, providing in particular a very flexible, "smart" application tactic, and a simple, innovative approach to automation.

  13. Glue detection based on teaching points constraint and tracking model of pixel convolution

    Science.gov (United States)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the inside front cover of the car reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from shadow stripes. Teaching points are utilized to calculate the slope between adjacent points. A tracking model based on pixel convolution along the motion direction is then designed to segment several local rectangular regions by distance, where the distance is the height of the rectangular region. Pixel convolution along the motion direction is used to extract glue edges in each local rectangular region. A dataset of stripes with different illumination and shape complexity, comprising 500 thousand images captured from the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of glue accurately, and that shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.

  14. Research of convolutional neural networks for traffic sign recognition

    OpenAIRE

    Stadalnikas, Kasparas

    2017-01-01

    In this thesis the application of convolutional neural networks to traffic sign recognition is analyzed. The thesis describes the basic operations and techniques that are commonly applied in image classification using convolutional neural networks. It also describes the datasets used for traffic sign recognition and the problems in them that affect the final training results. The paper reviews the most popular existing technologies – frameworks for developing the solution for traffic sign recogni...

  15. Analysis of dose heterogeneity in adjuvant radiotherapy after surgical treatment of breast cancer cases; Analise da heterogeneidade de dose em radioterapia adjuvante apos tratamento cirurgico de casos de cancer de mama

    Energy Technology Data Exchange (ETDEWEB)

    Grechi, Bruna E.; Schwarz, Ana Paula, E-mail: anapaulaschwarz@yahoo.com.br [Centro Universitario Franciscano (UNIFRA), Santa Maria, RS (Brazil); Teston, Adriano; Rodrigues, Joanilso S. [Clinica de Radioterapia Santa Maria, Santa Maria, RS (Brazil)

    2013-12-15

    Since radiotherapy planning systems treat all body structures as having the same density (d = 1 g/cm³), variations in electron density within the irradiated area, as in patients who undergo breast reconstruction and use tissue expanders, may influence the dose distribution during treatment and may produce heterogeneities, not otherwise accounted for, that change the actual dose distribution in healthy tissues or in the target volume to be irradiated. By calculating dose distributions with the algorithms of the XiO® planning system (Fast Fourier Transform, Convolution, Superposition, Fast Superposition and Clarkson), with heterogeneity correction between tissues of different densities, the percentage increase in dose in the structures of interest was obtained, as well as the dose absorbed by healthy organs adjacent to the target volume. (author)

  16. AFG-MONSU. A program for calculating axial heterogeneities in cylindrical pin cells

    International Nuclear Information System (INIS)

    Neltrup, H.; Kirkegaard, P.

    1978-08-01

    The AFG-MONSU program complex is designed to calculate the flux in cylindrical fuel pin cells into which heterogeneities are introduced in a regular array. The theory - integral transport theory combined with Monte Carlo by means of a superposition principle - is described in some detail. A detailed derivation of the superposition principle, as well as the formulas used in the DIT (Discrete Integral Transport) method, is given in the appendices, along with a description of the input structure of the AFG-MONSU program complex. (author)

  17. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    Science.gov (United States)

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109
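
    A minimal numeric sketch of the Winograd minimal filtering idea behind WMFA, in its simplest 1-D form F(2,3): two outputs of a 3-tap correlation from 4 elementwise multiplications instead of 6. The 2-D and 3-D layers discussed above tile nested versions of this same transform over the input volume; the matrices below are the standard F(2,3) transforms, not code from the paper.

    import numpy as np

    BT = np.array([[1, 0, -1, 0],
                   [0, 1,  1, 0],
                   [0, -1, 1, 0],
                   [0, 1,  0, -1]], float)   # input transform
    G  = np.array([[1.0, 0.0, 0.0],
                   [0.5, 0.5, 0.5],
                   [0.5, -0.5, 0.5],
                   [0.0, 0.0, 1.0]])          # filter transform
    AT = np.array([[1, 1,  1,  0],
                   [0, 1, -1, -1]], float)    # output transform

    d = np.random.rand(4)   # input tile (4 samples)
    g = np.random.rand(3)   # 3-tap filter

    winograd = AT @ ((G @ g) * (BT @ d))         # 4 elementwise multiplications
    direct = np.array([g @ d[0:3], g @ d[1:4]])  # plain correlation: 6 multiplies
    print(np.allclose(winograd, direct))         # True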

  18. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    Science.gov (United States)

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.

  19. A MacWilliams Identity for Convolutional Codes: The General Case

    OpenAIRE

    Gluesing-Luerssen, Heide; Schneider, Gert

    2008-01-01

    A MacWilliams Identity for convolutional codes will be established. It makes use of the weight adjacency matrices of the code and its dual, based on state space realizations (the controller canonical form) of the codes in question. The MacWilliams Identity applies to various notions of duality appearing in the literature on convolutional coding theory.

  20. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Since of the scale and the various shapes of down in the image, it is difficult for traditional image recognition method to correctly recognize the type of down image and get the required recognition accuracy, even for the Traditional Convolutional Neural Network (TCNN. To deal with the above problems, a Deep Convolutional Neural Network (DCNN for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image were cut from the image using the visual saliency model. Then, these salient regions of the image were used to train a sparse autoencoder and get a collection of convolutional filters, which accord with the statistical characteristics of dataset. At last, a DCNN with Inception module and its variants was constructed. To improve the recognition accuracy, the depth of the network is deepened. The experiment results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to TCNN, when recognizing the down in the images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  1. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  2. DCMDN: Deep Convolutional Mixture Density Network

    Science.gov (United States)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.

  3. A New Reverberator Based on Variable Sparsity Convolution

    DEFF Research Database (Denmark)

    Holm-Rasmussen, Bo; Lehtonen, Heidi-Maria; Välimäki, Vesa

    2013-01-01

    FIR filter coefficients are selected from a velvet noise sequence, which consists of ones, minus ones, and zeros only. In this application, it is perceptually sufficient to use very sparse velvet noise sequences having only about 0.1 to 0.2% non-zero elements, with increasing sparsity along the impulse response. The algorithm yields a parametric approximation of the late part of the impulse response, which is more than 100 times more efficient computationally than direct convolution. The computational load of the proposed algorithm is comparable to that of FFT-based partitioned convolution...
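
    A minimal sketch of generating such a sparse velvet-noise sequence: one ±1 impulse at a random offset inside each grid period, zeros elsewhere, so convolving with it needs only a handful of additions and subtractions per sample. The density and length are illustrative assumptions chosen to land near the 0.1-0.2% sparsity quoted above.

    import numpy as np

    def velvet_noise(length, fs=44_100, density=60, seed=0):
        """Sparse velvet noise: `density` impulses per second, values in {-1, +1}."""
        rng = np.random.default_rng(seed)
        seq = np.zeros(length)
        period = fs / density                    # average grid spacing in samples
        for m in range(int(length / period)):
            pos = int(m * period + rng.random() * (period - 1))
            seq[min(pos, length - 1)] = rng.choice([-1.0, 1.0])
        return seq

    vn = velvet_noise(44_100)                    # one second of velvet noise
    print(np.count_nonzero(vn) / vn.size)        # ~0.0014, i.e. ~0.14% non-zero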

  4. Spacings and pair correlations for finite Bernoulli convolutions

    International Nuclear Information System (INIS)

    Benjamini, Itai; Solomyak, Boris

    2009-01-01

    We consider finite Bernoulli convolutions with a parameter 1/2 < λ < 1, viewed at stage N as sequences of 2^N points. These sequences are uniformly distributed with respect to the infinite Bernoulli convolution measure ν_λ, as N → ∞. Numerical evidence suggests that for a generic λ, the distribution of spacings between appropriately rescaled points is Poissonian. We obtain some partial results in this direction; for instance, we show that, on average, the pair correlations do not exhibit attraction or repulsion in the limit. On the other hand, for certain algebraic λ the behaviour is totally different
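
    A minimal sketch of the finite objects under study, assuming they are the 2^N signed sums Σ ±λ^k (the natural reading of the garbled parameter line above): compute the points, rescale the nearest-neighbour spacings to unit mean, and histogram them for comparison with the exponential law expected of Poissonian spacings.

    import numpy as np
    from itertools import product

    lam, N = 0.7, 14
    points = np.sort([sum(s * lam**k for k, s in enumerate(signs))
                      for signs in product((-1.0, 1.0), repeat=N)])

    spacings = np.diff(points)
    rescaled = spacings / spacings.mean()         # unit-mean spacings
    hist, edges = np.histogram(rescaled, bins=50, density=True)
    # For Poissonian behaviour, hist should track exp(-x).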

  5. Efficient and Invariant Convolutional Neural Networks for Dense Prediction

    OpenAIRE

    Gao, Hongyang; Ji, Shuiwang

    2017-01-01

    Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods...

  6. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    Science.gov (United States)

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image detail, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multi-scale context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods.
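
    A minimal PyTorch sketch of "competition among multi-scale convolutional filters" as described above: parallel 3×3, 5×5 and 7×7 branches fused by an elementwise maximum, so the best-matching scale wins at each position. The channel counts and kernel sizes are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class MultiScaleCompetitive(nn.Module):
        def __init__(self, in_ch=1, out_ch=32):
            super().__init__()
            # parallel branches with different receptive-field sizes
            self.branches = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7))

        def forward(self, x):
            # maximum competitive strategy: keep the strongest scale per pixel
            responses = torch.stack([b(x) for b in self.branches])
            return responses.max(dim=0).values

    y = MultiScaleCompetitive()(torch.randn(1, 1, 33, 33))
    print(y.shape)   # torch.Size([1, 32, 33, 33])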

  7. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    International Nuclear Information System (INIS)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-01-01

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911], about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have 'on/off' states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This is a similar situation to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.

  8. Superpositions of higher-order bessel beams and nondiffracting speckle fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-08-01

    Full Text Available The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  9. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    A spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking...

  10. Isointense infant brain MRI segmentation with a dilated convolutional neural network

    NARCIS (Netherlands)

    Moeskops, P.; Pluim, J.P.W.

    2017-01-01

    Quantitative analysis of brain MRI at the age of 6 months is difficult because of the limited contrast between white matter and gray matter. In this study, we use a dilated triplanar convolutional neural network in combination with a non-dilated 3D convolutional neural network for the segmentation
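
    For readers unfamiliar with dilation: a dilated kernel samples its input on a spaced grid, enlarging the receptive field without extra weights. A small PyTorch sketch of the two ingredients named above (channel counts and patch sizes are assumptions):

        import torch
        import torch.nn as nn

        # A dilation-2 3x3 convolution covers a 5x5 neighbourhood with only
        # 9 weights, growing the receptive field without pooling.
        planar = nn.Conv2d(in_channels=2, out_channels=32, kernel_size=3,
                           dilation=2, padding=2)

        # A non-dilated 3D convolution, as could complement a triplanar net.
        volumetric = nn.Conv3d(in_channels=2, out_channels=32, kernel_size=3,
                               padding=1)

        x2d = torch.randn(1, 2, 64, 64)      # e.g. two MRI contrasts, one slice
        x3d = torch.randn(1, 2, 16, 64, 64)  # a small 3D patch
        print(planar(x2d).shape, volumetric(x3d).shape)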

  11. Nuclear norm regularized convolutional Max Pos@Top machine

    KAUST Repository

    Li, Qinfeng

    2016-11-18

    In this paper, we propose a novel classification model for multiple-instance data which aims to maximize the number of positive instances ranked before the top-ranked negative instances, a recently emerged performance measure named Pos@Top. Our proposed classification model has a convolutional structure composed of four layers: the convolutional layer, the activation layer, the max-pooling layer and the fully connected layer. We propose an algorithm to learn the convolutional filters and the full-connection weights so as to maximize the Pos@Top measure over the training set. We also minimize the rank of the filter matrix to explore the low-dimensional space of the instances in conjunction with the classification results; the rank minimization is conducted via nuclear norm minimization of the filter matrix. In addition, we develop an iterative algorithm to solve the corresponding optimization problem. We test our method on several benchmark datasets. The experimental results show the superiority of our method compared with other state-of-the-art Pos@Top maximization methods.
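
    The nuclear-norm part has a standard computational core: singular-value soft-thresholding, the proximal operator of τ·||W||*. A hedged sketch of one proximal-gradient step (matrix sizes and step sizes are illustrative; the paper's actual iterative algorithm may differ):

        import numpy as np

        def nuclear_norm_prox(W, tau):
            """Soft-threshold the singular values of W: the proximal operator
            of tau * ||W||_*, the standard device for nuclear-norm penalties."""
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        W = np.random.randn(32, 75)        # filter matrix, e.g. 32 filters of 5x5x3
        grad = np.random.randn(*W.shape)   # gradient of a Pos@Top surrogate loss
        lr, lam = 0.1, 0.05
        W = nuclear_norm_prox(W - lr * grad, lr * lam)  # proximal gradient update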

  12. A digital pixel cell for address event representation image convolution processing

    Science.gov (United States)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows for real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate events according to their information levels: neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, etc. There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform a weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel, which will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
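
    The per-pixel behaviour described above (accumulate signed kernel weights, fire on reaching a threshold) is easy to model in software; a toy model follows (the threshold and weights are made up; a real pixel would also handle signed output events and kernel lookup):

        class DigitalConvolutionPixel:
            """Toy model of a digital AER pixel: accumulate the signed kernel
            weight of each incoming address event; fire when a threshold is hit."""
            def __init__(self, threshold=128):
                self.threshold = threshold
                self.accumulator = 0

            def on_event(self, weight):
                self.accumulator += weight       # weighted addition of one event
                if abs(self.accumulator) >= self.threshold:
                    self.accumulator = 0         # reset and emit an output event
                    return True                  # event routed off-chip
                return False

        pixel = DigitalConvolutionPixel(threshold=4)
        print([pixel.on_event(w) for w in (3, 2, -1, 2, 3)])
        # [False, True, False, False, True]: fires on the 2nd and 5th events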

  13. A convolutional neural network neutrino event classifier

    International Nuclear Information System (INIS)

    Aurisano, A.; Sousa, A.; Radovic, A.; Vahle, P.; Rocco, D.; Pawloski, G.; Himmel, A.; Niner, E.; Messier, M.D.; Psihas, F.

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  14. The convolution transform

    CERN Document Server

    Hirschman, Isidore Isaac

    2005-01-01

    In studies of general operators of the same nature, general convolution transforms are immediately encountered as the objects of inversion. The relation between differential operators and integral transforms is the basic theme of this work, which is geared toward upper-level undergraduates and graduate students. It may be read easily by anyone with a working knowledge of real and complex variable theory. Topics include the finite and non-finite kernels, variation diminishing transforms, asymptotic behavior of kernels, real inversion theory, representation theory, the Weierstrass transform, and

  15. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    Science.gov (United States)

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifact correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. The off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance-correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data when an alternating view-angle ordering scheme is used. An additional advantage of off-resonance artifact correction based on data convolution in k-space is the reusability of convolution kernels for images acquired with the same sequence but different contrasts.
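
    Convolving each k-space segment with the Fourier transform of a phase-modulation term is mathematically equivalent to multiplying that segment's image by the conjugate phase. A compact numpy sketch of this equivalent image-domain form, assuming the field map and segment times are already known (a generic time-segmented reconstruction, not the paper's calibration procedure):

        import numpy as np

        def off_resonance_correct(kspace_segments, seg_times, field_map):
            """kspace_segments: list of 2D arrays, zero outside each segment's samples
            seg_times: representative acquisition time of each segment (s)
            field_map: off-resonance frequency in Hz, one value per pixel"""
            image = np.zeros(field_map.shape, dtype=complex)
            for seg, t in zip(kspace_segments, seg_times):
                # Demodulate the phase accrued by time t (conjugate-phase term).
                image += np.fft.ifft2(seg) * np.exp(-2j * np.pi * field_map * t)
            return image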

  16. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    International Nuclear Information System (INIS)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-01-01

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
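
    The memory bound itself reduces to a longest-path computation on a weighted directed acyclic graph. A sketch with networkx (the gate strings and weights are invented; in the actual algorithm the edges and weights come from the commutativity relations):

        import networkx as nx

        # Nodes are gate strings; a weighted edge u -> v encodes a
        # noncommutative ordering constraint between them.
        G = nx.DiGraph()
        G.add_weighted_edges_from([
            ("g1", "g2", 2), ("g1", "g3", 1), ("g2", "g4", 3), ("g3", "g4", 1),
        ])
        print(nx.dag_longest_path(G, weight="weight"))         # ['g1', 'g2', 'g4']
        print(nx.dag_longest_path_length(G, weight="weight"))  # 5 memory units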

  17. Applying Gradient Descent in Convolutional Neural Networks

    Science.gov (United States)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest within AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical operation of convolution with neural networks. The hierarchical structure of a CNN provides it with reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism with the Gradient Descent (GD) method, CNNs have the ability to self-study and learn in depth. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of CNNs, details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
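
    The GD update at the heart of the training loop is simply w ← w − η ∂L/∂w, with backpropagation supplying the gradient. A self-contained illustration on a least-squares toy problem (standing in for a CNN loss; the data and step size are made up):

        import numpy as np

        def gradient_descent(w, X, y, eta=0.1, steps=500):
            for _ in range(steps):
                grad = 2 * X.T @ (X @ w - y) / len(y)  # dL/dw for mean (Xw - y)^2
                w = w - eta * grad                     # the plain GD update rule
            return w

        X = np.random.randn(50, 3)
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true
        print(gradient_descent(np.zeros(3), X, y))     # converges to ~w_true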

  18. RSR Calculator, a tool for the Calibration / Validation activities

    Directory of Open Access Journals (Sweden)

    C. Durán-Alarcón

    2014-12-01

    Full Text Available The calibration/validation of remote sensing products is a key step that needs to be done before their use in different kinds of environmental applications and to ensure the success of remote sensing missions. In order to compare the measurements from remote sensors on spacecraft and airborne platforms with in-situ data, it is necessary to perform a spectral comparison process that takes into account the relative spectral response (RSR) of the sensors. This technical note presents the RSR Calculator, a new tool to estimate, through numerical convolution, the values corresponding to each spectral band of a given sensor. The RSR Calculator is useful for several applications, ranging from the convolution of spectral signatures from laboratory or field measurements to the estimation of parameters for sensor calibration, such as the extraterrestrial solar irradiance (ESUN) or the atmospheric transmissivity (τ) per spectral band. It allows the processing of spectral data and can be successfully applied in the calibration/validation of remote sensing products in the optical domain.
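
    The numerical convolution in question is an RSR-weighted band average. A sketch of the computation (the wavelength grid, RSR shape and spectrum below are made-up stand-ins, not values from the tool):

        import numpy as np

        def band_value(wavelength, spectrum, rsr):
            """RSR-weighted band average: integrate spectrum x RSR over
            wavelength and normalize by the integrated RSR (trapezoid rule)."""
            return np.trapz(spectrum * rsr, wavelength) / np.trapz(rsr, wavelength)

        wl = np.linspace(0.6, 0.7, 101)                    # micrometres
        rsr = np.maximum(0, 1 - np.abs(wl - 0.65) / 0.03)  # hypothetical band RSR
        spectrum = 0.3 * np.ones_like(wl)                  # flat surface reflectance
        print(band_value(wl, spectrum, rsr))               # 0.3, as expected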

  19. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    Science.gov (United States)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each of the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking tissue density heterogeneities into account. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, applied to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse, since at least 97% of AD values verified the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.
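
    The radiological scaling of the kernels can be illustrated on a single ray: cumulative density converts geometric depth into water-equivalent depth, at which the water kernel is then looked up. A one-dimensional sketch (the EDK values, densities and step size are fabricated; the real algorithms transport energy along 3D cone axes):

        import numpy as np

        def scaled_kernel(edk_water, density, step_cm):
            """Look the water kernel up at the radiologically scaled distance,
            stretching/compressing it through heterogeneities along one ray."""
            r_rad = np.cumsum(density) * step_cm               # water-equivalent depth
            r_water = np.arange(1, len(density) + 1) * step_cm # geometric depth
            return np.interp(r_rad, r_water, edk_water)

        edk_water = np.exp(-0.2 * np.arange(1, 51) * 0.5)  # fictitious water EDK
        density = np.concatenate([np.ones(20), 0.26 * np.ones(15), np.ones(15)])
        print(scaled_kernel(edk_water, density, step_cm=0.5)[:5])  # ray with lung slab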

  1. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    International Nuclear Information System (INIS)

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-01-01

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in lossy configurations over the complete set of polarization states in the Bloch sphere.

  2. Efficient airport detection using region-based fully convolutional neural networks

    Science.gov (United States)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure, and use graphics processing units (GPUs) to speed up training and testing. Due to a lack of labeled data, we transfer the convolutional layers of ZF net, pretrained on ImageNet, to initialize the shared convolutional layers, and then retrain the model using an alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time and with high accuracy, which is much better than traditional methods.

  3. On some properties of the superposition operator on topological manifolds

    Directory of Open Access Journals (Sweden)

    Janusz Dronka

    2010-01-01

    Full Text Available In this paper the superposition operator in the space of vector-valued, bounded and continuous functions on a topological manifold is considered. The acting conditions and criteria of continuity and compactness are established. As an application, an existence result for the nonlinear Hammerstein integral equation is obtained.

  4. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  5. SUPERPOSITION OF STOCHASTIC PROCESSES AND THE RESULTING PARTICLE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.

    2010-01-01

    Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures show power-law tails. The particular type of distribution function, f ∝ v^{-5}, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
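
    The emergence of the power law can be checked numerically: weighting exponential speed distributions f_u(v) = exp(−v/u)/u by u^{-5} gives ∫ u^{-6} exp(−v/u) du = Γ(5) v^{-5} exactly. A quick verification (my construction for illustration, not the paper's calculation):

        import numpy as np

        u = np.logspace(-3, 3, 4000)     # characteristic speeds of the components
        v = np.logspace(0, 2, 20)        # particle speeds at which to evaluate

        # Superposed distribution: integral over u of u^-6 * exp(-v/u).
        f = np.array([np.trapz(u**-6 * np.exp(-vi / u), u) for vi in v])

        slope = np.polyfit(np.log(v), np.log(f), 1)[0]
        print(round(slope, 2))           # ~ -5.0: the power-law tail emerges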

  6. Solutions to Arithmetic Convolution Equations

    Czech Academy of Sciences Publication Activity Database

    Glöckner, H.; Lucht, L.G.; Porubský, Štefan

    2007-01-01

    Roč. 135, č. 6 (2007), s. 1619-1629 ISSN 0002-9939 R&D Projects: GA ČR GA201/04/0381 Institutional research plan: CEZ:AV0Z10300504 Keywords : arithmetic functions * Dirichlet convolution * polynomial equations * analytic equations * topological algebras * holomorphic functional calculus Subject RIV: BA - General Mathematics Impact factor: 0.520, year: 2007

  7. Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings

    Science.gov (United States)

    Krenn, Mario; Gu, Xuemei; Zeilinger, Anton

    2017-12-01

    We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory—such as Hall's marriage problem—are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).
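
    To make the graph correspondence concrete, here is a brute-force toy (my illustration, not the authors' code) that enumerates the perfect matchings of a small undirected graph; each matching printed corresponds to one term of the emerging superposition:

        def perfect_matchings(vertices, edges):
            """Enumerate perfect matchings of an undirected graph recursively."""
            if not vertices:
                yield []
                return
            v = vertices[0]
            for e in (e for e in edges if v in e):
                rest = [u for u in vertices if u not in e]
                remaining = [f for f in edges if not set(f) & set(e)]
                for m in perfect_matchings(rest, remaining):
                    yield [e] + m

        # Hypothetical 4-photon setup: vertices are photon paths, edges are
        # pair-creation possibilities offered by the sources.
        V = ["a", "b", "c", "d"]
        E = [("a", "b"), ("c", "d"), ("a", "c"), ("b", "d")]
        for m in perfect_matchings(V, E):
            print("superposition term:", m)   # two terms for this graph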

  9. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks (each consisting of six layers), a pooling layer, and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.

  11. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Science.gov (United States)

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey. Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
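
    In discrete form, the routed outflow is the convolution of the inflow hydrograph with sampled ordinates of the Hayami kernel h(t) = L exp(−(L − ct)² / (4Dt)) / (2√(πDt³)). A sketch with invented channel parameters (L, c, D and the inflow pulse are assumptions, not values from the paper):

        import numpy as np

        def hayami_kernel(t, L, c, D):
            """Impulse response of the linear diffusion-wave equation at
            distance L, with celerity c and diffusivity D (SI units)."""
            t = np.maximum(t, 1e-9)        # guard against division by zero
            return (L / (2.0 * np.sqrt(np.pi * D * t**3))
                    * np.exp(-(L - c * t)**2 / (4.0 * D * t)))

        dt = 600.0                                        # 10-minute time step
        t = np.arange(1, 289) * dt                        # 48 h of ordinates
        h = hayami_kernel(t, L=10_000.0, c=1.0, D=500.0)  # illustrative channel

        inflow = np.zeros(288)
        inflow[:36] = 50.0                                # 6 h inflow pulse, m^3/s
        outflow = np.convolve(inflow, h)[:288] * dt       # discrete convolution
        print(outflow.max())                              # attenuated, delayed peak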

  12. Convolution equations on lattices: periodic solutions with values in a prime characteristic field

    OpenAIRE

    Zaidenberg, Mikhail

    2006-01-01

    These notes are inspired by the theory of cellular automata. A linear cellular automaton on a lattice of finite rank or on a toric grid is a discrete dynamical system generated by a convolution operator with kernel concentrated in the nearest neighborhood of the origin. In the present paper we deal with general convolution operators. We propose an approach via harmonic analysis which works over a field of positive characteristic. It turns out that a standard spectral problem for a convolution op...

  13. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    Science.gov (United States)

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.

  14. Neural Network Molecule: a Solution of the Inverse Biometry Problem through Software Support of Quantum Superposition on Outputs of the Network of Artificial Neurons

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-12-01

    Full Text Available Introduction: The aim of the study is to accelerate the solution of the inverse neural-network biometry problem on an ordinary desktop computer. Materials and Methods: To speed up the calculations, the artificial neural network is put into a dynamic mode in which the states of all 256 output bits "jitter". The excessively many output states of the neural network are folded logarithmically by transitioning to the space of Hamming distances between the code of the "Own" image and the codes of the "Alien" images. From the database of "Alien" images, the 2.5% most similar images are selected. In the next generation, the 97.5% of discarded images are restored with GOST R 52633.2-2010 procedures by crossing parent images and obtaining descendant images from them. Results: Over a period of about 10 minutes, 60 generations of directed search for the solution of the inverse problem can be realized, which allows inverting matrices of neural-network functionals with 416 inputs and 256 outputs while recovering up to 97% of the information on the unknown biometric parameters of the "Own" image. Discussion and Conclusions: Supporting a 256-qubit quantum superposition for 10 minutes of computer time allows an ordinary computer to bypass the practically infinite number of analyzed states, 50^50 (50 to the 50th power) times more than the same computer could process by ordinary calculations. Increasing the length of the supported quantum superposition by 40 qubits is equivalent to increasing the processor clock speed by about a billion times. It is for this reason that it is more profitable to increase the number of quantum superpositions supported by the software emulator than to create a more powerful processor.

  15. AFM tip-sample convolution effects for cylinder protrusions

    Science.gov (United States)

    Shen, Jian; Zhang, Dan; Zhang, Fei-Hu; Gan, Yang

    2017-11-01

    A thorough understanding of AFM tip-geometry-dependent artifacts and the tip-sample convolution effect is essential for reliable AFM topographic characterization and dimensional metrology. Using rigid sapphire cylinder protrusions (diameter: 2.25 μm, height: 575 nm) as the model system, a systematic and quantitative study of the imaging artifacts of four types of tips (two different pyramidal tips, one tetrahedral tip and one super-sharp whisker tip) is carried out by comparing tip-geometry-dependent variations in the AFM topography of the cylinders and constructing rigid tip-cylinder convolution models. We found that the imaging artifacts and the tip-sample convolution effect are critically related to the actual inclination of the working cantilever, the tip geometry, and the obstructive contacts between the working tip's planes/edges and the cylinder. Artifact-free images can only be obtained provided that all planes and edges of the working tip are steeper than the cylinder sidewalls. The findings reported here will contribute to reliable AFM characterization of surface features of microns or hundreds of nanometers in height that are frequently met in the semiconductor, biology and materials fields.

  17. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
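
    Zero fill itself is band-limited interpolation: padding the spectrum with zeros and inverse-transforming resamples the signal on a finer grid. A small demonstration (assumes an even-length input with no energy at the Nyquist bin):

        import numpy as np

        def zero_fill_interpolate(x, factor):
            """Interpolate a uniformly sampled real signal by zero-padding
            its spectrum (band-limited interpolation via the FFT)."""
            n = len(x)
            X = np.fft.rfft(x)
            X = np.append(X, np.zeros(n * (factor - 1) // 2))
            return np.fft.irfft(X, n=n * factor) * factor   # rescale amplitude

        x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)  # 3 cycles over 16 samples
        y = zero_fill_interpolate(x, 4)                 # 64 samples, same waveform
        print(np.allclose(y[::4], x))                   # original samples preserved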

  18. Image quality assessment using deep convolutional networks

    Science.gov (United States)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to the features used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced, connecting the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.

  19. Entanglement of arbitrary superpositions of modes within two-dimensional orbital angular momentum state spaces

    International Nuclear Information System (INIS)

    Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.

    2010-01-01

    We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.

  20. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition

    OpenAIRE

    Zhang, Zewang; Sun, Zheng; Liu, Jiaqi; Chen, Jingwen; Huo, Zhao; Zhang, Xiao

    2016-01-01

    A deep learning approach has been widely applied in sequence modeling problems. In terms of automatic speech recognition (ASR), its performance has been significantly improved by larger speech corpora and deeper neural networks. In particular, recurrent neural networks and deep convolutional neural networks have been applied in ASR successfully. Given the rising problem of training speed, we build a novel deep recurrent convolutional network for acoustic modeling and then apply deep resid...

  1. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks

    DEFF Research Database (Denmark)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl

    2018-01-01

    conditions with regard to soil types, resolution and light settings. Then, 9649 of these images were used for training the computer, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 … in estimating the number of leaves and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.

  2. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    Directory of Open Access Journals (Sweden)

    M. R. Gomez

    2017-01-01

    Full Text Available The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H_{2}O, H_{2}, and hydrocarbons. Plasma densities increase from 1×10^{16} cm^{−3} (level of detectability) just before peak current to over 1×10^{17} cm^{−3} at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35–50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  3. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on the GPU, such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, achieving an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for the CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
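
    The three-step partitioning can be emulated on the host side with plain numpy; this sketch only mirrors the data flow (the actual implementation writes the buffer from GPU threads and overlaps the steps with CUDA streams):

        import numpy as np

        rng = np.random.default_rng(1)
        n_voxels, n_deposits = 1_000_000, 5_000_000

        # Step 1 (on the GPU in the paper): threads append (voxel, dose)
        # records to a buffer -- coalesced, atomic-free writes.
        buffer_idx = rng.integers(0, n_voxels, size=n_deposits)
        buffer_dose = rng.exponential(0.01, size=n_deposits).astype(np.float32)

        # Step 2: the buffer is transferred to CPU memory (a no-op here).

        # Step 3 (on the CPU): accumulate the buffered records into the volume.
        dose = np.zeros(n_voxels, dtype=np.float32)
        np.add.at(dose, buffer_idx, buffer_dose)   # scatter-add resolves collisions
        print(dose.sum(), buffer_dose.sum())       # equal up to float rounding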

  4. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a greatly challenging task. The main challenge for target tracking is to account for the appearance change of an object which is submerged in a cluttered background. An efficient appearance model that exploits both a global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse-representation-based algorithm is adopted to calculate the confidence value, assigning more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps then form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model; the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. At last, the collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  5. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  6. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    Science.gov (United States)

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected with the two sides of the network respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the ultimate detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.

  7. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, that parallels column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  8. On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels

    KAUST Repository

    Zafar, Ammar

    2013-02-20

    In this letter, numerical results are provided to analyze the gains of multiple-user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single-user scheduling in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding-order allocation for the superposition coding scheme are considered, and the corresponding histogram of the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple-user scheduling in terms of the long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple-user scheduling over single-user scheduling increases as the total number of users in the network increases, and it can exceed 10% for a high number of users.

  9. SU-D-204-07: Retrospective Correlation of Dose Accuracy with Regions of Local Failure for Early Stage Lung Cancer Patients Treated with Stereotactic Body Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Devpura, S; Li, H; Liu, C; Fraser, C; Ajlouni, M; Movsas, B; Chetty, I [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To correlate dose distributions computed using six algorithms for recurrent early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT) with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy x 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with a 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failure were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of prescribed dose, 95-80%, 80-20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB were 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failure were similar for 1-D and 3-D PB plans, which predicted that the majority of failures occur in central-distal regions, with only ∼15% occurring distantly. However, with convolution/superposition and MC type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ∼15% lower than the planned 1-D PB dose calculation. Patterns of failure results suggest that MC and convolution/superposition

  10. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    Science.gov (United States)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 and 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley-cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low-potential-vorticity (PV), high-θe tropical boundary-layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward toward the equatorward side of the jet in upper-tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold-air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific

  11. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities.
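
    In the frequency domain, deconvolution amounts to division by the transfer function (with regularization against near-zeros). A generic round-trip sketch, with a toy low-pass transfer function standing in for a soil column (not DIGES's actual closed-form solutions):

        import numpy as np

        def deconvolve(recorded, H, eps=1e-6):
            """Recover the input motion from a recorded motion given the
            transfer function H sampled at the rfft frequencies."""
            U = np.fft.rfft(recorded)
            return np.fft.irfft(U * np.conj(H) / (np.abs(H)**2 + eps),
                                n=len(recorded))

        n, dt = 1024, 0.01
        t = np.arange(n) * dt
        pulse = np.exp(-((t - 2.0) / 0.1)**2)                 # synthetic input motion
        H = 1.0 / (1.0 + 1j * np.fft.rfftfreq(n, dt) / 5.0)   # toy low-pass "soil"
        recorded = np.fft.irfft(np.fft.rfft(pulse) * H, n=n)  # forward convolution
        print(np.abs(deconvolve(recorded, H) - pulse).max())  # small residual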

  12. A convolutional neural network to filter artifacts in spectroscopic MRI.

    Science.gov (United States)

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning.

  13. Quantum-mechanical Green's functions and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P. de T.S.

    1986-01-01

    The quantum-mechanical Green's function is derived for the problem of a time-dependent variable-mass particle subject to a time-dependent forced harmonic-oscillator potential by direct recourse to the corresponding Schroedinger equation. Through the use of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject to either a forced harmonic potential with constant frequency or only to a time-dependent linear field. (Author) [pt]

  14. Quantum-mechanical Green's function and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P.T.S.

    1986-01-01

    The quantum-mechanical Green's function is derived for the problem of a time-dependent variable-mass particle subject to a time-dependent forced harmonic-oscillator potential by direct recourse to the corresponding Schroedinger equation. Through the usage of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject to either a forced harmonic potential with constant frequency or only to a time-dependent linear field

  15. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact on the received signal of different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.

  16. Push-pull optical pumping of pure superposition states

    International Nuclear Information System (INIS)

    Jau, Y.-Y.; Miron, E.; Post, A.B.; Kuzma, N.N.; Happer, W.

    2004-01-01

    A new optical pumping method, 'push-pull pumping', can produce very nearly pure, coherent superposition states between the initial and the final sublevels of the important field-independent 0-0 clock resonance of alkali-metal atoms. The key requirement for push-pull pumping is the use of D1 resonant light which alternates between left and right circular polarization at the Bohr frequency of the state. The new pumping method works for a wide range of conditions, including atomic beams with almost no collisions, and atoms in buffer gases with pressures of many atmospheres

  17. Unified calculations of the optical band positions and EPR g factors for NaCrS2 crystal

    International Nuclear Information System (INIS)

    Mei, Yang; Zheng, Wen-Chen; Zhang, Lin

    2014-01-01

    Six optical band positions and the EPR g factors g∥, g⊥ for the trigonal Cr³⁺ octahedral clusters in NaCrS₂ crystal are calculated together through the complete diagonalization (of energy matrix) method based on the two-spin–orbit-parameter model, where besides the contribution due to the spin–orbit parameter of the central dⁿ ion in the conventional crystal-field theory, the contribution due to the spin–orbit parameter of the ligand ion via the covalence effect is also considered. In the calculations, the crystal-field parameters B_kl are obtained from the superposition model with the structural data of the Cr³⁺ octahedral clusters in NaCrS₂ crystal measured exactly by the X-ray diffraction method. The calculated optical and EPR spectral data are in reasonable agreement with the observed values. So, the reliability of the superposition model in the studies of crystal-field parameters for dⁿ ions in crystals is confirmed, and the complete diagonalization (of energy matrix) method based on the two-spin–orbit-parameter model is effective in the unified calculations of optical and EPR spectral data for dⁿ ions in crystals. - Highlights: • Six optical band positions and g factors g∥, g⊥ of NaCrS₂ are calculated together. • Calculation uses the complete diagonalization (of energy matrix) method. • The diagonalization method is based on the two-spin–orbit-parameter model. • Reliability of the superposition model in the studies of CF parameters is confirmed
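
    For readers unfamiliar with the superposition model mentioned above, a crystal-field parameter is built as a sum of single-ligand contributions, each scaled by a power law in the ligand distance. The sketch below shows only the axial rank-2 term with the coordination factor K_20 = (3 cos²θ − 1)/2; the intrinsic parameter, power-law exponent, and geometry are placeholder values, not those of NaCrS₂.

```python
# Hedged sketch of a superposition-model crystal-field parameter B_20.
import numpy as np

def b20(ligands, A2_R0, R0, t2):
    """ligands: list of (R_j, theta_j), distance (Angstrom) and polar angle (rad)."""
    total = 0.0
    for R, theta in ligands:
        K20 = 0.5 * (3.0 * np.cos(theta)**2 - 1.0)   # rank-2 coordination factor
        total += A2_R0 * (R0 / R)**t2 * K20          # intrinsic parameter x power law
    return total

# Six ligands of a trigonally distorted octahedron (illustrative geometry).
octahedron = [(2.4, np.radians(54.0))] * 3 + [(2.4, np.radians(126.0))] * 3
print(b20(octahedron, A2_R0=10000.0, R0=2.4, t2=3.0))  # toy numbers, cm^-1
```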

  18. Effective calculation algorithm for nuclear chains of arbitrary length and branching

    International Nuclear Information System (INIS)

    Chirkov, V.A.; Mishanin, B.V.

    1994-01-01

    An effective algorithm for calculating the isotope concentrations in spent nuclear fuel during storage is presented. Using the superposition principle and representing the transfer function in a rather compact form, it becomes possible to achieve high calculation speed and a moderate computer code size. The algorithm is applied to the calculation of the activity, energy release and toxicity of heavy nuclides and the products of their decay while the fuel is kept in storage. (authors). 1 ref., 4 tabs
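
    The record does not spell out the authors' algorithm, but the underlying problem is a linear decay chain, so a minimal reference solution can be written with a matrix exponential: the concentration at any storage time is a superposition of decay modes. Decay constants and initial inventories below are toy values.

```python
# Reference sketch of a decay-chain calculation, dN/dt = A N.
import numpy as np
from scipy.linalg import expm

lam = np.array([1e-2, 5e-3, 1e-4])       # decay constants (1/day), toy values
A = np.diag(-lam)
A[1, 0] = lam[0]                          # parent 0 feeds daughter 1
A[2, 1] = lam[1]                          # daughter 1 feeds daughter 2
N0 = np.array([1e20, 0.0, 0.0])           # initial atom numbers

for t in (0.0, 100.0, 1000.0):            # storage times in days
    N = expm(A * t) @ N0
    print(t, N, lam * N)                   # concentrations and activities
```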

  19. Evaluation of the treatment planning system of three-dimensional conformal external radiotherapy in Hospital Mexico of San Jose, Costa Rica

    International Nuclear Information System (INIS)

    Venegas Rojas, Deybith

    2014-01-01

    A dosimetric evaluation and analysis of the treatment planning system (TPS) for three-dimensional conformal external radiotherapy was carried out in the Servicio de Radioterapia of the Hospital Mexico of Costa Rica. An evaluation procedure is proposed based on the IAEA-TECDOC-1540 document, which may continue to be applied periodically in this or other radiotherapy services. The tests verified that the distances and electron densities transferred to the TPS match those of real objects. The 16 tests applied represented real treatment situations with different configurations and beam modifiers on the equipment used daily. The tests measured the absorbed dose to water at different significant points and depths, using 6 MV and 18 MV photon beams. The physical parameters of the tests were simulated, and the absorbed dose was calculated at specified points. The XiO and Eclipse TPSs were used with the Superposition, Convolution and AAA calculation algorithms. The results of the calculations are evaluated with statistical methods and compared with the absorbed dose measurements. A generalized tendency toward negative relative errors was detected, implying an underestimation of the dose by the TPS, due to a difference found in the accelerator output factor with respect to its commissioning. The AAA algorithm showed the best performance, although with greater calculation difficulties in the build-up region. The Convolution and Superposition algorithms performed similarly, and both presented problems at large depths and outside the field edges. The result of the dosimetric evaluation was satisfactory under real equipment conditions, but several particularities were found that should be reviewed and adjusted. The precision of the TPS was adequate in the majority of situations important for treatment planning. [author] [es

  20. Unveiling the curtain of superposition: Recent gedanken and laboratory experiments

    Science.gov (United States)

    Cohen, E.; Elitzur, A. C.

    2017-08-01

    What is the true meaning of quantum superposition? Can a particle genuinely reside in several places simultaneously? These questions lie at the heart of this paper which presents an updated survey of some important stages in the evolution of the three-boxes paradox, as well as novel conclusions drawn from it. We begin with the original thought experiment of Aharonov and Vaidman, and proceed to its non-counterfactual version. The latter was recently realized by Okamoto and Takeuchi using a quantum router. We then outline a dynamic version of this experiment, where a particle is shown to “disappear” and “re-appear” during the time evolution of the system. This surprising prediction based on self-cancellation of weak values is directly related to our notion of Quantum Oblivion. Finally, we present the non-counterfactual version of this disappearing-reappearing experiment. Within the near future, this last version of the experiment is likely to be realized in the lab, proving the existence of exotic hitherto unknown forms of superposition. With the aid of Bell’s theorem, we prove the inherent nonlocality and nontemporality underlying such pre- and post-selected systems, rendering anomalous weak values ontologically real.
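
    For reference, the weak values invoked above follow the standard Aharonov–Albert–Vaidman definition for a pre- and post-selected system; this is background to the record, not the paper's own derivation.

```latex
% Weak value of an observable A for a system pre-selected in |\psi\rangle
% and post-selected in |\phi\rangle; anomalous values arise when the
% overlap \langle\phi|\psi\rangle is small.
A_w = \frac{\langle \phi \,|\, A \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```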

  1. Deformable image registration using convolutional neural networks

    NARCIS (Netherlands)

    Eppenhof, Koen A.J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P.W.

    2018-01-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between

  2. Convolutional over Recurrent Encoder for Neural Machine Translation

    Directory of Open Access Journals (Sweden)

    Dakwale Praveen

    2017-06-01

    Neural machine translation is a recently proposed approach which has shown competitive results to traditional MT approaches. Standard neural MT is an end-to-end neural network where the source sentence is encoded by a recurrent neural network (RNN) called encoder and the target words are predicted using another RNN known as decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English to German translation demonstrate that our approach can achieve significant improvements over a standard RNN-based baseline.

  3. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    Science.gov (United States)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  4. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: Elliptical, Spiral, and Irregular. The proposed deep galaxies architecture consists of 8 layers, one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  5. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  6. Approaches to reducing photon dose calculation errors near metal implants

    International Nuclear Information System (INIS)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F.; Liu, Xinming; Stingo, Francesco C.

    2016-01-01

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  7. HETERO code, heterogeneous procedure for reactor calculation

    International Nuclear Information System (INIS)

    Jovanovic, S.M.; Raisic, N.M.

    1966-11-01

    This report describes a procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements in the established geometry. The first part contains the analysis of a single fuel element in a diffusion medium, and the criticality condition of the reactor system described by superposition of element interactions. The possibility of performing such analysis by determination of the heterogeneous system lattice is described in the second part. The computer code HETERO, with the code KETAP (calculation of the criticality factor η_n and flux distribution), is part of this report, together with an example of the RB reactor square lattice

  8. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connection is designed. Our network has 10 layers with parameters (block-layer seen as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network adopts scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.
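
    The exact block-layer of the paper is not specified in the record; the sketch below shows one plausible reading of the idea, combining a 1x1 network-in-network convolution, a residual connection, and SELU activations. Channel counts and kernel sizes are illustrative assumptions.

```python
# Hedged sketch of a "block-layer": NiN 1x1 convolution + residual + SELU.
import torch
import torch.nn as nn

class BlockLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.SELU(),
            nn.Conv2d(channels, channels, kernel_size=1), nn.SELU(),  # NiN part
        )

    def forward(self, x):
        return nn.functional.selu(x + self.body(x))  # residual connection

x = torch.randn(4, 32, 48, 48)            # a batch of feature maps
print(BlockLayer(32)(x).shape)             # torch.Size([4, 32, 48, 48])
```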

  9. Optical information encryption based on incoherent superposition with the help of the QR code

    Science.gov (United States)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. This method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is further encrypted analytically into two phase-only masks, by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, more relaxed working conditions, and so on. Numerical simulation results and actual smartphone-collected results are shown to validate our proposal.

  10. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    Science.gov (United States)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques which focus narrowly on information encryption, the proposed method realizes information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  11. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  12. Very deep recurrent convolutional neural network for object recognition

    Science.gov (United States)

    Brahimi, Sourour; Ben Aoun, Najib; Ben Amar, Chokri

    2017-03-01

    In recent years, computer vision has become a very active field. This field includes methods for processing, analyzing, and understanding images. The most challenging problems in computer vision are image classification and object recognition. This paper presents a new approach for the object recognition task. This approach exploits the success of the Very Deep Convolutional Neural Network for object recognition; in fact, it improves the convolutional layers by adding recurrent connections. The proposed approach was evaluated on two object recognition benchmarks: Pascal VOC 2007 and CIFAR-10. The experimental results prove the efficiency of our method in comparison with state-of-the-art methods.

  13. Deep Convolutional Neural Networks: Structure, Feature Extraction and Training

    Directory of Open Access Journals (Sweden)

    Namatēvs Ivars

    2017-12-01

    Deep convolutional neural networks (CNNs) are aimed at processing data that have a known, network-like topology. They are widely used to recognise objects in images and diagnose patterns in time series data as well as in sensor data classification. The aim of the paper is to present theoretical and practical aspects of deep CNNs in terms of the convolution operation, typical layers and basic methods to be used for training and learning. Some practical applications are included for signal and image classification. Finally, the present paper describes the proposed block structure of a CNN for classifying crucial features from 3D sensor data.
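
    Since this record is a tutorial on the convolution operation itself, an explicit reference implementation may help. The sketch below is a "valid" (no padding, stride 1) 2D pass; note that, as in CNN practice, the kernel is not flipped, so strictly this is cross-correlation. The edge kernel and input are illustrative.

```python
# Minimal explicit 2D convolution as used in CNNs (no kernel flip).
import numpy as np

def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge = np.array([[1, 0, -1]] * 3, dtype=float)   # simple vertical-edge kernel
print(conv2d_valid(np.arange(36.).reshape(6, 6), edge))
```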

  14. A frequency bin-wise nonlinear masking algorithm in convolutive mixtures for speech segregation.

    Science.gov (United States)

    Chi, Tai-Shih; Huang, Ching-Wen; Chou, Wen-Sheng

    2012-05-01

    A frequency bin-wise nonlinear masking algorithm is proposed in the spectrogram domain for speech segregation in convolutive mixtures. The contributive weight from each speech source to a time-frequency unit of the mixture spectrogram is estimated by a nonlinear function based on location cues. For each sound source, a non-binary mask is formed from the estimated weights and applied to the mixture spectrogram to extract the sound. Head-related transfer functions (HRTFs) are used to simulate convolutive sound mixtures perceived by listeners. Simulation results show our proposed method outperforms convolutive independent component analysis and degenerate unmixing and estimation technique methods in almost all test conditions.
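
    The mechanics of soft (non-binary) masking can be sketched in a few lines. The location-cue weight estimation of the paper is not reproduced; here an oracle weight computed from a known source stands in for it, purely to show how the mask multiplies the mixture STFT before resynthesis.

```python
# Hedged sketch of non-binary spectrogram masking with STFT/ISTFT.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)           # stand-ins for two sources
s2 = np.sin(2 * np.pi * 1320 * t)
mix = s1 + s2

f, tt, S1 = stft(s1, fs, nperseg=512)
f, tt, M = stft(mix, fs, nperseg=512)
weight = np.abs(S1) / (np.abs(M) + 1e-12)   # oracle weights, illustration only
mask = np.clip(weight, 0.0, 1.0)            # non-binary mask in [0, 1]
_, s1_hat = istft(mask * M, fs, nperseg=512)
```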

  15. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    OpenAIRE

    Zaima, Kazunori; Sasaki, Koichi

    2016-01-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of the DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed the modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experim...

  16. Automatic superposition of drug molecules based on their common receptor site

    Science.gov (United States)

    Kato, Yuichi; Inoue, Atsushi; Yamada, Miho; Tomioka, Nobuo; Itai, Akiko

    1992-10-01

    We have previously developed a new rational method for superposing molecules in terms of submolecular physical and chemical properties, rather than in terms of atom positions or chemical structures as has been done in conventional methods. The program was originally developed for interactive use on a three-dimensional graphic display, providing goodness-of-fit indices on molecular shape, hydrogen bonds, electrostatic interactions and others. Here, we report a new unbiased searching method for the best superposition of molecules, covering all superposing modes and conformational freedom, as an additional function of the program. The function is based on a novel least-squares method which superposes the expected positions and orientations of hydrogen-bonding partners in the receptor, as deduced from both molecules. The method not only gives reliability and reproducibility to the result of the superposition, but also allows us to save labor and time. It is demonstrated that this method is very efficient for finding the correct superposing mode in systems where hydrogen bonds play important roles.
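
    The generic building block behind any least-squares superposition is the rigid-body fit of two point sets; the program above applies the same idea to deduced hydrogen-bond partner positions rather than to atoms. Below is a standard Kabsch-algorithm sketch, not the authors' code.

```python
# Least-squares rigid superposition of two 3D point sets (Kabsch algorithm).
import numpy as np

def kabsch(P, Q):
    """Return rotation R and translation t so that Q ~ P @ R.T + t."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])             # guard against improper rotations
    R = (U @ D @ Vt).T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

P = np.random.rand(5, 3)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])   # rotated and shifted copy
R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q))          # True
```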

  17. Face recognition via Gabor and convolutional neural network

    Science.gov (United States)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms offer an interpretability that deep learning lacks. Thus, in this paper, we propose a method that uses features extracted by a traditional algorithm as the input to a convolutional neural network. In order to reduce the complexity of the network, the kernel function of the Gabor wavelet is used to extract features at different positions, frequencies and orientations of the target image; it is sensitive to image edges and provides good orientation and scale selectivity. Features extracted from eight orientations at a single scale form the input of the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input image reduce the influence of facial expression, gesture and illumination. At the same time, we introduce a layer which combines the results of pooling and convolution to extract deeper features. The network was trained with the open-source Caffe framework, which is convenient for feature extraction. The experimental results show that the proposed network effectively overcomes the barrier of illumination and is more robust, accurate and fast than the traditional algorithm.

  18. Tensor decomposition in electronic structure calculations on 3D Cartesian grids

    International Nuclear Information System (INIS)

    Khoromskij, B.N.; Khoromskaia, V.; Chinnamsetty, S.R.; Flad, H.-J.

    2009-01-01

    In this paper, we investigate a novel approach based on the combination of Tucker-type and canonical tensor decomposition techniques for the efficient numerical approximation of functions and operators in electronic structure calculations. In particular, we study applicability of tensor approximations for the numerical solution of Hartree-Fock and Kohn-Sham equations on 3D Cartesian grids. We show that the orthogonal Tucker-type tensor approximation of electron density and Hartree potential of simple molecules leads to low tensor rank representations. This enables an efficient tensor-product convolution scheme for the computation of the Hartree potential using a collocation-type approximation via piecewise constant basis functions on a uniform n×n×n grid. Combined with the Richardson extrapolation, our approach exhibits O(h³) convergence in the grid-size h = O(n⁻¹). Moreover, this requires O(3rn + r³) storage, where r denotes the Tucker rank of the electron density with r = O(log n), almost uniformly in n. For example, calculations of the Coulomb matrix and the Hartree-Fock energy for the CH₄ molecule, with a pseudopotential on the C atom, achieved accuracies of the order of 10⁻⁶ hartree with a grid-size n of several hundreds. Since the tensor-product convolution in 3D is performed via 1D convolution transforms, our scheme markedly outperforms the 3D-FFT in both the computing time and storage requirements.
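
    The key computational point, that a tensor-product (separable) kernel turns a 3D convolution into three 1D convolutions, is easy to demonstrate. The sketch below uses a separable Gaussian and a random stand-in for the electron density; it illustrates the principle only, not the paper's Tucker machinery.

```python
# Separable 3D convolution evaluated as three 1D convolutions along the axes.
import numpy as np
from scipy.ndimage import convolve1d

n = 64
rho = np.random.rand(n, n, n)              # stand-in for an electron density
x = np.linspace(-3, 3, 21)
g = np.exp(-x**2)                          # 1D factor of the kernel g(x)g(y)g(z)

out = convolve1d(rho, g, axis=0)           # one 1D pass per tensor direction
out = convolve1d(out, g, axis=1)
out = convolve1d(out, g, axis=2)
```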

  19. Isointense infant brain MRI segmentation with a dilated convolutional neural network

    OpenAIRE

    Moeskops, Pim; Pluim, Josien P. W.

    2017-01-01

    Quantitative analysis of brain MRI at the age of 6 months is difficult because of the limited contrast between white matter and gray matter. In this study, we use a dilated triplanar convolutional neural network in combination with a non-dilated 3D convolutional neural network for the segmentation of white matter, gray matter and cerebrospinal fluid in infant brain MR images, as provided by the MICCAI grand challenge on 6-month infant brain MRI segmentation.

  20. Learning text representation using recurrent convolutional neural network with highway layers

    OpenAIRE

    Wen, Ying; Zhang, Weinan; Luo, Rui; Wang, Jun

    2016-01-01

    Recently, the rapid development of word embedding and neural networks has brought new inspiration to various NLP and IR tasks. In this paper, we describe a staged hybrid model combining Recurrent Convolutional Neural Networks (RCNN) with highway layers. The highway network module, incorporated in the middle, takes the output of the bi-directional Recurrent Neural Network (Bi-RNN) module in the first stage and provides the Convolutional Neural Network (CNN) module in the last stage with the i...

  1. Testing of the analytical anisotropic algorithm for photon dose calculation

    International Nuclear Information System (INIS)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka; Tenhunen, Mikko; Helminen, Hannu; Siljamaeki, Sami; Alakuijala, Jyrki; Paiusco, Marta; Iori, Mauro; Huyskens, Dominique P.

    2006-01-01

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. The electron contamination model was found to be suboptimal to model the dose around d_max, especially for physical
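
    The convolution step described above can be caricatured in a few lines: energy released per unit mass (TERMA) is convolved with a precalculated scatter kernel. Real engines such as AAA additionally scale the kernels with the local electron density and split the dose into several source components, all of which is omitted here; geometry and kernel are toy stand-ins.

```python
# Highly simplified sketch of kernel-based photon dose calculation.
import numpy as np
from scipy.signal import fftconvolve

terma = np.zeros((64, 64, 64))
terma[20:44, 20:44, 8:40] = 1.0            # crude beam aperture, toy numbers

x = np.linspace(-2, 2, 9)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
kernel = np.exp(-(X**2 + Y**2 + Z**2))     # stand-in for a Monte Carlo kernel
kernel /= kernel.sum()

dose = fftconvolve(terma, kernel, mode="same")
```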

  2. A locality aware convolutional neural networks accelerator

    NARCIS (Netherlands)

    Shi, R.; Xu, Z.; Sun, Z.; Peemen, M.C.J.; Li, A.; Corporaal, H.; Wu, D.

    2015-01-01

    The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time

  3. Quantum superposition of massive objects and collapse models

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol

    2011-01-01

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  4. Quantum superposition of massive objects and collapse models

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Str. 1, D-85748 Garching (Germany)

    2011-11-15

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  5. The convolution integral for the forward-backward asymmetry in e+e- annihilation

    International Nuclear Information System (INIS)

    Bardin, D.; Bilenky, M.; Chizhov, A.; Sazonov, A.; Sedykh, Yu.; Riemann, T.; Sachwitz, M.

    1989-01-01

    The complete convolution integral for the forward-backward asymmetry A_FB in e⁺e⁻ annihilation is obtained to order O(α) with soft-photon exponentiation. The influence of these QED corrections on A_FB in the vicinity of the Z peak is discussed. The results are used to comment on a recent ad hoc ansatz using convolution weights derived for the total cross section. (orig.)

  6. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot

  7. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high-dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot.
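
    The core of the fluence-convolution idea is simple enough to sketch: a random set-up error with a 2 mm standard deviation is modeled by blurring the incident fluence map with a Gaussian before the dose calculation. Field size and grid spacing below are illustrative assumptions.

```python
# Sketch of fluence convolution for random set-up error (sigma = 2 mm).
import numpy as np
from scipy.ndimage import gaussian_filter

fluence = np.zeros((100, 100))
fluence[30:70, 30:70] = 1.0                # idealized 40 mm x 40 mm field
sigma_mm, grid_mm = 2.0, 1.0               # assumed 1 mm fluence grid
blurred = gaussian_filter(fluence, sigma=sigma_mm / grid_mm)
```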

  8. Superpositions of higher-order bessel beams and nondiffracting speckle fields - (SAIP 2009)

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  9. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    Science.gov (United States)

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
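
    For concreteness, here is the direct O(k²) max-convolution that the paper's O(k log k) numerical method approximates; the example vectors are arbitrary probability mass functions.

```python
# Direct (quadratic-time) max-convolution for reference.
import numpy as np

def max_convolve(u, v):
    k = len(u) + len(v) - 1
    out = np.zeros(k)
    for m in range(k):
        lo = max(0, m - len(v) + 1)
        hi = min(m, len(u) - 1)
        out[m] = max(u[i] * v[m - i] for i in range(lo, hi + 1))
    return out

u = np.array([0.1, 0.6, 0.3])              # two probability mass functions
v = np.array([0.5, 0.5])
print(max_convolve(u, v))                   # max-product over all index splits
```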

  10. Design and Implementation of Behavior Recognition System Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Yu Bo

    2017-01-01

    We build a human behavior recognition system based on a convolutional neural network, constructed for specific human behaviors in public places. Firstly, videos from the human behavior dataset are segmented into images, which are processed by background subtraction to extract the moving foreground of the body. Secondly, the training data sets are fed into the designed convolutional neural network, and the deep learning network is trained by stochastic gradient descent. Finally, the various behaviors of samples are classified and identified with the obtained network model, and the recognition results are compared with current mainstream methods. The results show that the convolutional neural network can learn human behavior models automatically and identify human behaviors without any manually annotated training data.

  11. A cute and highly contrast-sensitive superposition eye : The diurnal owlfly Libelloides macaronius

    NARCIS (Netherlands)

    Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G.

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination

  12. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    International Nuclear Information System (INIS)

    Daoud, M.; Ahl Laamara, R.

    2012-01-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states

  13. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)

    2012-07-16

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.
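
    As background for the two records above: the geometric quantum discord is commonly defined through a Hilbert–Schmidt distance to the set of zero-discord states. A standard form is quoted below as an assumption; the papers' exact expression is not reproduced here.

```latex
% Hilbert-Schmidt geometric quantum discord, with \Omega_0 the set of
% zero-discord (classical-quantum) states:
D_G(\rho) \;=\; \min_{\chi \in \Omega_0} \, \lVert \rho - \chi \rVert_2^{\,2}
```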

  14. Rock images classification by using deep convolution neural network

    Science.gov (United States)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in rock authentication under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network-based method is proposed for granularity analysis of thin section images, which extracts features from image samples and builds a classifier to recognize the granularity of input images. 4800 samples from the Ordos basin are used for experiments in the HSV, YCbCr and RGB colour spaces, respectively. On the test dataset, the correct rate in the RGB colour space is 98.5%, and the results in the HSV and YCbCr colour spaces are similarly credible. The results show that the convolutional neural network can classify rock images with high reliability.

  15. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct singlefeature 2D images

  16. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    Science.gov (United States)

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome-based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  17. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  18. On the superposition principle in interference experiments.

    Science.gov (United States)

    Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi

    2015-05-14

    The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
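
    For reference, the Sorkin parameter mentioned above is the standard triple-slit combination whose deviation from zero quantifies departures from the naive application of the superposition principle; P_X denotes the detection probability with slit combination X open.

```latex
% Sorkin parameter for a three-slit (A, B, C) interference experiment:
\epsilon \;=\; P_{ABC} - P_{AB} - P_{BC} - P_{AC} + P_{A} + P_{B} + P_{C}
```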

  19. On a Generalized Hankel Type Convolution of Generalized Functions

    Indian Academy of Sciences (India)

    Generalized Hankel type transformation; Parseval relation; generalized functions. The classical generalized Hankel type convolutions are defined and extended to a class of generalized functions.

  20. Limitations of a convolution method for modeling geometric uncertainties in radiation therapy: the radiobiological dose-per-fraction effect

    International Nuclear Information System (INIS)

    Song, William; Battista, Jerry; Van Dyk, Jake

    2004-01-01

    The convolution method can be used to model the effect of random geometric uncertainties into planned dose distributions used in radiation treatment planning. This is effectively done by linearly adding infinitesimally small doses, each with a particular geometric offset, over an assumed infinite number of fractions. However, this process inherently ignores the radiobiological dose-per-fraction effect since only the summed physical dose distribution is generated. The resultant potential error on predicted radiobiological outcome [quantified in this work with tumor control probability (TCP), equivalent uniform dose (EUD), normal tissue complication probability (NTCP), and generalized equivalent uniform dose (gEUD)] has yet to be thoroughly quantified. In this work, the results of a Monte Carlo simulation of geometric displacements are compared to those of the convolution method for random geometric uncertainties of 0, 1, 2, 3, 4, and 5 mm (standard deviation). The α/β_CTV ratios of 0.8, 1.5, 3, 5, and 10 Gy are used to represent the range of radiation responses for different tumors, whereas a single α/β_OAR ratio of 3 Gy is used to represent all the organs at risk (OAR). The analysis is performed on a four-field prostate treatment plan of 18 MV x rays. The fraction numbers are varied from 1 to 50, with isoeffective adjustments of the corresponding dose-per-fractions to maintain a constant tumor control, using the linear-quadratic cell survival model. The average differences in TCP and EUD of the target, and in NTCP and gEUD of the OAR calculated from the convolution and Monte Carlo methods reduced asymptotically as the total fraction number increased, with the differences reaching negligible levels beyond the treatment fraction number of ≥20. The convolution method generally overestimates the radiobiological indices, as compared to the Monte Carlo method, for the target volume, and underestimates those for the OAR. These effects are interconnected and attributed
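
    The two models being compared above can be sketched in one dimension: the convolution method blurs the planned dose once (the infinite-fraction limit), while a finite-fraction simulation averages randomly shifted copies. For small fraction numbers the two differ, and neither carries the dose-per-fraction radiobiology discussed in the record. All numbers below are toy values.

```python
# Convolution (infinite fractions) vs. sampled shifts (finite fractions).
import numpy as np
from scipy.ndimage import gaussian_filter1d, shift

dose = np.zeros(200)
dose[80:120] = 2.0                          # 1D planned dose, 1 mm bins
sigma = 3.0                                 # set-up error SD in mm

convolved = gaussian_filter1d(dose, sigma)  # convolution-method dose
rng = np.random.default_rng(1)
n_fx = 5                                    # number of fractions
sampled = np.mean([shift(dose, rng.normal(0, sigma)) for _ in range(n_fx)],
                  axis=0)
print(np.max(np.abs(convolved - sampled)))  # shrinks as n_fx grows
```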

  1. Convoluted laminations in waterlain sediments: three examples from Eastern Canada and their relevance to neotectonics

    International Nuclear Information System (INIS)

    Macdougall, D.A.; Broster, B.E.

    1995-10-01

    The catastrophic disturbance of unconsolidated sediment produces a wide variety of deformation structures, particularly if the sediment is water-saturated at the time of disturbance. Layers, originally deposited as sub-horizontal, can become stretched or distended resulting in convoluted laminations. Faulted beds, slumped units, or dewatering structures may also occur in association with the disturbance. Convolutions were studied in five examples of Pleistocene glaciomarine deltas, at three locations in eastern Canada. Results from this study indicate that similar structures were produced in each of the sediment deposits, but some are especially common in specific facies (e.g. bottomset, foreset, topset). However, the particular cause of the convolutions varied within each deposit, and the origin could be better assessed when studied in relationship to other structures. None of the convolutions found could be attributed, categorically, to a seismic origin. However, neither could a seismic origin be dismissed for structures associated with convolutions occurring in deposits at: St. George, New Brunswick; Economy Point, Nova Scotia; and Lanark, Ontario. Of these deposits, the deformed structures at Economy Point are apparently post-glacial. (author). 24 refs., 58 figs

  2. Cascaded K-means convolutional feature learner and its application to face recognition

    Science.gov (United States)

    Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu

    2017-09-01

    Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightened feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture the nonlinear feature. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on AR and labeled faces in the wild datasets among the comparative methods.
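
    The filter-learning stage can be illustrated compactly: cluster random image patches with K-means and use the centroids as convolution filters, followed by the tanh nonlinearity mentioned above. Patch count, patch size, and number of clusters are illustrative assumptions, and the pyramid pooling stage is omitted.

```python
# Sketch of K-means convolutional feature learning.
import numpy as np
from sklearn.cluster import KMeans
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))
# Sample 500 random 7x7 patches and cluster them into 8 centroids.
patches = np.stack([
    image[i:i+7, j:j+7].ravel()
    for i, j in zip(rng.integers(0, 57, 500), rng.integers(0, 57, 500))
])
filters = KMeans(n_clusters=8, n_init=10).fit(patches).cluster_centers_

# Convolve the image with each learned filter, then apply tanh.
maps = [np.tanh(convolve2d(image, f.reshape(7, 7), mode="valid"))
        for f in filters]
print(maps[0].shape)                        # (58, 58)
```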

  3. Superposition Principle in Auger Recombination of Charged and Neutral Multicarrier States in Semiconductor Quantum Dots.

    Science.gov (United States)

    Wu, Kaifeng; Lim, Jaehoon; Klimov, Victor I

    2017-08-22

    Application of colloidal semiconductor quantum dots (QDs) in optical and optoelectronic devices is often complicated by unintentional generation of extra charges, which opens fast nonradiative Auger recombination pathways whereby the recombination energy of an exciton is quickly transferred to the extra carrier(s) and ultimately dissipated as heat. Previous studies of Auger recombination have primarily focused on neutral and, more recently, negatively charged multicarrier states. Auger dynamics of positively charged species remains more poorly explored due to difficulties in creating, stabilizing, and detecting excess holes in the QDs. Here we apply photochemical doping to prepare both negatively and positively charged CdSe/CdS QDs with two distinct core/shell interfacial profiles ("sharp" versus "smooth"). Using neutral and charged QD samples we evaluate Auger lifetimes of biexcitons, negative and positive trions (an exciton with an extra electron or a hole, respectively), and multiply negatively charged excitons. Using these measurements, we demonstrate that Auger decay of both neutral and charged multicarrier states can be presented as a superposition of independent elementary three-particle Auger events. As one of the manifestations of the superposition principle, we observe that the biexciton Auger decay rate can be presented as a sum of the Auger rates for independent negative and positive trion pathways. By comparing the measurements on the QDs with the "sharp" versus "smooth" interfaces, we also find that while affecting the absolute values of Auger lifetimes, manipulation of the shape of the confinement potential does not lead to violation of the superposition principle, which still allows us to accurately predict the biexciton Auger lifetimes based on the measured negative and positive trion dynamics. These findings indicate considerable robustness of the superposition principle as applied to Auger decay of charged and neutral multicarrier states
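
    One common way to state the superposition result described above, quoted here as an assumption consistent with the abstract rather than as the paper's exact formula, writes the biexciton Auger rate as the sum of the independent trion-pathway rates, with factors of 2 reflecting the biexciton's two electrons and two holes (τ denotes an Auger lifetime).

```latex
% Superposition of trion pathways for biexciton Auger decay (assumed form):
\frac{1}{\tau_{XX}} \;=\; \frac{2}{\tau_{X^-}} + \frac{2}{\tau_{X^+}}
```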

  4. Deep learning for steganalysis via convolutional neural networks

    Science.gov (United States)

    Qian, Yinlong; Dong, Jing; Wang, Wei; Tan, Tieniu

    2015-03-01

    Current work on steganalysis for digital images is focused on the construction of complex handcrafted features. This paper proposes a new paradigm for steganalysis: learning features automatically via deep learning models. We propose a customized Convolutional Neural Network for steganalysis. The proposed model can capture the complex dependencies that are useful for steganalysis. Compared with existing schemes, this model can automatically learn feature representations with several convolutional layers. The feature extraction and classification steps are unified under a single architecture, which means the guidance of classification can be used during the feature extraction step. We demonstrate the effectiveness of the proposed model on three state-of-the-art spatial domain steganographic algorithms - HUGO, WOW, and S-UNIWARD. Compared to the Spatial Rich Model (SRM), our model achieves comparable performance on BOSSbase and the realistic and large ImageNet database.

  5. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
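
    A sketch of the counting argument behind such a bound (notation illustrative, not the paper's): a rate-k/n code observed over N encoded blocks produces syndrome sequences of (n-k)N bits, and distinct correctable error patterns must map to distinct syndromes, so correcting every pattern of at most e errors requires

        \sum_{i=0}^{e} \binom{nN}{i} \le 2^{(n-k)N}.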

  6. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Science.gov (United States)

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  7. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, the author shows that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. The paper ends with conclusions and several comments on the required precision.

  8. Vibration analysis of FG cylindrical shells with power-law index using discrete singular convolution technique

    Science.gov (United States)

    Mercan, Kadir; Demir, Çiǧdem; Civalek, Ömer

    2016-01-01

    In the present manuscript, the free vibration response of circular cylindrical shells made of functionally graded material (FGM) is investigated. The method of discrete singular convolution (DSC) is used for the numerical solution of the governing equation of motion of the FGM cylindrical shell. The constitutive relations are based on Love's first-approximation shell theory. The material properties are graded in the thickness direction according to a volume-fraction power law. Frequency values are calculated for different types of boundary conditions and different material and geometric parameters. In general, close agreement between the obtained results and those of other researchers has been found.
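
    The volume-fraction power law referred to here is commonly written as follows (a standard form with illustrative symbols, assuming ceramic and metal constituents):

        V_c(z) = \left(\frac{1}{2} + \frac{z}{h}\right)^{p}, \qquad P(z) = P_m + (P_c - P_m)\,V_c(z),

    where h is the shell thickness, p the power-law index, and P(z) a graded property such as Young's modulus or mass density.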

  9. Convolution of second order linear recursive sequences II.

    Directory of Open Access Journals (Sweden)

    Szakács Tamás

    2017-12-01

    We continue the investigation of convolutions of second-order linear recursive sequences (see the first part in [1]). In this paper, we focus on the case when the characteristic polynomials of the sequences have a common root.
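
    For reference, the convolution of two sequences (a_n) and (b_n) considered in this line of work is the sequence

        c_n = \sum_{i=0}^{n} a_i\, b_{n-i},

    so that, for example, the self-convolution of the Fibonacci numbers 1, 1, 2, 3, 5, ... begins 1, 2, 5, 10, 20, ... (indexing the sum from the first terms).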

  10. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding the decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute the relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images with respect to the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides examples of simplified images, we analyze the development of the run time. Simplified images make it easier to focus on relevant parts and to find the reasons for a decision. The combination of an evolutionary algorithm with a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting to see which areas of the images are simplified and which parts are taken as relevant.

  11. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    In recent years, video super-resolution techniques have become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture to eliminate blurriness in images and video frames and to provide better reconstruction quality when scaling large datasets from low-resolution frames to high-resolution frames. We compare our outcomes with multiple existing algorithms. Extensive results for the proposed technique, RemCNN (Reconstruction error minimization Convolutional Neural Network), show that our model outperforms existing technologies such as bicubic, bilinear, and MCResNet and provides better reconstructed motion images and video frames. The experimental results show that our average PSNR is 47.80474 for upscale-2, 41.70209 for upscale-3, and 36.24503 for upscale-4 on the Myanmar dataset, which is very high in comparison with other existing techniques. These results demonstrate the high efficiency and better performance of the proposed real-time video scaling based on a convolutional neural network architecture.

  12. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    Science.gov (United States)

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
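
    A minimal sketch of the blur-and-add idea (illustrative only; the Gaussian kernels and linear depth scaling are stand-ins for the paper's system-specific, geometry-corrected PSFs):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def blur_and_add(planes, focus_idx, blur_per_plane=1.5):
            # planes: list of 2D arrays, one per reconstruction depth.
            # Each out-of-plane structure is blurred in proportion to its
            # distance from the plane in focus, then all planes are summed.
            recon = np.zeros_like(planes[0], dtype=float)
            for z, plane in enumerate(planes):
                sigma = blur_per_plane * abs(z - focus_idx)  # depth-dependent blur
                recon += gaussian_filter(plane.astype(float), sigma=sigma)
            return recon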

  13. Alternate symbol inversion for improved symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Simon, M. K.; Smith, J. G.

    1980-01-01

    Inverting alternate symbols of the encoder output of a convolutionally coded system provides sufficient density of symbol transitions to guarantee adequate symbol synchronizer performance, a guarantee otherwise lacking. Although alternate symbol inversion may increase or decrease the average transition density, depending on the data source model, it produces a maximum number of contiguous symbols without transition for a particular class of convolutional codes, independent of the data source model. Further, this maximum is sufficiently small to guarantee acceptable symbol synchronizer performance for typical applications. Subsequent inversion of alternate detected symbols permits proper decoding.
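
    The operation itself is simple to express; a sketch for binary symbols (a hypothetical helper, not the flight implementation):

        def invert_alternate(symbols):
            # Flip every second channel symbol; the receiver undoes the same
            # pattern after symbol detection, before Viterbi decoding.
            return [s ^ (i & 1) for i, s in enumerate(symbols)]

        # Even an all-zero symbol stream now contains guaranteed transitions:
        assert invert_alternate([0, 0, 0, 0]) == [0, 1, 0, 1]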

  14. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for fixed and shared graph structures. However, for most real data, the graph structures vary in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  15. Interplay of gravitation and linear superposition of different mass eigenstates

    International Nuclear Information System (INIS)

    Ahluwalia, D.V.

    1998-01-01

    The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed. copyright 1998 The American Physical Society

  16. No-reference image quality assessment based on statistics of convolution feature maps

    Science.gov (United States)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that the Natural Scene Statistics (NSS) features computed on convolution feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the distortion type and degree an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict a subjective quality score for a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  17. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks using the Faster R-CNN algorithm. This method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform airport detection. By taking the typical elongated linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve a better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.

  18. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    Science.gov (United States)

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.
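
    In the simplest coherent picture (a sketch, not the paper's DFT treatment), the two backbones contribute transmission amplitudes that add before being squared,

        T = |t_1 + t_2|^2 = T_1 + T_2 + 2\sqrt{T_1 T_2}\cos\phi,

    so for constructive interference (\phi \approx 0) with T_1 = T_2, the total transmission approaches 4T_1, more than twice the single-backbone value; in the Landauer picture the corresponding conductance is G = (2e^2/h)\,T.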

  19. Authentication Protocol using Quantum Superposition States

    Energy Technology Data Exchange (ETDEWEB)

    Kanamori, Yoshito [University of Alaska; Yoo, Seong-Moo [University of Alabama, Huntsville; Gregory, Don A. [University of Alabama, Huntsville; Sheldon, Frederick T [ORNL

    2009-01-01

    When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies introduced in this paper.

  20. Integral superposition of paraxial Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2015-01-01

    Roč. 25, - (2015), s. 109-155 ISSN 2336-3827 Institutional support: RVO:67985530 Keywords : integral superposition of paraxial Gaussian beams * inhomogeneous anisotropic media * S waves in weakly anisotropic media Subject RIV: DC - Siesmology, Volcanology, Earth Structure

  1. Evolution of superpositions of quantum states through a level crossing

    International Nuclear Information System (INIS)

    Torosov, B. T.; Vitanov, N. V.

    2011-01-01

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.
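
    For reference, in one common parametrization of the model (a sketch; symbols and conventions are illustrative), H(t) = \tfrac{1}{2}(\alpha t\,\sigma_z + \Omega\,\sigma_x), the LZSM result for the diabatic survival probability is

        P = \exp\!\left(-\frac{\pi\,\Omega^2}{2\,|\alpha|}\right) \quad (\hbar = 1),

    while the phase of the propagator, which matters precisely for superposition states, is the quantity that diverges.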

  2. Design and Implementation of Convolutional Encoder and Viterbi Decoder Using FPGA.

    Directory of Open Access Journals (Sweden)

    Riham Ali Zbaid

    2018-01-01

    Preserving the fidelity of data is the most significant concern in communication. Many factors affect the accuracy of data transmitted over a communication channel, such as noise; channel coding and encryption are used to overcome these effects. In this paper one type of channel coding, convolutional codes, is used. Convolutional encoding is a Forward Error Correction (FEC) method used in continuous one-way and real-time communication links. It can offer a great improvement in the bit error rate, enabling small, low-energy, and cheap transmission devices when used in applications such as satellites. This paper highlights the design, simulation and implementation of a convolutional encoder and Viterbi decoder using MATLAB (2011). The SIMULINK HDL coder is used to convert MATLAB-SIMULINK models to VHDL on an Altera Cyclone II DE2-70 board. Simulation and evaluation show that the implementation results coincide with the design results.
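
    A minimal sketch of rate-1/2 convolutional encoding (the classic constraint-length-3 code with octal generators 7 and 5 is used purely for illustration; the paper's FPGA design has its own parameters):

        def conv_encode(bits, g1=0b111, g2=0b101, K=3):
            # Rate-1/2 convolutional encoder: shift each input bit into a
            # K-bit register and emit one parity bit per generator polynomial.
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << K) - 1)
                out.append(bin(state & g1).count('1') & 1)  # parity for g1
                out.append(bin(state & g2).count('1') & 1)  # parity for g2
            return out

        print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]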

  3. Enhancement of digital radiography image quality using a convolutional neural network.

    Science.gov (United States)

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal-to-noise ratio. In this study, we explored whether the image quality achieved by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. Experiments on a test dataset containing 5 X-ray images showed that the proposed method outperformed traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal-to-noise ratio (PSNR) while keeping the processing time within one second. The experimental results demonstrated that a residual-to-residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. This study thus indicates that the RTR convolutional neural network is useful for improving the image quality achieved by digital radiography systems.

  4. Superposition of Planckian spectra and the distortions of the cosmic microwave background radiation

    International Nuclear Information System (INIS)

    Alexanian, M.

    1982-01-01

    A fit of the spectrum of the cosmic microwave background radiation (CMB) by means of a positive linear superposition of Planckian spectra implies an upper bound on the photon spectrum. The observed spectrum of the CMB yields a weighting function with a normalization greater than unity.
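
    The superposition in question has the form (sketch notation)

        F_\nu = \int_0^\infty w(T)\, B_\nu(T)\, dT, \qquad w(T) \ge 0,

    where B_\nu(T) = (2h\nu^3/c^2)\,[e^{h\nu/kT} - 1]^{-1} is the Planck spectrum and w is the weighting function whose normalization \int w(T)\,dT the abstract states must exceed unity.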

  5. A numerical dressing method for the nonlinear superposition of solutions of the KdV equation

    International Nuclear Information System (INIS)

    Trogdon, Thomas; Deconinck, Bernard

    2014-01-01

    In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg–de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t. (paper)

  6. The measurement and calculation of the X-ray spatial resolution obtained in the analytical electron microscope

    International Nuclear Information System (INIS)

    Michael, J.R.; Williams, D.B.

    1990-01-01

    The X-ray microanalytical spatial resolution is determined experimentally in various analytical electron microscopes by measuring the degradation of an atomically discrete composition profile across an interphase interface in a thin foil of Ni-Cr-Fe. The experimental spatial resolutions are then compared with calculated values. The calculated spatial resolutions are obtained by the mathematical convolution of the electron probe size with an assumed beam-broadening distribution and the single-scattering model of beam broadening. The probe size is measured directly from an image of the probe in a TEM/STEM and indirectly from dark-field signal changes resulting from scanning the probe across the edge of an MgO crystal in a dedicated STEM. This study demonstrates the applicability of the convolution technique to the calculation of the microanalytical spatial resolution obtained in the analytical electron microscope. It is demonstrated that, contrary to popular opinion, the electron probe size has a major impact on the measured spatial resolution in foils < 150 nm thick. (author)

  7. Tandem mass spectrometry data quality assessment by self-convolution

    Directory of Open Access Journals (Sweden)

    Tham Wai

    2007-09-01

    Background: Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. It has been noted that the quality of the mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level in the proteins identified. Results: The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the midpoint. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the midpoint to that of the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. Conclusion: We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the

  8. Tandem mass spectrometry data quality assessment by self-convolution.

    Science.gov (United States)

    Choo, Keng Wah; Tham, Wai Mun

    2007-09-20

    Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second is based on de novo sequencing from raw mass spectrometry data. It has been noted that the quality of the mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level in the proteins identified. The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the midpoint. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the midpoint to that of the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. We have demonstrated in this work a method for determining the quality of tandem MS data sets. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that the algorithm performs well
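
    A compact sketch of the described quality measure (a hypothetical numpy helper; binning, DC removal and normalisation follow the abstract's outline, not the authors' exact code):

        import numpy as np

        def spectrum_quality(intensities):
            # Self-convolve the binned spectrum with its time-reversed copy
            # via FFT; symmetric b-/y-ion peaks pile up at the midpoint lag.
            x = np.asarray(intensities, dtype=float)
            x = x - x.mean()                       # remove the "DC" component
            n = 2 * len(x) - 1
            conv = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(x[::-1], n), n)
            conv = np.abs(conv) / (np.abs(conv).sum() + 1e-12)  # normalise
            mid = len(x) - 1                       # midpoint of the convolution
            return conv[mid] / (conv.sum() - conv[mid] + 1e-12) # midpoint vs rest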

  9. Polyphony: superposition independent methods for ensemble-based drug discovery.

    Science.gov (United States)

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can lead to the structures of hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of analyzing ensembles of crystal structures and MD trajectories are limited and usually rely upon least-squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known relationships between interactions and conformational changes are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  10. The Features of Moessbauer Spectra of Hemoglobins: Approximation by Superposition of Quadrupole Doublets or by Quadrupole Splitting Distribution?

    International Nuclear Information System (INIS)

    Oshtrakh, M. I.; Semionkin, V. A.

    2004-01-01

    Moessbauer spectra of hemoglobins show some characteristic features at liquid-nitrogen temperature: a non-Lorentzian asymmetric line shape for oxyhemoglobins and a symmetric Lorentzian line shape for deoxyhemoglobins. A comparison of the approximation of the hemoglobin Moessbauer spectra by a superposition of two quadrupole doublets and by a distribution of the quadrupole splitting demonstrates that a superposition of two quadrupole doublets is more reliable and may reflect the non-equivalent iron electronic structure and stereochemistry in the α- and β-subunits of the hemoglobin tetramers.

  11. Discrete singular convolution for the generalized variable-coefficient ...

    African Journals Online (AJOL)

    Numerical solutions of the generalized variable-coefficient Korteweg-de Vries equation are obtained using a discrete singular convolution and a fourth order singly diagonally implicit Runge-Kutta method for space and time discretisation, respectively. The theoretical convergence of the proposed method is rigorously ...

  12. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.

  13. Decoherence bypass of macroscopic superpositions in quantum measurement

    International Nuclear Information System (INIS)

    Spehner, Dominique; Haake, Fritz

    2008-01-01

    We study a class of quantum measurement models. A microscopic object is entangled with a macroscopic pointer such that a distinct pointer position is tied to each eigenvalue of the measured object observable. Those different pointer positions mutually decohere under the influence of an environment. Overcoming limitations of previous approaches we (i) cope with initial correlations between pointer and environment by considering them initially in a metastable local thermal equilibrium, (ii) allow for object-pointer entanglement and environment-induced decoherence of distinct pointer readouts to proceed simultaneously, such that mixtures of macroscopically distinct object-pointer product states arise without intervening macroscopic superpositions, and (iii) go beyond the Markovian treatment of decoherence. (fast track communication)

  14. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.

    Science.gov (United States)

    Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh

    2017-09-01

    Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant; this prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.

  15. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    Science.gov (United States)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on which solid ink the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In
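
    For reference, the Yule-Nielsen modified spectral Neugebauer prediction underlying this work is (standard notation)

        R(\lambda) = \Bigl(\sum_i a_i\, R_i(\lambda)^{1/n}\Bigr)^{n},

    where the a_i are the fractional area coverages of the Neugebauer primaries, here computed from effective (superposition-dependent) rather than nominal ink coverages, R_i(\lambda) are the measured primary reflectances, and n is the Yule-Nielsen value.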

  16. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    Convolutional encoders and Viterbi decoders are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to provide a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...

  17. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.; Alexandrov, Y.A.; Andreeva, N.P.; Badyal, S.K.; Basova, E.S.; Bhalla, K.B.; Bhasin, A.; Bhatia, V.S.; Bradnova, V.; Bubnov, V.I.; Cai, X.; Chasnikov, I.Y.; Chen, G.M.; Chernova, L.P.; Chernyavsky, M.M.; Dhamija, S.; Chenawi, K.El; Felea, D.; Feng, S.Q.; Gaitinov, A.S.; Ganssauge, E.R.; Garpman, S.; Gerassimov, S.G.; Gheata, A.; Gheata, M.; Grote, J.; Gulamov, K.G.; Gupta, S.K.; Gupta, V.K.; Henjes, U.; Jakobsson, B.; Kanygina, E.K.; Karabova, M.; Kharlamov, S.P.; Kovalenko, A.D.; Krasnov, S.A.; Kumar, V.; Larionova, V.G.; Li, Y.X.; Liu, L.S.; Lokanathan, S.; Lord, J.J.; Lukicheva, N.S.; Lu, Y.; Luo, S.B.; Mangotra, L.K.; Manhas, I.; Mittra, I.S.; Musaeva, A.K.; Nasyrov, S.Z.; Navotny, V.S.; Nystrand, J.; Otterlund, I.; Peresadko, N.G.; Qian, W.Y.; Qin, Y.M.; Raniwala, R.; Rao, N.K.; Roeper, M.; Rusakova, V.V.; Saidkhanov, N.; Salmanova, N.A.; Seitimbetov, A.M.; Sethi, R.; Singh, B.; Skelding, D.; Soderstrem, K.; Stenlund, E.; Svechnikova, L.N.; Svensson, T.; Tawfik, A.M.; Tothova, M.; Tretyakova, M.I.; Trofimova, T.P.; Tuleeva, U.I.; Vashisht, Vani; Vokal, S.; Vrlakova, J.; Wang, H.Q.; Wang, X.R.; Weng, Z.Q.; Wilkes, R.J.; Yang, C.B.; Yin, Z.B.; Yu, L.Z.; Zhang, D.H.; Zheng, P.Y.; Zhokhova, S.I.; Zhou, D.C.

    1999-01-01

    Angular distributions of charged particles produced in 16O and 32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows one to conclude that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus

  18. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    Energy Technology Data Exchange (ETDEWEB)

    Orlova, G I; Adamovich, M I; Aggarwal, M M; Alexandrov, Y A; Andreeva, N P; Badyal, S K; Basova, E S; Bhalla, K B; Bhasin, A; Bhatia, V S; Bradnova, V; Bubnov, V I; Cai, X; Chasnikov, I Y; Chen, G M; Chernova, L P; Chernyavsky, M M; Dhamija, S; Chenawi, K El; Felea, D; Feng, S Q; Gaitinov, A S; Ganssauge, E R; Garpman, S; Gerassimov, S G; Gheata, A; Gheata, M; Grote, J; Gulamov, K G; Gupta, S K; Gupta, V K; Henjes, U; Jakobsson, B; Kanygina, E K; Karabova, M; Kharlamov, S P; Kovalenko, A D; Krasnov, S A; Kumar, V; Larionova, V G; Li, Y X; Liu, L S; Lokanathan, S; Lord, J J; Lukicheva, N S; Lu, Y; Luo, S B; Mangotra, L K; Manhas, I; Mittra, I S; Musaeva, A K; Nasyrov, S Z; Navotny, V S; Nystrand, J; Otterlund, I; Peresadko, N G; Qian, W Y; Qin, Y M; Raniwala, R; Rao, N K; Roeper, M; Rusakova, V V; Saidkhanov, N; Salmanova, N A; Seitimbetov, A M; Sethi, R; Singh, B; Skelding, D; Soderstrem, K; Stenlund, E; Svechnikova, L N; Svensson, T; Tawfik, A M; Tothova, M; Tretyakova, M I; Trofimova, T P; Tuleeva, U I; Vashisht, Vani; Vokal, S; Vrlakova, J; Wang, H Q; Wang, X R; Weng, Z Q; Wilkes, R J; Yang, C B; Yin, Z B; Yu, L Z; Zhang, D H; Zheng, P Y; Zhokhova, S I; Zhou, D C

    1999-03-01

    Angular distributions of charged particles produced in 16O and 32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows one to conclude that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus.

  19. Nucleus-nucleus collision as superposition of nucleon-nucleus collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.

    1999-01-01

    Angular distributions of charged particles produced in 16O and 32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum allows one to conclude that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus. (orig.)

  20. Improving the Separability of Deep Features with Discriminative Convolution Filters for RSI Classification

    Directory of Open Access Journals (Sweden)

    Na Liu

    2018-03-01

    The extraction of activation vectors (or deep features) from the fully connected layers of a convolutional neural network (CNN) model is widely used for remote sensing image (RSI) representation. In this study, we propose to learn discriminative convolution filters (DCF) based on class-specific separability criteria for the linear transformation of deep features. In particular, two types of pretrained CNNs, CaffeNet and VGG-VD16, are introduced to illustrate the generality of the proposed DCF. The activation vectors extracted from the fully connected layers of a CNN are rearranged into the form of an image matrix, from which a spatial arrangement of local patches is extracted using a sliding window strategy. DCF learning is then performed on each local patch individually to obtain the corresponding discriminative convolution kernel through generalized eigenvalue decomposition. The proposed DCF learning shows that a convolutional kernel of small size (e.g., 3 × 3 pixels) can be effectively learned on a small local patch (e.g., 8 × 8 pixels), thereby ensuring that the linear transformation of deep features maintains low computational complexity. Experiments on two RSI datasets demonstrate the effectiveness of DCF in improving the classification performance of deep features without increasing dimensionality.
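
    A sketch of learning one such discriminative kernel by generalized eigenvalue decomposition (the scatter-matrix criterion shown is a common separability choice; the paper's patch extraction and exact criteria differ in detail):

        import numpy as np
        from scipy.linalg import eigh

        def learn_dcf(patches, labels, k=3):
            # patches: (N, k*k) vectorized local patches; labels: (N,) classes.
            # Maximize between-class over within-class scatter of responses.
            mu = patches.mean(axis=0)
            Sb = np.zeros((patches.shape[1],) * 2)
            Sw = np.zeros_like(Sb)
            for c in np.unique(labels):
                Xc = patches[labels == c]
                mc = Xc.mean(axis=0)
                d = (mc - mu)[:, None]
                Sb += len(Xc) * d @ d.T               # between-class scatter
                Sw += (Xc - mc).T @ (Xc - mc)         # within-class scatter
            w, V = eigh(Sb, Sw + 1e-6 * np.eye(len(Sw)))  # generalized EVD
            return V[:, -1].reshape(k, k)             # top filter as k x k kernel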

  1. Plant species classification using deep convolutional neural network

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Karstoft, Henrik; Midtiby, Henrik Skov

    2016-01-01

    Information on which weed species are present within agricultural fields is important for site specific weed management. This paper presents a method that is capable of recognising plant species in colour images by using a convolutional neural network. The network is built from scratch trained an...

  2. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, in this paper we propose a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and was tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has obvious advantages over the traditional multi-feature combined cloud detection method in accuracy, especially in snow and other areas covered by bright non-cloud objects. Moreover, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to multiple kinds of optical imagery.

  3. Is Kinesio Taping to Generate Skin Convolutions Effective for Increasing Local Blood Circulation?

    OpenAIRE

    Yang, Jae-Man; Lee, Jung-Hoon

    2018-01-01

    Background It is unclear whether traditional application of Kinesio taping, which produces wrinkles in the skin, is effective for improving blood circulation. This study investigated local skin temperature changes after the application of an elastic therapeutic tape using convolution and non-convolution taping methods (CTM/NCTM). Material/Methods Twenty-eight pain-free men underwent CTM and NCTM randomly applied to the right and left sides of the lower back. Using infrared thermography, skin ...

  4. Segmentation of Drosophila Heart in Optical Coherence Microscopy Images Using Convolutional Neural Networks

    OpenAIRE

    Duan, Lian; Qin, Xi; He, Yuanhao; Sang, Xialin; Pan, Jinda; Xu, Tao; Men, Jing; Tanzi, Rudolph E.; Li, Airong; Ma, Yutao; Zhou, Chao

    2018-01-01

    Convolutional neural networks are powerful tools for image segmentation and classification. Here, we use this method to identify and mark the heart region of Drosophila at different developmental stages in the cross-sectional images acquired by a custom optical coherence microscopy (OCM) system. With our well-trained convolutional neural network model, the heart regions through multiple heartbeat cycles can be marked with an intersection over union (IOU) of ~86%. Various morphological and dyn...

  5. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    Science.gov (United States)

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  6. TH-E-BRE-03: A Novel Method to Account for Ion Chamber Volume Averaging Effect in a Commercial Treatment Planning System Through Convolution

    International Nuclear Information System (INIS)

    Barraclough, B; Li, J; Liu, C; Yan, G

    2014-01-01

    Purpose: Fourier-based deconvolution approaches used to eliminate the ion chamber volume averaging effect (VAE) suffer from measurement noise. This work aims to investigate a novel method to account for ion chamber VAE through convolution in a commercial treatment planning system (TPS). Methods: Beam profiles of various field sizes and depths on an Elekta Synergy were collected with a finite-size ion chamber (CC13) to derive a clinically acceptable beam model for a commercial TPS (Pinnacle3), following the vendor-recommended modeling process. The TPS-calculated profiles were then externally convolved with a Gaussian function representing the chamber (σ = chamber radius). The agreement between the convolved profiles and measured profiles was evaluated with a one-dimensional gamma analysis (1%/1 mm) as an objective function for optimization. TPS beam model parameters for the focal and extra-focal sources were optimized and loaded back into the TPS for a new calculation. This process was repeated until the objective function converged using a simplex optimization method. Planar doses for 30 IMRT beams were calculated with both the clinical and the re-optimized beam models and compared with MapCHECK™ measurements to evaluate the new beam model. Results: After re-optimization, the two orthogonal source sizes for the focal source reduced from 0.20/0.16 cm to 0.01/0.01 cm, the minimum values allowed in Pinnacle. No significant change in the parameters for the extra-focal source was observed. With the re-optimized beam model, the average gamma passing rate for the 30 IMRT beams increased from 92.1% to 99.5% with a 3%/3 mm criterion and from 82.6% to 97.2% with a 2%/2 mm criterion. Conclusion: We propose a novel method to account for ion chamber VAE in a commercial TPS through convolution. The re-optimized beam model, with VAE accounted for through a reliable and easy-to-implement convolution and optimization approach, outperforms the original beam model in standard IMRT QA
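
    The convolution step itself is straightforward; a sketch for a 1D profile (assumes a Gaussian chamber kernel with σ equal to the chamber radius, per the abstract; the grid spacing and default radius values are illustrative):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def convolve_with_chamber(profile, spacing_mm, radius_mm=3.0):
            # Emulate ion-chamber volume averaging by convolving the
            # TPS-calculated profile with a Gaussian of sigma = chamber radius.
            sigma_px = radius_mm / spacing_mm
            return gaussian_filter1d(np.asarray(profile, float), sigma=sigma_px)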

  7. Histopathological Breast-Image Classification Using Local and Frequency Domains by Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Abdullah-Al Nahid

    2018-01-01

    Identification of the malignancy of tissues from histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and, moreover, very challenging. Success in finding malignancy from histopathological images primarily depends on long-term experience, though experts sometimes disagree in their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist to give a second opinion that can increase the reliability of the radiologist's decision. Among the different image analysis techniques, classification of the images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper we have classified a set of histopathological breast-cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, the object-oriented local features also contain significant information - for example, the Local Binary Pattern (LBP) represents effective textural information, the histogram represents the pixel strength distribution, the Contourlet Transform (CT) gives much detailed information about the smoothness of edges, and the Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as a histopathological image classifier. To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional

  8. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics of the s...
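
    The criterion itself is standard (sketch notation): for a candidate model m with maximized likelihood \hat{L}_m, d_m free parameters and N observations,

        \mathrm{BIC}_m = \log\hat{L}_m - \frac{d_m}{2}\log N,

    and the estimated number of sources is the one whose candidate model attains the largest \mathrm{BIC}_m.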

  9. The Application of Real Convolution for Analytically Evaluating Fermi-Dirac-Type and Bose-Einstein-Type Integrals

    Directory of Open Access Journals (Sweden)

    Jerry P. Selvaggi

    2018-01-01

    The Fermi-Dirac-type or Bose-Einstein-type integrals can be transformed into two convergent real-convolution integrals. The transformation simplifies the integration process and may ultimately produce a complete analytical solution without recourse to any mathematical approximations. The real-convolution integrals can either be directly integrated or be transformed into the Laplace transform inversion integral, in which case the full power of contour integration becomes available. Which method is employed depends upon the complexity of the real-convolution integral. A number of examples are introduced to illustrate the efficacy of the analytical approach.
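
    As a sketch of why such a transformation exists: the Fermi-Dirac integral of order j,

        F_j(\eta) = \frac{1}{\Gamma(j+1)} \int_0^\infty \frac{x^j}{1 + e^{x-\eta}}\,dx
                  = \frac{1}{\Gamma(j+1)} \int_0^\infty x^j\,\sigma(\eta - x)\,dx, \qquad \sigma(u) = \frac{1}{1 + e^{-u}},

    is exactly a one-sided real convolution of x^j with the logistic function, a form to which Laplace-transform techniques apply.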

  10. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    Science.gov (United States)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows in high-spatial-resolution satellite images, especially in urban areas. Although shadows in imagery severely affect the extraction of land cover or land use information, they provide auxiliary information for building extraction, for which satisfactory accuracy is hard to achieve through image classification alone. This paper focuses on a method of building shadow extraction based on designing a fully convolutional network and training samples collected from GF-2 satellite imagery of the urban region of Changchun city. By means of spatial filtering and the calculation of adjacency relationships along the sunlight direction, small patches from vegetation or bridges were eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted with the proposed method was compared with the results of traditional object-oriented supervised classification algorithms, showing that the deep learning network approach can improve the accuracy to a large extent.

  11. Adversarial training and dilated convolutions for brain MRI segmentation

    NARCIS (Netherlands)

    Moeskops, P.; Veta, M.; Lafarge, M.W.; Eppenhof, K.A.J.; Pluim, J.P.W.

    2017-01-01

    Convolutional neural networks (CNNs) have been applied to various automatic image segmentation tasks in medical image analysis, including brain MRI segmentation. Generative adversarial networks have recently gained popularity because of their power in generating images that are difficult to

  12. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    Science.gov (United States)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing. Joint extraction of this information from a hyperspectral image is one of the most important methods for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which extracts the spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through the traditional pooling layer, which has only one pooling-window size. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results on a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
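
    A minimal sketch of the spatial pyramid pooling step described here (numpy-only max pooling over pyramid grids; the paper's 3D-CNN context and level choices are omitted):

        import numpy as np

        def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
            # fmap: (H, W, C) feature map. For each pyramid level, split the
            # map into level x level cells, max-pool each cell per channel,
            # and concatenate, giving a fixed-length vector regardless of H, W.
            H, W, C = fmap.shape
            out = []
            for L in levels:
                ys = np.linspace(0, H, L + 1, dtype=int)
                xs = np.linspace(0, W, L + 1, dtype=int)
                for i in range(L):
                    for j in range(L):
                        cell = fmap[ys[i]:ys[i+1], xs[j]:xs[j+1]]
                        out.append(cell.max(axis=(0, 1)))   # per-channel max
            return np.concatenate(out)                      # length C * sum(L^2)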

  13. Teleportation of a Superposition of Three Orthogonal States of an Atom via Photon Interference

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shi-Biao

    2006-01-01

    We propose a scheme to teleport a superposition of three states of an atom trapped in a cavity to a second atom trapped in a remote cavity. The scheme is based on the detection of photons leaking from the cavities after the atom-cavity interaction.

  14. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    Science.gov (United States)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  15. Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order

    Directory of Open Access Journals (Sweden)

    B. F. Uchôa-Filho

    2008-06-01

    Full Text Available We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.

  16. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color, and proportion between the optic disc and physiologic cup, but the lack of agreement among experts remains the main diagnostic problem. The application of deep convolutional neural networks, combined with the automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, and the major and minor radii of the optic disc and cup, in addition to all the ratios among the previous parameters, may enable better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  17. Airplane detection in remote sensing images using convolutional neural networks

    Science.gov (United States)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. Deep learning methods show greater advantages than traditional methods in target detection with the rise of deep neural networks, and we give an explanation of why this happens. To improve detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one class, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  18. Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone

    Science.gov (United States)

    Visher, Glenn S.; Cunningham, Russ D.

    1981-03-01

    Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.

  19. Capacity-Approaching Superposition Coding for Optical Fiber Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-01-01

    We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive......-SCM) is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for the back-to-back case and correlated with simulated results for ideal transmission over an additive...... white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240-km standard single mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated....

  20. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    International Nuclear Information System (INIS)

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-01-01

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.

  1. Double-contrast examination of the gastric antrum without Duodenal superposition

    International Nuclear Information System (INIS)

    Treugut, H.; Isper, J.

    1980-01-01

    Using a modified technique for double-contrast examination of the stomach, it was possible in 75% of cases to perform a study without superposition of the duodenum and jejunum on the distal stomach, compared to 36% with the usual method. In this technique a small amount (50 ml) of barium suspension is given to the patient in the left decubitus position by a straw or gastric tube after antiperistaltic medication. There was no difference in the quality of mucosal coating compared to the technique using higher volumes of barium. (orig.)

  2. Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.

    Science.gov (United States)

    Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong

    2017-11-17

    Alcohol use disorder (AUD) is an important brain disease that alters the brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 were used as the training set, with data augmentation applied; the remaining 135 images were used as the test set. We chose the latest powerful technique, a convolutional neural network (CNN) built from convolutional layers, rectified linear unit layers, pooling layers, fully connected layers, and a softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling (sketched below). The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%, better than three state-of-the-art approaches. Stochastic pooling performed better than max pooling and average pooling, and a CNN with five convolutional layers and two fully connected layers performed best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
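
    For reference, a minimal sketch of stochastic pooling in the Zeiler-Fergus style, presumably the variant compared here (window contents and random seed are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_pool(window, train=True):
        """Stochastic pooling over one window of non-negative activations.

        Training: sample one activation with probability proportional to
        its value. Testing: take the probability-weighted average, so the
        expected pooled value is used deterministically.
        """
        a = window.ravel()
        total = a.sum()
        if total == 0.0:          # all-zero window: same as max/avg pooling
            return 0.0
        p = a / total
        if train:
            return rng.choice(a, p=p)
        return float((p * a).sum())

    w = np.array([[0.0, 1.0], [2.0, 5.0]])
    print(stochastic_pool(w, train=True))   # 1, 2 or 5, drawn with p = a/sum(a)
    print(stochastic_pool(w, train=False))  # (1*1 + 2*2 + 5*5)/8 = 3.75
    ```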

  3. Evaluation of six TPS algorithms in computing entrance and exit doses

    Science.gov (United States)

    Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%‐3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.N‐, 87.53.Bn PMID:24892349

  4. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    Science.gov (United States)

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. Traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods have achieved better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection and ignore texture details. In this paper, we propose a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), that exploits a hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which we leverage hierarchical convolutional features to construct an image pyramid representation. Second, our proposed deep network can directly exploit convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of the hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  5. Diffraction and Dirichlet problem for parameter-elliptic convolution ...

    African Journals Online (AJOL)

    In this paper we evaluate the difference between the inverse operators of a Dirichlet problem and of a diffraction problem for parameter-elliptic convolution operators with constant symbols. We prove that the inverse operator of a Dirichlet problem can be obtained as a limit case of such a diffraction problem. Quaestiones ...

  6. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images.

    Science.gov (United States)

    Cheng, Phillip M; Malhi, Harshawn S

    2017-04-01

    The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.05), suggesting that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
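
    A minimal sketch of this frozen-feature transfer learning setup, using PyTorch/torchvision as a stand-in for the original Caffe-based networks (the optimizer, learning rate and dummy batch below are illustrative assumptions, not the study's settings):

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a network pre-trained on ImageNet and freeze its convolutional
    # layers so they act as a fixed feature extractor, as in the study.
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():
        p.requires_grad = False

    # Replace the final fully connected layer to output the 11 ultrasound
    # categories; only the classifier weights will be retrained.
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, 11)

    optimizer = torch.optim.SGD(
        (p for p in net.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
    )
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of 256 x 256 crops
    # (the study's rescale size; torchvision's VGG accepts it through its
    # adaptive pooling layer).
    x, y = torch.randn(4, 3, 256, 256), torch.randint(0, 11, (4,))
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()
    ```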

  7. Trajectory Generation Method with Convolution Operation on Velocity Profile

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Geon [Hanyang Univ., Seoul (Korea, Republic of); Kim, Doik [Korea Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-03-15

    The use of robots is no longer limited to the field of industrial robots and is now expanding into the fields of service and medical robots. In this light, a trajectory generation method that can respond instantaneously to the external environment is strongly required. Toward this end, this study proposes a method that enables a robot to change its trajectory in real time using a convolution operation. The proposed method generates a trajectory in real time and satisfies the physical limits of the robot system, such as acceleration and velocity limits. Moreover, a new way to improve the previous method, which generates inefficient trajectories in some cases owing to the trapezoidal shape of its trajectories, is proposed by introducing a triangular shape. The validity and effectiveness of the proposed method are shown through a numerical simulation and a comparison with the previous convolution method.
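
    A minimal sketch of the underlying convolution operation (illustrative limits and timing, not the authors' exact formulation): convolving a rectangular velocity profile with a normalized box filter yields a trapezoidal profile that preserves the travelled distance while bounding the acceleration, and each further pass rounds the corners:

    ```python
    import numpy as np

    dt = 1e-3            # control period [s]
    v_max, a_max = 1.0, 2.0
    distance = 0.5       # target displacement [m]

    # Step 1: a rectangular velocity profile covering the distance at v_max.
    n0 = max(int(round(distance / (v_max * dt))), 1)
    v = np.full(n0, distance / (n0 * dt))

    # Step 2: convolve with a normalized box filter whose length equals the
    # ramp time v_max/a_max, so the trapezoid respects the acceleration limit.
    n1 = max(int(round((v_max / a_max) / dt)), 1)
    v = np.convolve(v, np.ones(n1) / n1)      # trapezoidal profile

    # A further box convolution would give an S-curve (jerk-limited) profile;
    # every pass preserves the area, i.e. the total travelled distance.
    print(v.max() <= v_max + 1e-9, np.sum(v) * dt)  # True, 0.5
    ```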

  8. Theoretical calculation of zero field splitting parameters of Cr{sup 3+} doped ammonium oxalate monohydrate

    Energy Technology Data Exchange (ETDEWEB)

    Kripal, Ram, E-mail: ram_kripal2001@rediffmail.com; Yadav, Awadhesh Kumar, E-mail: aky.physics@gmail.com

    2015-06-15

    The zero field splitting parameters (ZFSPs) D and E of the Cr{sup 3+} ion doped into ammonium oxalate monohydrate (AOM) are calculated using the superposition model. The theoretically calculated ZFSPs for Cr{sup 3+} in the AOM crystal are compared with the experimental values obtained by electron paramagnetic resonance (EPR). The theoretical ZFSPs are in good agreement with the experimental ones. The energy band positions of the optical absorption spectra of Cr{sup 3+} in the AOM crystal calculated with the CFA package are also in good agreement with the experimental values.
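
    A minimal sketch of a superposition-model evaluation of D and E (the rank-2 angular factors are the standard Newman-model expressions; the intrinsic parameter b2, power-law exponent t2, reference distance R0 and ligand coordinates below are purely illustrative, not the paper's fitted values):

    ```python
    import numpy as np

    def zfs_superposition(ligands, b2, R0, t2):
        """Superposition-model estimate of the ZFS parameters D and E.

        Each ligand at (R, theta, phi) contributes b2 * (R0/R)**t2 times a
        rank-2 angular factor; all numerical values here are illustrative.
        """
        D = E = 0.0
        for R, theta, phi in ligands:
            c = b2 * (R0 / R) ** t2
            D += c * 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)
            E += c * 0.5 * np.sin(theta) ** 2 * np.cos(2.0 * phi)
        return D, E

    # Hypothetical distorted octahedron: (R [Angstrom], theta, phi) per ligand.
    ligands = [(1.98, 0.10, 0.0), (1.98, np.pi - 0.10, 0.0),
               (2.05, np.pi / 2, 0.0), (2.05, np.pi / 2, np.pi / 2),
               (2.05, np.pi / 2, np.pi), (2.05, np.pi / 2, 3 * np.pi / 2)]
    D, E = zfs_superposition(ligands, b2=-0.25, R0=2.0, t2=8.0)  # b2 in cm^-1
    print(D, E)
    ```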

  9. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...

  10. Finding strong lenses in CFHTLS using convolutional neural networks

    Science.gov (United States)

    Jacobs, C.; Glazebrook, K.; Collett, T.; More, A.; McCarthy, C.

    2017-10-01

    We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62 406 simulated lenses and 64 673 non-lens negative examples generated with two different methodologies. An ensemble of trained networks was applied to all of the 171 deg2 of the CFHTLS wide field image data, identifying 18 861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalogue as potential deflectors, identified 2465 candidates including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2097 candidates we classify as false positives. For the catalogue-based search we estimate a completeness of 21-28 per cent with respect to detectable lenses and a purity of 15 per cent, with a false-positive rate of 1 in 671 images tested. We predict a human astronomer reviewing candidates produced by the system would identify 20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.

  11. Reactivity calculation with reduction of the nuclear power fluctuations

    International Nuclear Information System (INIS)

    Suescun Diaz, Daniel; Senra Martinez, Aquilino

    2009-01-01

    A new formulation is presented in this paper for the calculation of reactivity, which is simpler than the formulation that uses the Laplace and Z transforms. A treatment is also made to reduce the intensity of the noise found in the nuclear power signal used in the calculation of reactivity, for which two different classes of filters are used. This treatment is based on the fact that the reactivity can be written, using the composite Simpson's rule, as a sum of two convolution terms with the impulse response characteristic of a linear system. The linear part is calculated using a finite impulse response (FIR) filter. The non-linear part is calculated using a filter exponentially adjusted by the least squares method, which does not cause attenuation in the reactivity calculation.

  12. Reactivity calculation with reduction of the nuclear power fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Suescun Diaz, Daniel [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914 RJ (Brazil)], E-mail: dsuescun@hotmail.com; Senra Martinez, Aquilino [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914 RJ (Brazil)

    2009-05-15

    A new formulation is presented in this paper for the calculation of reactivity, which is simpler than the formulation that uses the Laplace and Z transforms. A treatment is also made to reduce the intensity of the noise found in the nuclear power signal used in the calculation of reactivity, for which two different classes of filters are used. This treatment is based on the fact that the reactivity can be written, using the composite Simpson's rule, as a sum of two convolution terms with the impulse response characteristic of a linear system. The linear part is calculated using a finite impulse response (FIR) filter. The non-linear part is calculated using a filter exponentially adjusted by the least squares method, which does not cause attenuation in the reactivity calculation.
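
    A minimal sketch of the convolution structure exploited here, applied to inverse point kinetics with truncated exponential kernels acting as FIR filters (the delayed-neutron constants and the noiseless power history are illustrative; this is not the authors' exact filter design):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Six-group delayed-neutron data (typical U-235 values, for illustration).
    beta_i = np.array([0.000215, 0.001424, 0.001274,
                       0.002568, 0.000748, 0.000273])
    lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # [1/s]
    beta, Lambda = beta_i.sum(), 2.0e-5                            # [s]

    dt, omega = 0.01, 0.05
    t = np.arange(0.0, 300.0, dt)
    P = np.exp(omega * t)           # noiseless exponential power history

    # Inverse point kinetics written as a sum of convolutions:
    #   rho(t) = beta + Lambda*P'(t)/P(t)
    #            - (1/P(t)) * sum_i beta_i*lam_i * (exp(-lam_i u) * P)(t)
    # Each exponential kernel is truncated and applied as an FIR filter.
    rho = beta + Lambda * np.gradient(P, dt) / P
    for bi, li in zip(beta_i, lam_i):
        kern = bi * li * np.exp(-li * np.arange(0.0, 12.0 / li, dt)) * dt
        rho -= fftconvolve(P, kern)[: len(P)] / P

    # After the start-up transient decays, rho matches the inhour value.
    rho_inhour = Lambda * omega + np.sum(beta_i * omega / (omega + lam_i))
    print(rho[-1], rho_inhour)
    ```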

  13. Classification of stroke disease using convolutional neural network

    Science.gov (United States)

    Marbun, J. T.; Seniman; Andayani, U.

    2018-03-01

    Stroke is a condition that occurs when the blood supply stops flowing to the brain because of a blockage or a broken blood vessel. Symptoms of stroke include reduced consciousness, disrupted vision, and a paralyzed body. The general examination performed to image the affected brain region uses computerized tomography (CT) scans. The image produced by CT must be checked manually, under proper lighting, by a doctor to determine the type of stroke, which is why a method to classify stroke from CT images automatically is needed. The method proposed in this research is a convolutional neural network, with CT images of the brain as input. The stages before classification are image processing (grayscaling, scaling, and contrast limited adaptive histogram equalization), after which the image is classified with the convolutional neural network; a sketch of this preprocessing chain follows. The results showed that the method can be used as a tool to classify stroke disease and distinguish the type of stroke from CT images.
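
    A minimal sketch of the named preprocessing chain using OpenCV (the target size, CLAHE settings and file name are illustrative assumptions, not the authors' published parameters):

    ```python
    import cv2
    import numpy as np

    def preprocess_ct(path, size=(256, 256)):
        """Grayscale -> resize -> CLAHE, the chain named in the abstract."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscaling
        img = cv2.resize(img, size)                    # scaling
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        img = clahe.apply(img)                         # contrast equalization
        return img.astype(np.float32) / 255.0          # normalized CNN input

    # x = preprocess_ct("brain_ct_slice.png")  # hypothetical file name
    ```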

  14. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. Firstly, a saliency detection method is utilized to get training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, a prior in a specific task is helpful for the task solution. Therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization) can be excluded from our proposed approach; they jointly improve image representation and classification performance.

  15. Enhancing neutron beam production with a convoluted moderator

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, E.B., E-mail: iversoneb@ornl.gov [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Baxter, D.V. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Muhrer, G. [Lujan Neutron Scattering Center, Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Ansell, S.; Dalgliesh, R. [ISIS Facility, Rutherford Appleton Laboratory, Chilton (United Kingdom); Gallmeier, F.X. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Kaiser, H. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Lu, W. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2014-10-21

    We describe a new concept for a neutron moderating assembly resulting in the more efficient production of slow neutron beams. The Convoluted Moderator, a heterogeneous stack of interleaved moderating material and nearly transparent single-crystal spacers, is a directionally enhanced neutron beam source, improving beam emission over an angular range comparable to the range accepted by neutron beam lines and guides. We have demonstrated gains of 50% in slow neutron intensity for a given fast neutron production rate while simultaneously reducing the wavelength-dependent emission time dispersion by 25%, both coming from a geometric effect in which the neutron beam lines view a large surface area of moderating material in a relatively small volume. Additionally, we have confirmed a Bragg-enhancement effect arising from coherent scattering within the single-crystal spacers. We have not observed hypothesized refractive effects leading to additional gains at long wavelength. In addition to confirmation of the validity of the Convoluted Moderator concept, our measurements provide a series of benchmark experiments suitable for developing simulation and analysis techniques for practical optimization and eventual implementation at slow neutron source facilities.

  16. Multi-Input Convolutional Neural Network for Flower Grading

    Directory of Open Access Journals (Sweden)

    Yu Sun

    2017-01-01

    Full Text Available Flower grading is a significant task because it is extremely convenient for managing flowers in greenhouses and markets. With the development of computer vision, flower grading has become an interdisciplinary focus in both botany and computer vision. A new dataset named BjfuGloxinia contains three quality grades; each grade consists of 107 samples and 321 images. A multi-input convolutional neural network is designed for large scale flower grading. The multi-input CNN achieves a satisfactory accuracy of 89.6% on BjfuGloxinia after data augmentation. Compared with a single-input CNN, the accuracy of the multi-input CNN is increased by 5% on average, demonstrating that a multi-input convolutional neural network is a promising model for flower grading. Although data augmentation contributes to the model, the accuracy is still limited by a lack of sample diversity. The majority of misclassifications derive from the medium grade. Image processing based bud detection is useful for reducing misclassification, increasing the accuracy of flower grading to approximately 93.9%.

  17. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  18. Long-term creep modeling of wood using time temperature superposition principle

    OpenAIRE

    Gamalath, Sandhya Samarasinghe

    1991-01-01

    Long-term creep and recovery models (master curves) were developed from short-term data using the time temperature superposition principle (TTSP) for kiln-dried southern pine loaded in compression parallel-to-grain and exposed to constant environmental conditions (~70°F, ~9% EMC). Short-term accelerated creep (17 hour) and recovery (35 hour) data were collected for each specimen over a range of temperatures (70°F-150°F) at a constant moisture condition of 9%. The compressive stra...

  19. The role and production of polar/subtropical jet superpositions in two high-impact weather events over North America

    Science.gov (United States)

    Winters, Andrew C.

    Careful observational work has demonstrated that the tropopause is typically characterized by a three-step pole-to-equator structure, with each break between steps in the tropopause height associated with a jet stream. While the two jet streams, the polar and subtropical jets, typically occupy different latitude bands, their separation can occasionally vanish, resulting in a vertical superposition of the two jets. A cursory examination of a number of historical and recent high-impact weather events over North America and the North Atlantic indicates that superposed jets can be an important component of their evolution. Consequently, this dissertation examines two recent jet superposition cases, the 18--20 December 2009 Mid-Atlantic Blizzard and the 1--3 May 2010 Nashville Flood, in an effort (1) to determine the specific influence that a superposed jet can have on the development of a high-impact weather event and (2) to illuminate the processes that facilitated the production of a superposition in each case. An examination of these cases from a basic-state variable and PV inversion perspective demonstrates that elements of both the remote and local synoptic environment are important to consider while diagnosing the development of a jet superposition. Specifically, the process of jet superposition begins with the remote production of a cyclonic (anticyclonic) tropopause disturbance at high (low) latitudes. The cyclonic circulation typically originates at polar latitudes, while organized tropical convection can encourage the development of an anticyclonic circulation anomaly within the tropical upper-troposphere. The concurrent advection of both anomalies towards middle latitudes subsequently allows their individual circulations to laterally displace the location of the individual tropopause breaks. Once the two circulation anomalies position the polar and subtropical tropopause breaks in close proximity to one another, elements within the local environment, such as

  20. Two-level convolution formula for nuclear structure function

    Science.gov (United States)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.

  1. Two-level convolution formula for nuclear structure function

    International Nuclear Information System (INIS)

    Ma Boqiang

    1990-01-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.
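
    Schematically, a two-level convolution of this kind nests two single-level convolution formulas (a sketch with illustrative notation, not the author's exact expressions):

    ```latex
    % Level 1: constituents c (baryons, mesons) inside the nucleus A;
    % level 2: partons q inside each constituent c.
    \begin{align*}
      F_A(x) &= \sum_{c} \int_x^{A} \mathrm{d}y \; f_{c/A}(y)\,
                F_c\!\left(\frac{x}{y}\right), \\
      F_c(z) &= \sum_{q} \int_z^{1} \mathrm{d}\xi \; f_{q/c}(\xi)\,
                F_q\!\left(\frac{z}{\xi}\right),
    \end{align*}
    % where f_{c/A} and f_{q/c} are the momentum-fraction distributions
    % at each level of compositeness.
    ```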

  2. Correction of the tip convolution effects in the imaging of nanostructures studied through scanning force microscopy

    International Nuclear Information System (INIS)

    Canet-Ferrer, Josep; Coronado, Eugenio; Forment-Aliaga, Alicia; Pinilla-Cienfuegos, Elena

    2014-01-01

    AFM images are always affected by artifacts arising from tip convolution effects, resulting in a decrease in the lateral resolution of this technique. The magnitude of such effects is described by means of geometrical considerations, thereby providing a better understanding of the convolution phenomenon. We demonstrate that, for a constant tip radius, the convolution error increases with the object height, mainly for the narrowest motifs. A certain influence of the object shape is observed between rectangular and elliptical objects of the same height. Such moderate differences are essentially expected among elongated objects; in contrast, they are reduced as the object aspect ratio increases. Finally, we propose an algorithm to study the influence of the size, shape and aspect ratio of different nanometric motifs on a flat substrate. Indeed, with this algorithm, the analysis of convolution artifacts can be extended to any kind of motif, including real surface roughness. From the simulation results we demonstrate that in most cases the real motif’s width can be estimated from AFM images without knowing its shape in detail. (paper)
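
    A minimal sketch of the geometric dilation estimate implied by such considerations, for a spherical tip apex of radius R and a motif of height h (a first-order estimate under these assumptions, not the paper's algorithm):

    ```python
    import numpy as np

    def true_width(w_apparent, h, R):
        """Estimate the real width of a motif from its AFM-apparent width.

        Simple spherical-tip dilation geometry: each side of a feature of
        height h imaged with a tip of apex radius R is broadened by
        sqrt(h*(2R - h)) while h <= R, and by about R once the feature is
        taller than the tip radius. A first-order estimate only.
        """
        broadening = np.sqrt(h * (2 * R - h)) if h <= R else R
        return w_apparent - 2 * broadening

    # A 5 nm tall motif measured 32 nm wide with a 10 nm tip radius:
    print(true_width(32.0, h=5.0, R=10.0))  # ~14.7 nm estimated real width
    ```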

  3. Superposition approach for description of electrical conductivity in sheared MWNT/polycarbonate melts

    Directory of Open Access Journals (Sweden)

    M. Saphiannikova

    2012-06-01

    Full Text Available The theoretical description of the electrical properties of polymer melts filled with attractively interacting conductive particles represents a great challenge. Such filler particles tend to build a network-like structure which is very fragile and can easily be broken in a shear flow with shear rates of about 1 s–1. In this study, measured shear-induced changes in the electrical conductivity of polymer composites are described using a superposition approach, in which the filler particles are separated into a highly conductive percolating phase and a low-conductivity non-percolating phase, the latter represented by separated, well-dispersed filler particles. It is assumed that these phases determine the effective electrical properties of the composite through a type of mixing rule involving the phase volume fractions. The conductivity of the percolating phase is described with the help of classical percolation theory, while the conductivity of the non-percolating phase is given by the matrix conductivity enhanced by the presence of separate filler particles. The percolation theory is coupled with a kinetic equation for a scalar structural parameter which describes the current state of the filler network under particular flow conditions. The superposition approach is applied to transient shear experiments carried out on polycarbonate composites filled with multi-wall carbon nanotubes.

  4. Infimal Convolution Regularisation Functionals of BV and Lp Spaces

    KAUST Repository

    Burger, Martin

    2016-02-03

    We study a general class of infimal convolution type regularisation functionals suitable for applications in image processing. These functionals incorporate a combination of the total variation seminorm and Lp norms. A unified well-posedness analysis is presented and a detailed study of the one-dimensional model is performed by computing exact solutions for the corresponding denoising problem and the case p=2. Furthermore, the dependency of the regularisation properties of this infimal convolution approach on the choice of p is studied. It turns out that in the case p=2 this regulariser is equivalent to the Huber-type variant of total variation regularisation. We provide numerical examples for image decomposition as well as for image denoising. We show that our model is capable of eliminating the staircasing effect, a well-known disadvantage of total variation regularisation. Moreover, as p increases we obtain almost piecewise affine reconstructions, leading also to a better preservation of hat-like structures.
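
    For reference, the general infimal convolution of two convex functionals, together with a schematic TV-Lp instance of the kind studied (weights and notation illustrative; the paper's exact functional may differ):

    ```latex
    % Infimal convolution of two convex functionals f and g:
    (f \,\square\, g)(u) \;=\; \inf_{v}\,\bigl\{\, f(v) + g(u - v) \,\bigr\}.
    % Schematic TV--L^p instance with illustrative weights alpha, beta:
    % the image u is split into a piecewise-constant part u_1 and a
    % smoother part u_2,
    J(u) \;=\; \min_{u = u_1 + u_2}\;
          \alpha\,\mathrm{TV}(u_1) \;+\; \beta\,\lVert \nabla u_2 \rVert_{L^p}.
    ```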

  5. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Mamalet Franck

    2007-01-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  6. Real-Time Video Convolutional Face Finder on Embedded Platforms

    Directory of Open Access Journals (Sweden)

    Franck Mamalet

    2007-03-01

    Full Text Available A high-level optimization methodology is applied for implementing the well-known convolutional face finder (CFF algorithm for real-time applications on mobile phones, such as teleconferencing, advanced user interfaces, image indexing, and security access control. CFF is based on a feature extraction and classification technique which consists of a pipeline of convolutions and subsampling operations. The design of embedded systems requires a good trade-off between performance and code size due to the limited amount of available resources. The followed methodology copes with the main drawbacks of the original implementation of CFF such as floating-point computation and memory allocation, in order to allow parallelism exploitation and perform algorithm optimizations. Experimental results show that our embedded face detection system can accurately locate faces with less computational load and memory cost. It runs on a 275 MHz Starcore DSP at 35 QCIF images/s with state-of-the-art detection rates and very low false alarm rates.

  7. sEMG-Based Gesture Recognition with Convolution Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhen Ding

    2018-06-01

    Full Text Available The traditional classification methods for limb motion recognition based on sEMG have been deeply researched and shown promising results. However, information loss during feature extraction reduces the recognition accuracy. To obtain higher accuracy, the deep learning method was introduced. In this paper, we propose a parallel multiple-scale convolution architecture. Compared with the state-of-art methods, the proposed architecture fully considers the characteristics of the sEMG signal. Larger sizes of kernel filter than commonly used in other CNN-based hand recognition methods are adopted. Meanwhile, the characteristics of the sEMG signal, that is, muscle independence, is considered when designing the architecture. All the classification methods were evaluated on the NinaPro database. The results show that the proposed architecture has the highest recognition accuracy. Furthermore, the results indicate that parallel multiple-scale convolution architecture with larger size of kernel filter and considering muscle independence can significantly increase the classification accuracy.

  8. Evaluation of heterogeneity dose distributions for Stereotactic Radiotherapy (SRT: comparison of commercially available Monte Carlo dose calculation with other algorithms

    Directory of Open Access Journals (Sweden)

    Takahashi Wataru

    2012-02-01

    Full Text Available Background The purpose of this study was to compare dose distributions from three different algorithms with x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans, for use in stereotactic radiotherapy (SRT) of small lung cancers. Methods A slow CT scan of 20 patients was performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other treatment planning systems (TPSs). Collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculation. The dose distributions and dose volume histograms (DVHs) were compared with each other. Results The phantom test revealed that all algorithms could reproduce the measured data within 1%, except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) doses and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and between SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Conclusions Our work clearly confirms that the actual practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning. Meanwhile, CCC and SP were close to XVMC simulations and the actual dose distributions obtained in lung SRT.

  9. Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Qingshan Liu

    2017-12-01

    Full Text Available This paper proposes a novel deep learning framework named the bidirectional-convolutional long short term memory (Bi-CLSTM) network to automatically learn spectral-spatial features from hyperspectral images (HSIs). In the network, the issue of spectral feature extraction is considered a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial feature. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully-connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with six state-of-the-art methods, including the popular 3D-CNN model, on three widely used HSIs (Indian Pines, Pavia University, and Kennedy Space Center). The obtained results show that Bi-CLSTM can improve the classification performance by almost 1.5% compared to 3D-CNN.

  10. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks, pre-trained with the ImageNet database, are separately trained. Secondly, we fuse the feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experimental results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to a significant performance improvement, with an accuracy of 83.02% ± 0.72%. This demonstrates the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Three dimensional implementation of anisotropy corrected fast fourier transform dose calculation around brachytherapy seeds

    International Nuclear Information System (INIS)

    Kyeremeh, P.O.

    2011-01-01

    Currently available brachytherapy dose computation algorithms ignore heterogeneities such as tissue-air interfaces, shielded gynaecological colpostats, and tissue-composition variations in source implants, despite dose computation errors as large as 40%. A convolution kernel, which takes into consideration the anisotropy of the dose distribution around a brachytherapy source and computes dose in the presence of tissue and applicator heterogeneities, has been established. The convolution kernel yields functions with polynomial and exponential terms, and the solution to the convolution integral is represented by the fast Fourier transform (FFT). The FFT has shown enough potency in accounting for errors due to these heterogeneities, and its versatility is evident from its capability of switching between fields, so that successful procedures in external beam therapy could be adopted in brachytherapy to similar effect. A dose deposition kernel was developed for a 64x64x64 matrix size with wrap-around ordering and convoluted with the distribution of the sources in 3D. With MatLab's inverse fast Fourier transform, the dose rate distribution for a given array of interstitial sources, typical of brachytherapy, was calculated; a schematic sketch of this procedure follows. The shapes of the dose rate distribution peaks appeared comparable with the output expected from computerized treatment planning systems for brachytherapy. Subsequently, the study confirmed the speed and accuracy of dose computation using the FFT convolution. Although dose rate peaks from the FFT convolution and the TPS (TG43) did not compare quantitatively, mainly because the TPS (TG43) initiates computations from the origin (0,0,0) unlike the FFT convolution, which uses sampling points N = 1, 2, 3, ..., there is a strong basis for establishing parity since the dose rate peaks compared qualitatively. With both modes compared, the discrepancies in the dose rates ranged between 3.6% to
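
    A minimal sketch of the wrap-around FFT convolution procedure described here (numpy in place of MatLab; the 1/r² kernel is a purely geometric placeholder, not a real dose deposition kernel, which would also include attenuation, scatter and anisotropy):

    ```python
    import numpy as np
    from numpy.fft import fftn, ifftn

    N = 64
    ax = np.arange(N) - N // 2
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = X**2 + Y**2 + Z**2

    # Illustrative point-source kernel ~ 1/r^2 (geometric fall-off only),
    # shifted into the wrap-around ordering the record mentions.
    kernel = 1.0 / np.maximum(r2, 1.0)
    kernel = np.roll(kernel, (N // 2,) * 3, axis=(0, 1, 2))

    # Three interstitial seeds of unit strength on the voxel grid.
    seeds = np.zeros((N, N, N))
    seeds[32, 32, 28] = seeds[32, 32, 32] = seeds[32, 32, 36] = 1.0

    # Dose = seeds convolved with kernel, as a product in Fourier space.
    dose = np.real(ifftn(fftn(seeds) * fftn(kernel)))
    print(dose[32, 32, 32])  # superposed contribution from all three seeds
    ```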

  12. Automated detection of lung nodules with three-dimensional convolutional neural networks

    Science.gov (United States)

    Pérez, Gustavo; Arbeláez, Pablo

    2017-11-01

    Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method consists of pre-processing of a patient's CT with filtering and lung extraction from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of the extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach is effective at producing precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage manages to successfully classify candidates and increases precision by a factor of 7.

  13. An Interactive Graphics Program for Assistance in Learning Convolution.

    Science.gov (United States)

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…
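
    The four graphical steps map directly onto the discrete convolution sum. A minimal sketch (illustrative signals) comparing the explicit fold-shift-multiply-sum loop with a library call:

    ```python
    import numpy as np

    x = np.array([1.0, 2.0, 3.0])        # first signal
    h = np.array([0.5, 1.0])             # second signal

    # y[n] = sum_k x[k] * h[n-k]: fold h, shift it by n, multiply by x,
    # and sum -- the same steps the graphics program animates.
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]

    print(y)                  # [0.5 2.  3.5 3. ]
    print(np.convolve(x, h))  # identical result
    ```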

  14. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    Science.gov (United States)

    Zaima, Kazunori; Sasaki, Koichi

    2016-08-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of the DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experimental results reveal an oscillation of the rates of combustion chemical reactions in response to the activation by the pulsed DBD. The period of the oscillation was 0.18-0.2 ms, which corresponds to the eigenfrequency of the plasma-assisted combustion reaction system.

  15. Quantifying Translation-Invariance in Convolutional Neural Networks

    OpenAIRE

    Kauderer-Abrams, Eric

    2017-01-01

    A fundamental problem in object recognition is the development of image representations that are invariant to common transformations such as translation, rotation, and small deformations. There are multiple hypotheses regarding the source of translation invariance in CNNs. One idea is that translation invariance is due to the increasing receptive field size of neurons in successive convolution layers. Another possibility is that invariance is due to the pooling operation. We develop a simple ...

  16. Applications of deep convolutional neural networks to digitized natural history collections

    Directory of Open Access Journals (Sweden)

    Eric Schuettpelz

    2017-11-01

    Full Text Available Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  17. Applications of deep convolutional neural networks to digitized natural history collections.

    Science.gov (United States)

    Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J

    2017-01-01

    Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  18. A mixed-scale dense convolutional neural network for image analysis

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); J.A. Sethian (James)

    2016-01-01

    textabstractDeep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results

  19. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  20. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for the basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave-function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method targets small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model

  2. Superposition of Stress Fields in Diametrically Compressed Cylinders

    Directory of Open Access Journals (Sweden)

    João Augusto de Lima Rocha

    Full Text Available Abstract The theoretical analysis of the Brazilian test is a classical plane-stress problem of elasticity theory, in which a vertical force is applied to a horizontal plane, the boundary of a semi-infinite medium. Under the hypothesis of a purely radial normal stress field, the results of that model are correct. Nevertheless, the superposition of three stress fields, two based on those prior results and the third on a hydrostatic stress field, is incorrect. Indeed, this work shows that the Cauchy vectors (tractions) are non-vanishing on the parallel planes on which the two opposing vertical forces are applied. The aim of this work is to detail the construction of the theoretical model from the three stress fields used, with the objective of demonstrating the inconsistency often found in the literature.
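
    For reference, the radial stress field hypothesized for a point load P (per unit thickness) on the half-plane boundary is the standard Flamant solution, and the superposed fields yield the usual Brazilian-test tensile stress at the centre of a disc of diameter D and thickness t; both are quoted here as standard results, not the paper's derivation.

    ```latex
    \sigma_{rr} = -\frac{2P}{\pi}\,\frac{\cos\theta}{r}, \qquad
    \sigma_{\theta\theta} = \sigma_{r\theta} = 0,
    \qquad\qquad
    \sigma_t = \frac{2P}{\pi D t}.
    ```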

  3. Joint formation of dissimilar steels in pressure welding with superposition of ultrasonic oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Surovtsev, A P; Golovanenko, S A; Sukhanov, V E; Kazantsev, V F

    1983-12-01

    Results are given from an investigation of the kinetics and quality of joints between carbon steel and steel 12Kh18N10T obtained by pressure welding with superposed ultrasonic oscillations at a frequency of 16.5-18.0 kHz. The effect of the ultrasonic oscillations on the development of physical contact between the welded surfaces, on the formation of the microstructure, and on the impact toughness of the joint is shown.

  4. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    Science.gov (United States)

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    The features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To this end, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with two convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital lobe and the parietal lobe, whereas illiterate subjects only show correlation between neural activities from the frontal lobe and the central lobe. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.
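
    A minimal sketch of the two-layer design described above: the first convolution spans the electrode (spatial) dimension, the second spans time. The electrode count, epoch length, and filter numbers are illustrative assumptions, not the study's reported configuration.

    ```python
    # Spatial-then-temporal two-layer CNN for ERP epochs (torch assumed).
    import torch
    import torch.nn as nn

    n_electrodes, n_samples = 32, 128          # assumed montage and epoch length

    model = nn.Sequential(
        # spatial filter: mixes all electrodes at each time point
        nn.Conv2d(1, 8, kernel_size=(n_electrodes, 1)),
        nn.ReLU(),
        # temporal filter: learns ERP waveshapes such as P300/P700 peaks
        nn.Conv2d(8, 16, kernel_size=(1, 25)),
        nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(2),                      # target vs. non-target flash
    )

    x = torch.randn(4, 1, n_electrodes, n_samples)  # (batch, 1, electrodes, time)
    print(model(x).shape)                           # torch.Size([4, 2])
    ```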

  5. SU-E-T-220: Computational Accuracy of Adaptive Convolution (AC) and Collapsed Cone Convolution (CCC) Algorithms in the Presence of Air Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Oyewale, S [Cancer Centers of Southwest Oklahoma, Lawton, OK (United States); Pokharel, S [21st Century Oncology, Naples, FL (United States); Rana, S [ProCure Proton Therapy Center, Oklahoma City, OK (United States)

    2015-06-15

    Purpose: To compare the percentage depth dose (PDD) computational accuracy of the Adaptive Convolution (AC) and Collapsed Cone Convolution (CCC) algorithms in the presence of air gaps. Methods: A 30×30×30 cm{sup 3} solid water phantom with two 5 cm air gaps was scanned with a CT simulator unit and exported into the Philips Pinnacle™ treatment planning system. PDDs were computed using the AC and CCC algorithms. A photon energy of 6 MV was used with field sizes of 3×3 cm{sup 2}, 5×5 cm{sup 2}, 10×10 cm{sup 2}, 15×15 cm{sup 2}, and 20×20 cm{sup 2}. Ionization chamber readings were taken at different depths in water for all the field sizes. The percentage differences in the PDDs were computed with normalization to the depth of maximum dose (dmax). The calculated PDDs were then compared with measured PDDs. Results: In the first buildup region, both algorithms overpredicted the dose for all field sizes and underpredicted it in all subsequent buildup regions. After dmax in the three solid water segments, AC underpredicted the dose for field sizes 3×3 and 5×5 cm{sup 2} and overpredicted it for larger field sizes, whereas CCC underpredicted it for all field sizes. Upon traversing the first air gap, AC showed maximum differences of −3.9%, −1.4%, 2.0%, 2.5%, 2.9% and CCC had maximum differences of −3.9%, −3.0%, −3.1%, −2.7%, −1.8% for field sizes 3×3, 5×5, 10×10, 15×15, and 20×20 cm{sup 2}, respectively. Conclusion: Air gaps cause a significant difference in the PDDs computed by both the AC and CCC algorithms in secondary buildup regions. AC computed larger values for the PDDs except at smaller field sizes. For CCC, the size of the errors in predicting the PDDs has an inverse relationship with field size. These effects should be considered in treatment planning where significant air gaps are encountered.
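
    A minimal sketch of how the reported comparison is formed: depth-dose curves are normalized to the maximum dose (dmax) and differenced point by point. The arrays below are illustrative, not the measured data.

    ```python
    # PDD normalization and percentage-difference comparison (illustrative data).
    import numpy as np

    def pdd(dose):
        return 100.0 * dose / dose.max()       # percentage depth dose, 100% at dmax

    calculated = np.array([55.0, 98.0, 100.0, 91.0, 76.0])
    measured   = np.array([57.0, 99.0, 100.0, 93.0, 78.0])
    diff = pdd(calculated) - pdd(measured)     # % difference per depth
    print(diff.round(1))
    ```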

  6. Renormalized G-convolution of n-point functions in quantum field theory. I. The Euclidean case

    International Nuclear Information System (INIS)

    Bros, Jacques; Manolessou-Grammaticou, Marietta.

    1977-01-01

    The notion of a Feynman amplitude associated with a graph G in perturbative quantum field theory admits a generalized version in which each vertex v of G is associated with a general (non-perturbative) n_v-point function H^(n_v), n_v denoting the number of lines incident to v in G. In the case where no ultraviolet divergence occurs, this has been carried out directly in complex momentum space through Bros-Lassalle's G-convolution procedure. The authors propose a generalization of G-convolution which includes the case when the functions H^(n_v) are not integrable at infinity but belong to a suitable class of slowly increasing functions. A finite part of the G-convolution integral is then defined through an algorithm which closely follows Zimmermann's renormalization scheme. Only the case of Euclidean four-momentum configurations is treated

  7. Simulation Analysis of DC and Switching Impulse Superposition Circuit

    Science.gov (United States)

    Zhang, Chenmeng; Xie, Shijun; Zhang, Yu; Mao, Yuxiang

    2018-03-01

    Surge capacitors connected between the neutral bus and ground in a converter station are subjected to a superposition of DC and impulse voltages during operation. This paper analyses a simulated aging circuit for surge capacitors using the PSCAD electromagnetic transient simulation software. It also analyses the effect of the DC voltage on the waveform produced by the impulse voltage generator, and the effect of the coupling capacitor on the test voltage waveform. The results show that the DC voltage has little effect on the output waveform of the surge voltage generator, and that the value of the coupling capacitor has little effect on the voltage waveform across the sample. The simulation results show that a combined DC-and-impulse superposition aging test for surge capacitors is feasible.

  8. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection.
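
    A sketch of the handcrafted baseline the study compares against: gray-level co-occurrence (GLCM) statistics fed to a random forest. The feature choices and parameters are illustrative (the study also used Gabor, LBP, and colour-histogram features); the function names assume scikit-image >= 0.19.

    ```python
    # GLCM features + random forest baseline (illustrative parameters).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import RandomForestClassifier

    def glcm_features(patch_u8):
        """patch_u8: 2D uint8 grayscale patch -> small feature vector."""
        glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, p).ravel()
                          for p in ("contrast", "homogeneity", "energy", "correlation")])

    # Toy stand-ins for labelled tissue patches (cancerous vs. benign).
    patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
    y = np.random.randint(0, 2, 20)
    X = np.array([glcm_features(p) for p in patches])
    RandomForestClassifier(n_estimators=100).fit(X, y)
    ```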

  9. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified treatment of 3D medical image interpolation based on cubic convolution and formulate in detail six methods with different sharpness-control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
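
    For reference, the cubic convolution kernel family referred to above can be written with a single sharpness-control parameter a (Keys' form; a = -0.5 is the common default). A minimal sketch, with the mapping to the paper's six specific methods left open:

    ```python
    # Keys cubic convolution kernel and 1D interpolation at a fractional position.
    import numpy as np

    def cubic_kernel(x, a=-0.5):
        x = np.abs(x)
        out = np.zeros_like(x, dtype=float)
        m1 = x < 1
        m2 = (x >= 1) & (x < 2)
        out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
        out[m2] = a * (x[m2]**3 - 5 * x[m2]**2 + 8 * x[m2] - 4)
        return out

    def interp1d_cubic(samples, t, a=-0.5):
        """Interpolate uniformly spaced `samples` (np.array) at position t."""
        i = int(np.floor(t))
        pos = np.arange(i - 1, i + 3)
        idx = np.clip(pos, 0, len(samples) - 1)          # clamp at the borders
        return float(np.dot(samples[idx], cubic_kernel(t - pos, a)))
    ```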

  10. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a convolutional neural network is presented to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number. The characteristic color distribution of the ID card is used to select an appropriate channel component; image thresholding, noise processing, and morphological processing binarize the image; image rotation and projection are used for horizontal correction when the image is tilted; finally, single characters are extracted by the projection method and recognized by a convolutional neural network. Tests show that processing a single ID-number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the method can be applied in real production and living environments.
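
    A minimal sketch of the projection-based segmentation step described above: sum the binarized number strip column-wise and cut at empty columns. This is the generic technique, not the paper's exact implementation.

    ```python
    # Segment characters by vertical projection of a binarized strip.
    import numpy as np

    def segment_by_projection(binary_strip):
        """binary_strip: 2D array, 1 = ink. Returns (start, end) column spans."""
        profile = binary_strip.sum(axis=0)       # vertical projection profile
        ink = profile > 0
        spans, start = [], None
        for col, has_ink in enumerate(ink):
            if has_ink and start is None:
                start = col                      # character begins
            elif not has_ink and start is not None:
                spans.append((start, col))       # character ends at a gap
                start = None
        if start is not None:
            spans.append((start, len(ink)))
        return spans
    ```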

  11. Evaluation of the accuracy of the calculation of irregular photon fields in a radiotherapy planning system

    International Nuclear Information System (INIS)

    Bax, D.P.; Verlinde, P.H.; Storchi, P.; Woudstra, E.; Puurunen, H.

    1995-01-01

    In the Cadplan external beam planning system, irregular fields are calculated using the pencil beam convolution model. This model uses measured PDDs, profiles, and peak scatter factor data to extract pencil beam kernels. The scatter kernel is convolved with a matrix describing the field shape to calculate the PDD of the irregular field. The boundary kernel is used in the calculation of the off-axis ratios. The field matrix used in the convolution has a value of 1 in the open parts of the field, a value of zero outside the field, and equals the transmission in the blocked part. The evaluation of the model was done using two test cases, both based on a 20×20 cm² field. In the first case, the central part of the field is blocked by an 8×8 cm² block; on one side this block is connected to the field edge by another block with a width of 3 cm. In the second case, two corners of the field are blocked by 8×8.5 cm² blocks, with a gap of 3 cm between the two blocks. The two cases were measured in a water phantom for open and 45° wedged beams of 4 different energies: 4, 6 and 23 MV were measured on conventional accelerators using customized blocks, and 25 MV was measured on a scanned-beam MM50 using a multileaf collimator. For all fields, PDDs and profiles were measured centrally and at 2 or more off-axis distances. All measured doses are relative to the normalisation dose of a 20×20 cm² open field at dmax. The PDDs and profiles measured for the two test cases were compared to the results of the pencil beam convolution model. The profiles were compared at four depths: dmax, 5, 10, and 20 cm. The differences found between measurement and calculation were within acceptable limits, making it possible to use the model in routine clinical planning.
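
    A minimal sketch of the field-matrix convolution described above. The kernel here is a stand-in Gaussian, not Cadplan's extracted pencil-beam kernel, and the grid resolution and block transmission are assumed values.

    ```python
    # Convolve a scatter kernel with a field matrix: 1 in the open field,
    # 0 outside, block transmission under the block.
    import numpy as np
    from scipy.signal import fftconvolve

    res = 0.25                                   # cm per pixel (assumed grid)
    x = np.arange(-5, 5, res)
    xx, yy = np.meshgrid(x, x)
    kernel = np.exp(-(xx**2 + yy**2) / 2.0)      # placeholder scatter kernel
    kernel /= kernel.sum()

    n = int(20 / res)                            # 20x20 cm field
    field = np.ones((n, n))
    block = slice(int(6 / res), int(14 / res))   # central 8x8 cm block
    field[block, block] = 0.03                   # assumed block transmission

    scatter = fftconvolve(field, kernel, mode="same")   # scatter contribution map
    ```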

  12. Convolution-based estimation of organ dose in tube current modulated CT

    Science.gov (United States)

    Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan

    2016-05-01

    Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients (h_Organ) under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDI_vol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDI_vol)_organ,convolution by the organ dose coefficients (h_Organ). To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The
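
    A minimal sketch of the estimate described above: the organ's spatial distribution is weighted against the longitudinal dose profile of the TCM scan and scaled by the size-specific coefficient h_Organ. All numbers below are illustrative stand-ins.

    ```python
    # Convolution-based organ dose estimate (illustrative values throughout).
    import numpy as np

    z = np.linspace(0, 40, 200)                        # table position, cm
    dose_profile = 10 + 3 * np.sin(z / 5.0)            # stand-in TCM CTDIvol profile
    organ_dist = np.exp(-((z - 22) ** 2) / 8.0)        # stand-in organ distribution
    organ_dist /= organ_dist.sum()

    ctdi_conv = float(np.dot(organ_dist, dose_profile))  # (CTDI_vol)_organ,convolution
    h_organ = 1.4                                        # assumed size-matched coefficient
    organ_dose = h_organ * ctdi_conv                     # estimated organ dose, mGy
    ```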

  13. Electroencephalography-Based Fusion Two-Dimensional (2D) Convolutional Neural Network (CNN) Model for Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Yea-Hoon Kwon

    2018-04-01

    Full Text Available The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through CNN. Therefore, we propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.

  14. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end-to-end with, e.g., BPP multi-rate traffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...
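
    The core step of the convolution algorithm is convolving the occupancy distributions of independent traffic streams and truncating at the link capacity. A minimal sketch with toy distributions (not the BPP models of the paper):

    ```python
    # Aggregate two independent streams' occupancy distributions on a link.
    import numpy as np

    def convolve_streams(p1, p2, capacity):
        p = np.convolve(p1, p2)[: capacity + 1]   # aggregate occupancy, truncated
        return p / p.sum()                        # renormalize after truncation

    capacity = 10
    p_voice = np.array([0.4, 0.35, 0.15, 0.07, 0.03])   # P(k channels busy)
    p_data  = np.array([0.5, 0.3, 0.2])
    p_total = convolve_streams(p_voice, p_data, capacity)
    ```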

  15. Training Convolutional Neural Networks for Translational Invariance on SAR ATR

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Engholm, Rasmus; Østergaard Pedersen, Morten

    2016-01-01

    In this paper we present a comparison of the robustness of Convolutional Neural Networks (CNN) to other classifiers in the presence of uncertainty in the object's localization in SAR images. We present a framework for simulating simple SAR images, translating the object of interest systematically...

  16. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    Science.gov (United States)

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung

  18. Calculation of the Doppler broadening function using Fourier analysis

    International Nuclear Information System (INIS)

    Goncalves, Alessandro da Cruz

    2010-01-01

    An efficient and precise method for calculating the Doppler broadening function is very important for obtaining group-averaged microscopic cross sections, self-shielding factors, resonance integrals, and other reactor physics parameters. In this thesis, two different methods for calculating the Doppler broadening function and the interference term are presented. The main method is based on a new integral form for the Doppler broadening function ψ(x,ζ), which gives a mathematical interpretation of the approximation proposed by Bethe and Placzek as the convolution of a Lorentzian function with a Gaussian function. Besides leading to a new integral form for ψ(x,ζ), this interpretation makes it possible to obtain a simple analytic solution for the Doppler broadening function. (author)
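
    For reference, the Bethe-Placzek Doppler broadening function that the thesis reinterprets is the convolution of the natural Lorentzian line shape with a Gaussian, conventionally written (up to notation) as

    ```latex
    \psi(x,\zeta) \;=\; \frac{\zeta}{2\sqrt{\pi}}
      \int_{-\infty}^{\infty}
      \frac{\exp\!\big(-\tfrac{\zeta^{2}}{4}\,(x-y)^{2}\big)}{1+y^{2}}\; dy .
    ```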

  19. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  20. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2017-06-01

    Full Text Available Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  2. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied to both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision-making or abstraction process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on a musical genre classification task. The network was composed of three convolutional layers and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of the outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformations, more information about the network and the original music can be obtained from analysis of the generated features, since these features indicate how the network 'understands' the music.
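
    A minimal sketch of the modification procedure described above: gradient ascent on the input representation to maximize the summed layer activations of the trained genre network (a DeepDream-style objective). `model` is assumed to be an nn.Sequential-style trained network; the step count and learning rate are illustrative.

    ```python
    # Modify an input by maximizing the sum of all layer activations.
    import torch

    def amplify(model, clip, steps=50, lr=0.05):
        """clip: input tensor (e.g. a spectrogram batch); returns modified copy."""
        x = clip.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            acts, h = [], x
            for layer in model:                 # assumes iterable Sequential model
                h = layer(h)
                acts.append(h.sum())
            loss = -torch.stack(acts).sum()     # negative: ascend the activations
            loss.backward()
            opt.step()
        return x.detach()
    ```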

  3. Convolution Model of a Queueing System with the cFIFO Service Discipline

    Directory of Open Access Journals (Sweden)

    Sławomir Hanczewski

    2016-01-01

    Full Text Available This article presents an approximate convolution model of a multiservice queueing system with the continuous FIFO (cFIFO) service discipline. The model makes it possible to service calls sequentially with a variable bit rate, determined by the unoccupied (free) resources of the multiservice server. As compared to the FIFO discipline, the cFIFO queue utilizes the resources of a multiservice server more effectively. The assumption in the model is that the queueing system is offered a mixture of independent multiservice Bernoulli-Poisson-Pascal (BPP) call streams. The article also discusses the results of modelling a number of queueing systems to which different, non-Poissonian, call streams are offered. To verify the accuracy of the model, the results of the analytical calculations are compared with the results of simulation experiments for a number of selected queueing systems. The study has confirmed the accuracy of all adopted theoretical assumptions for the proposed analytical model.

  5. Constructing petal modes from the coherent superposition of Laguerre-Gaussian modes

    Science.gov (United States)

    Naidoo, Darryl; Forbes, Andrew; Ait-Ameur, Kamel; Brunel, Marc

    2011-03-01

    An experimental approach to generating petal-like transverse modes, similar to those seen in Porro-prism resonators, has been successfully demonstrated. We hypothesize that the petal-like structures are generated from a coherent superposition of Laguerre-Gaussian modes of zero radial order and opposite azimuthal order. To verify this hypothesis, visually based comparisons such as the petal peak-to-peak diameter and the angle between adjacent petals are drawn between experimental data and simulated data. The beam quality factor of the petal-like transverse modes and an inner-product interaction are also experimentally compared to numerical results.
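
    A numerical check of the hypothesis above: an equal-weight coherent sum of LG(p=0, l) and LG(p=0, -l) modes has intensity proportional to cos²(lφ), i.e. a ring of 2l petals. Normalization constants are omitted; l and the waist are illustrative.

    ```python
    # Petal pattern from superposing opposite-azimuthal-order LG modes.
    import numpy as np

    l, w = 3, 1.0                                    # azimuthal order, beam waist
    y, x = np.mgrid[-3:3:512j, -3:3:512j]
    r, phi = np.hypot(x, y), np.arctan2(y, x)

    lg = (np.sqrt(2) * r / w) ** abs(l) * np.exp(-(r / w) ** 2)  # radial envelope
    field = lg * np.exp(1j * l * phi) + lg * np.exp(-1j * l * phi)
    intensity = np.abs(field) ** 2       # = 4*lg^2*cos^2(l*phi): 2l = 6 petals
    ```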

  6. Adiabatic rotation, quantum search, and preparation of superposition states

    International Nuclear Information System (INIS)

    Siu, M. Stewart

    2007-01-01

    We introduce the idea of using adiabatic rotation to generate superpositions of a large class of quantum states. For quantum computing this is an interesting alternative to the well-studied 'straight line' adiabatic evolution. In ways that complement recent results, we show how to efficiently prepare three types of states: Kitaev's toric code state, the cluster state of the measurement-based computation model, and the history state used in the adiabatic simulation of a quantum circuit. We also show that the method, when adapted for quantum search, provides quadratic speedup as other optimal methods do with the advantages that the problem Hamiltonian is time independent and that the energy gap above the ground state is strictly nondecreasing with time. Likewise the method can be used for optimization as an alternative to the standard adiabatic algorithm

  7. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu

    2009-03-02

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, that has an extra 1/N-type slowdown compared to the noise-based logic system with increasing N, and furthermore it is less robust against time-delay effects than the noise-based counterpart.

  10. The Convolutional Visual Network for Identification and Reconstruction of NOvA Events

    Energy Technology Data Exchange (ETDEWEB)

    Psihas, Fernanda [Indiana U.

    2017-11-22

    In 2016 the NOvA experiment released results for the observation of oscillations in the νμ and νe channels as well as νe cross section measurements using neutrinos from Fermilab's NuMI beam. These and other measurements in progress rely on the accurate identification and reconstruction of the neutrino flavor and energy recorded by our detectors. This presentation describes the first application of convolutional neural network technology for event identification and reconstruction in particle detectors like NOvA. The Convolutional Visual Network (CVN) algorithm was developed for the identification, categorization, and reconstruction of NOvA events. It increased the selection efficiency of the νe appearance signal by 40%, and studies show a potential impact on the νμ disappearance analysis.

  11. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    Science.gov (United States)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require the local information for corrupted pixels to be known beforehand, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with large corruptions, or when inpainting low-resolution images, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and that it works excellently when realizing super-resolution and image inpainting simultaneously.

  12. Detection and recognition of bridge crack based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Honggong LIU

    2016-10-01

    Full Text Available Bridge cracks in China are still largely detected by manual visual inspection, which is inefficient and carries a high risk; a digital, intelligent detection method that improves diagnostic efficiency and reduces the risk is therefore studied. Combining machine vision with convolutional neural network technology, a Raspberry Pi is used to acquire and pre-process images, and the crack images are analyzed; the processing algorithm with the best detection and recognition performance is selected; the convolutional neural network (CNN) for crack classification is optimized; finally, a new intelligent crack detection method is put forward. The experimental results show that the system can find all cracks beyond the maximum allowable limit and effectively identify the type of fracture, with a recognition rate above 90%. The study provides reference data for engineering inspection.

  13. Experimental generation and application of the superposition of higher-order Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Full Text Available. Presented at the 2009 South African Institute of Physics Annual Conference, University of KwaZulu-Natal, Durban, South Africa, 6-10 July 2009. [Presentation slides; recoverable content: Bessel fields are generated by (1) a ring-slit aperture and (2) an axicon, and method 1 is adapted to produce superpositions of higher-order Bessel beams. Reference: J. Durnin, J.J. Miceli and J.H. Eberly, Phys. Rev. Lett. 58, 1499.]

  14. Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network.

    Science.gov (United States)

    Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2018-04-23

    In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.

  15. A MacWilliams Identity for Convolutional Codes : The General Case

    NARCIS (Netherlands)

    Gluesing-Luerssen, Heide; Schneider, Gert

    A MacWilliams Identity for convolutional codes will be established. It makes use of the weight adjacency matrices of the code and its dual, based on state space realizations (the controller canonical form) of the codes in question. The MacWilliams Identity applies to various notions of duality

  16. Deep convolutional neural networks for detection of rail surface defects

    NARCIS (Netherlands)

    Faghih Roohi, S.; Hajizadeh, S.; Nunez Vicencio, Alfredo; Babuska, R.; De Schutter, B.H.K.; Estevez, Pablo A.; Angelov, Plamen P.; Del Moral Hernandez, Emilio

    2016-01-01

    In this paper, we propose a deep convolutional neural network solution to the analysis of image data for the detection of rail surface defects. The images are obtained from many hours of automated video recordings. This huge amount of data makes it impossible to manually inspect the images and

  17. HETERO code, heterogeneous procedure for reactor calculation; Program Hetero, heterogeni postupak proracuna reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, S M; Raisic, N M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1966-11-15

    This report describes the procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements in a given geometry. The first part contains the analysis of a single fuel element in a diffusive medium and the criticality condition of the reactor system, described by the superposition of element interactions. The possibility of performing such an analysis by determining the heterogeneous system lattice is described in the second part. The computer code HETERO, with the code KETAP (calculation of the criticality factor η_n and the flux distribution), is part of this report, together with an example of the RB reactor square lattice.

  18. Forecasting short-term data center network traffic load with convolutional neural networks

    Science.gov (United States)

    Mozo, Alberto; Ordozgoiti, Bruno; Gómez-Canaval, Sandra

    2018-01-01

    Efficient resource management in data centers is of central importance to content service providers as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity is above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution. PMID:29408936

  20. Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks.

    Science.gov (United States)

    Zhong, Jiandan; Lei, Tao; Yao, Guangle

    2017-11-24

    Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on sliding-window fashion were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied in aerial image data due to the fact that the existing CNN-based models struggle with small-size object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network is applied to generate a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantage of the deep and shallow convolutional layer, the first network performs well on locating the small targets in aerial image data. Then, the generated candidate regions are fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed.

  1. Shallow and deep convolutional networks for saliency prediction

    OpenAIRE

    Pan, Junting; Sayrol Clols, Elisa; Giró Nieto, Xavier; McGuinness, Kevin; O'Connor, Noel

    2016-01-01

    The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency p...

  2. Optimized parallel convolutions for non-linear fluid models of tokamak η_i turbulence

    International Nuclear Information System (INIS)

    Milovich, J.L.; Tomaschke, G.; Kerbel, G.D.

    1993-01-01

    Non-linear computational fluid models of plasma turbulence based on spectral methods typically spend a large fraction of the total computing time evaluating convolutions. Usually these convolutions arise from an explicit or semi-implicit treatment of the convective non-linearities in the problem. Often the principal convective velocity is perpendicular to the magnetic field lines, allowing a reduction of the convolution to two dimensions in an appropriate geometry, but beyond this, different models vary widely in the particulars of which mode amplitudes are selectively evolved to get the most efficient representation of the turbulence. As the number of modes in the problem, N, increases, the amount of computation required for this part of the evolution algorithm scales as N²/timestep for a direct or analytic method and N ln N/timestep for a pseudospectral method. The constants of proportionality depend on the particulars of mode selection and determine the problem size at which the two methods perform equally. For large enough N, the pseudospectral method's performance is always superior, though some problems do not require correspondingly high resolution. Further, the Courant condition for numerical stability requires that the timestep size decrease proportionately as N increases, accentuating the need for fast methods in larger-N problems. The authors have developed a package for the Cray system which performs these convolutions for a rather arbitrary mode selection scheme using either method. The package is highly optimized using a combination of macro- and microtasking techniques, as well as vectorization and, in some cases, assembly-coded routines. Parts of the package have also been developed and optimized for the CM200 and CM5 systems. Performance comparisons with respect to problem size, parallelization, selection schemes and architecture are presented.
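
    A minimal illustration of the scaling argument above: direct convolution of two length-N sequences costs O(N²), while the FFT-based (pseudospectral) route costs O(N log N), so it wins for large N. The two give identical results up to rounding.

    ```python
    # Direct vs. FFT-based linear convolution of equal-length sequences.
    import numpy as np

    def conv_direct(a, b):
        return np.convolve(a, b)                  # O(N^2)

    def conv_fft(a, b):
        n = len(a) + len(b) - 1                   # zero-pad to the full length
        return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)  # O(N log N)

    a, b = np.random.rand(4096), np.random.rand(4096)
    assert np.allclose(conv_direct(a, b), conv_fft(a, b))
    ```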

  3. Study of the accuracy of radiation field calculations in media

    International Nuclear Information System (INIS)

    Bolyatko, V.V.; Vyrskij, M.Yu.; Ilyushkin, A.I.; Mashkovich, V.P.; Sakharov, V.K.; Stroganov, A.A.

    1981-01-01

    The sensitivity p of radiation transport calculations to variations of input parameters X_i is theoretically analyzed, and the calculational errors induced by uncertainties in the initial data are evaluated. Two calculational methods are considered: the direct substitution method using the ROZ-5 code, and a method using linear perturbation theory. In order to calculate p(X_i) and bilinear convolutions of the conjugated transport equations, the ZAKAT code has been developed. The calculations use the ZAKAT, ROZ-11 and APAMAKO-2F codes. As an example of practical use of the proposed method, a shielding composition characteristic of fast reactors was analyzed. A plane monodirectional neutron beam of the BR-10 reactor falls onto a 5-layer stainless steel (1Kh18N10T)-carbon barrier. The sensitivity of the neutron dose absorbed in tissue to the cross sections of all the shielding constituents and to the source and detector representation functions has been calculated. A comparison of the calculations with experimental data proves the validity of the calculational method.

  4. Teleportation of a Coherent Superposition State Via a Nonmaximally Entangled Coherent Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We investigate the problem of teleportation of a superposition coherent state via a nonmaximally entangled coherent channel. Two strategies are considered to complete the task. The first uses entanglement concentration to purify the channel to a maximally entangled one. The second teleports the state through the nonmaximally entangled coherent channel directly. We find that the probabilities of successful teleportation for the two strategies depend on the amplitudes of the coherent states, and that the mean fidelity of teleportation using the first strategy is always less than that of the second strategy.

  5. Discrete singular convolution method for the analysis of Mindlin plates on elastic foundations

    International Nuclear Information System (INIS)

    Civalek, Omer; Acar, Mustafa Hilmi

    2007-01-01

    The method of discrete singular convolution (DSC) is used for the bending analysis of Mindlin plates on two-parameter elastic foundations for the first time. Two different realizations of singular kernels, the regularized Shannon delta (RSD) kernel and the Lagrange delta sequence (LDS) kernel, are selected as the singular convolution kernels to illustrate the present algorithm. The methodology and procedures are presented, and bending problems of thick plates on elastic foundations are studied for different boundary conditions. The influence of the foundation parameters and shear deformation on the stress resultants and deflections of the plate has been investigated. Numerical studies are performed, and the DSC results compare well with other analytical solutions and some numerical results.
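
    For reference, the regularized Shannon delta kernel commonly used in DSC on a grid of spacing Δ is conventionally written as

    ```latex
    \delta_{\Delta,\sigma}(x) \;=\;
      \frac{\sin(\pi x/\Delta)}{\pi x/\Delta}\,
      \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),
    ```

    where σ controls the Gaussian regularization of the otherwise slowly decaying sinc kernel.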

  6. The quick convolution of galaxy profiles, with application to power-law intensity distributions

    International Nuclear Information System (INIS)

    Bailey, M.E.; Sparks, W.B.

    1983-01-01

    The two-dimensional convolution of a circularly symmetric galaxy model with a Gaussian point-spread function of dispersion σ reduces to a single integral. This is solved analytically for models with power-law intensity distributions and results are given which relate the apparent core radius to σ and the power-law index k. The convolution integral is also simplified for the case of a point-spread function corresponding to a circular aperture. Models of galactic nuclei with stellar density cusps can only be distinguished from alternatives with small core radii if both the brightness and seeing profiles are measured accurately. The results are applied to data on the light distribution at the Galactic Centre. (author)
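
    For reference, the single-integral reduction mentioned above is classical. With I(r) a circularly symmetric profile and a normalized Gaussian point-spread function of dispersion σ, the convolved profile takes the form below, where I_0 is the modified Bessel function of the first kind (this is the generic result; the paper's notation may differ).

```latex
I_{\mathrm{obs}}(R) \;=\; \frac{1}{\sigma^{2}}
\int_{0}^{\infty} I(r)\,
\exp\!\left(-\frac{R^{2}+r^{2}}{2\sigma^{2}}\right)
I_{0}\!\left(\frac{R\,r}{\sigma^{2}}\right) r\,\mathrm{d}r .
```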

  7. Paediatric frontal chest radiograph screening with fine-tuned convolutional neural networks

    CSIR Research Space (South Africa)

    Gerrand, Jonathan D

    2017-07-01

    Full Text Available of fine-tuned convolutional neural networks (CNN). We use two popular CNN models that are pre-trained on a large natural image dataset and two distinct datasets containing paediatric and adult radiographs respectively. Evaluation is performed using a 5...
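
    The fine-tuning strategy described here, reusing a CNN pre-trained on a large natural image dataset, follows a standard recipe. A minimal sketch in PyTorch, assuming a recent torchvision; the backbone, frozen layers and two-class head are illustrative choices, not the paper's configuration:

```python
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on natural images (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():      # freeze the pre-trained features
    param.requires_grad = False

# Replace the classification head, e.g. normal vs. abnormal radiograph.
model.fc = nn.Linear(model.fc.in_features, 2)
# Only model.fc.parameters() are then passed to the optimizer.
```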

  8. Convolution quotients in the production of heat in an infinite cylinder

    Energy Technology Data Exchange (ETDEWEB)

    Battig, A; Kalla, S L [Universidad Nacional de Tucuman (Argentina). Facultad de Ciencias Exactas y Tecnologia

    1974-12-01

    A solution of the problem of heat production in an infinite cylinder is considered by an appeal to the concept of convolution quotients and finite Hankel transforms. The result given by Erdelyi follows as a particular case of the result established here.

  9. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  10. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  11. Spectral-spatial classification of hyperspectral image using three-dimensional convolution network

    Science.gov (United States)

    Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu

    2018-01-01

    Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
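
    A minimal PyTorch sketch of the kind of architecture described, with 3-D convolutions sliding over both the spectral and the spatial axes of an HSI patch; layer sizes, patch size and class count are illustrative assumptions, not the authors' design:

```python
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    def __init__(self, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            # kernels span (bands, height, width), so spectral and
            # spatial structure are learned jointly
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
        )
        self.classifier = nn.LazyLinear(n_classes)  # infers flattened size

    def forward(self, x):               # x: (batch, 1, bands, H, W)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = HSI3DCNN()(torch.randn(2, 1, 30, 9, 9))  # e.g. a 9x9 spatial patch
```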

  12. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan; Zhang, Guohui; Li, Weizhi; Gu, Yi; Liang, Ru-Ze; Liang, Gaoyuan; Wang, Jingbin; Wu, Yanbin; Patil, Nitin; Wang, Jing-Yan

    2017-01-01

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image datasets show its effectiveness.

  13. A Novel Image Tag Completion Method Based on Convolutional Neural Transformation

    KAUST Repository

    Geng, Yanyan

    2017-10-24

    In the problems of image retrieval and annotation, complete textual tag lists of images play critical roles. However, in real-world applications, the image tags are usually incomplete, thus it is important to learn the complete tags for images. In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN). The method estimates the complete tags from the convolutional filtering outputs of images based on a linear predictor. The CNN parameters, the linear predictor, and the complete tags are learned jointly by our method. We build a minimization problem to encourage consistency between the complete tags and the available incomplete tags, reduce the estimation error, and reduce the model complexity. An iterative algorithm is developed to solve the minimization problem. Experiments over benchmark image datasets show its effectiveness.
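
    Read literally, the three criteria above (consistency with the known tags, estimation error, model complexity) suggest a joint objective of roughly the following shape. This is a plausible reconstruction, not the paper's actual formulation: Y is the observed incomplete tag matrix, Ω its observation mask, T the complete tags to be learned, g_θ(X) the convolutional filtering outputs of the images, and W the linear predictor.

```latex
\min_{T,\,W,\,\theta}\;
\bigl\|\Omega \odot (T - Y)\bigr\|_F^2
\;+\;
\bigl\|T - g_{\theta}(X)\,W\bigr\|_F^2
\;+\;
\lambda\,\|W\|_F^2 .
```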

  14. Fission cross-section calculations and the multi-modal fission model

    International Nuclear Information System (INIS)

    Hambsch, F.J.

    2004-01-01

    New, self-consistent, neutron-induced reaction cross-section calculations for 235,238U and 237Np have been performed. The statistical model code STATIS was improved to take into account the multi-modality of the fission process. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode, have been taken into account. De-convoluted fission cross sections for those modes for 235,238U(n,f) and 237Np(n,f), based on experimental branching ratios, were calculated for the first time up to the second-chance fission threshold. For 235U(n,f), with calculations made up to 28 MeV incident neutron energy, higher fission chances have been considered. This implied the need for additional calculations for the neighbouring isotopes. As a side product, mass yield distributions could also be calculated at energies hitherto not accessible by experiment. Experimental validation of the predictions is being envisaged

  15. Weed Growth Stage Estimator Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Teimouri, Nima; Dyrmann, Mads; Nielsen, Per Rydahl; Mathiassen, Solvejg Kopp; Somerville, Gayle J; Jørgensen, Rasmus Nyholm

    2018-05-16

    This study outlines a new method of automatically estimating weed species and growth stages (from cotyledon until eight leaves are visible) from in situ images covering 18 weed species or families. Images of weeds growing within a variety of crops were gathered across variable environmental conditions with regard to soil type, resolution and light settings. Then, 9649 of these images were used for training the network, which automatically divided the weeds into nine growth classes. The performance of this proposed convolutional neural network approach was evaluated on a further set of 2516 images, which also varied in terms of crop, soil type, image resolution and light conditions. The overall performance of this approach achieved a maximum accuracy of 78% for identifying Polygonum spp. and a minimum accuracy of 46% for blackgrass. In addition, it achieved an average 70% accuracy in estimating the number of leaves, and 96% accuracy when accepting a deviation of two leaves. These results show that this new method of using deep convolutional neural networks has a relatively high ability to estimate early growth stages across a wide variety of weed species.

  16. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    Science.gov (United States)

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training.

  17. Green function as an integral superposition of Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2016-01-01

    Roč. 26 (2016), s. 131-153 ISSN 2336-3827 R&D Projects: GA ČR(CZ) GA16-05237S Institutional support: RVO:67985530 Keywords : elastodynamic Green function * inhomogeneous anisotropic media * integral superposition of Gaussian beams Subject RIV: DC - Seismology, Volcanology, Earth Structure

  18. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    Science.gov (United States)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. Firstly, all workpieces are divided into normal and defective by image processing, and the defective workpieces extracted from the region of interest (ROI) are input to trained fully convolutional networks (FCN). The network uses the end-to-end, pixel-to-pixel training scheme that is currently the most advanced technique in semantic segmentation, and it predicts a result for each pixel. Secondly, we mark different pixel values for the workpiece, defect and background in the training images, and use the pixel values and the number of pixels to recognize the defects in the output pictures. Finally, a threshold on the defect area, chosen according to the needs of the project, is set to achieve the specific classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it overcomes the problems of complex image preprocessing and difficult feature extraction and shows better adaptability.

  19. Seismic analysis of structures of nuclear power plants by Lanczos mode superposition method

    International Nuclear Information System (INIS)

    Coutinho, A.L.G.A.; Alves, J.L.D.; Landau, L.; Lima, E.C.P. de; Ebecken, N.F.F.

    1986-01-01

    The Lanczos mode superposition method is applied in the seismic analysis of nuclear power plants. The coordinate transformation matrix is generated by the Lanczos algorithm. It is shown that, through a convenient choice of the starting vector of the algorithm, modes with significant participation factors are automatically selected. A response spectrum analysis of a typical reactor building is performed. The obtained results are compared with those determined by the classical approach, stressing the remarkable computational efficiency of the proposed methodology. (Author)

  20. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    International Nuclear Information System (INIS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.

    2017-01-01

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  1. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Fernandez, R. Castillo; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Sanchez, L. Escudero; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C. -M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Caicedo, D. A. Martinez; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y. -T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  2. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches, syndrome zero sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
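
    For concreteness, here is a compact Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal, i.e. the maximum-likelihood decoding the abstract contrasts against; the syndrome-based decoder itself is not reproduced here. A sketch, not the authors' implementation:

```python
G = [0b111, 0b101]                       # generator polynomials (7, 5)

def encode(bits, state=0):
    out = []
    for b in bits:
        reg = (b << 2) | state           # 3-bit register, newest bit first
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                 # keep the two most recent bits
    return out

def viterbi(received):
    n_states, INF = 4, float("inf")
    metric = [0] + [INF] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                ns = reg >> 1
                if m < new_metric[ns]:   # keep the survivor per state
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                               # inject a single channel error
assert viterbi(rx) == msg                # ML decoding corrects it
```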

  3. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models, performing subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, the fully-connected layers are replaced by restricted Boltzmann machine layers; the restricted Boltzmann machine layers and Softmax classifier are then retrained, and a BP neural network can be used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the whole feature maps, but also learn the statistical features of the target datasets in the sense of maximum log-likelihood, thus removing the effects caused by content differences between datasets. The experimental results show that the proposed method improves the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  4. Development of a morphological convolution operator for bearing fault detection

    Science.gov (United States)

    Li, Yifan; Liang, Xihui; Liu, Weiwei; Wang, Yan

    2018-05-01

    This paper presents a novel signal processing scheme, namely a morphological convolution operator (MCO) lifted morphological undecimated wavelet (MUDW), for rolling element bearing fault detection. In this scheme, an MCO is first designed to fully utilize the advantage of the closing & opening gradient operator and the closing-opening & opening-closing gradient operator for feature extraction, as well as the excellent denoising characteristics of the convolution operator. The MCO is then introduced into the MUDW to improve the fault detection ability of previously reported MUDWs. Experimental vibration signals collected from a train wheelset test rig and from the bearing data center of Case Western Reserve University are employed to evaluate the effectiveness of the proposed MCO lifted MUDW for fault detection of rolling element bearings. The results show that the proposed approach has superior performance in extracting fault features of defective rolling element bearings. In addition, comparisons are performed between two reported MUDWs and the proposed MCO lifted MUDW; the MCO lifted MUDW outperforms both in the detection of outer race faults and inner race faults of rolling element bearings.
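
    The general flavour of the operator, morphological gradients combined with convolutional smoothing, can be sketched as follows. This is an illustrative stand-in built from standard building blocks (scipy grey-scale morphology plus a moving-average convolution), not the paper's MCO; the structuring element and window sizes are made up.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphological_gradient_feature(signal, se_size=5, smooth_len=11):
    closing_grad = grey_closing(signal, size=se_size) - signal
    opening_grad = signal - grey_opening(signal, size=se_size)
    combined = 0.5 * (closing_grad + opening_grad)  # gradient average
    window = np.ones(smooth_len) / smooth_len       # convolutional denoising
    return np.convolve(combined, window, mode="same")

# Toy vibration signal: noise plus periodic fault-like impulses.
t = np.linspace(0.0, 1.0, 2000)
impulses = (np.sin(2 * np.pi * 30 * t) > 0.99).astype(float)
vibration = 0.2 * np.random.default_rng(1).normal(size=t.size) + impulses
envelope = morphological_gradient_feature(vibration)
```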

  5. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the amount of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods.
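
    The linear-combination construction itself is simple to state: given any single-neuron spike train kernel k, a multineuron kernel is formed as a weighted sum of the per-neuron kernel values. A sketch with a toy Gaussian spike-interaction kernel (the base kernel and the weights are illustrative, not the paper's choices); note that with non-negative weights the combination remains positive semidefinite whenever k is.

```python
import numpy as np

def k_single(x, y, tau=0.1):
    """Toy single-neuron kernel: sum of Gaussian interactions over
    all pairs of spike times (x, y are 1-D arrays of spike times)."""
    if len(x) == 0 or len(y) == 0:
        return 0.0
    d = np.subtract.outer(x, y)
    return float(np.exp(-d**2 / (2 * tau**2)).sum())

def linear_combination_kernel(X, Y, weights, base=k_single):
    """Multineuron kernel K(X, Y) = sum_i w_i * k(x_i, y_i)."""
    return sum(w * base(x, y) for w, x, y in zip(weights, X, Y))

X = [np.array([0.10, 0.35]), np.array([0.20])]   # spike times, two neurons
Y = [np.array([0.12]), np.array([0.22, 0.80])]
value = linear_combination_kernel(X, Y, weights=[0.7, 0.3])
```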

  6. Multi-focus image fusion with the all convolutional neural network

    Science.gov (United States)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining an accurate decision map is necessary for a satisfactory fusion result and is usually difficult to achieve. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
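
    Whatever network produces it, once a per-pixel decision map is in hand the spatial-domain fusion step itself is a pixel-wise selection. A minimal sketch, with a hypothetical binary map standing in for the ACNN output:

```python
import numpy as np

def fuse_with_decision_map(img_a, img_b, decision_map):
    """decision_map[i, j] = 1 where img_a is in focus, 0 where img_b is."""
    m = decision_map.astype(img_a.dtype)
    return m * img_a + (1.0 - m) * img_b

rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)
a, b = rng_a.random((64, 64)), rng_b.random((64, 64))
dmap = np.zeros((64, 64))
dmap[:, :32] = 1.0                      # left half taken from image a
fused = fuse_with_decision_map(a, b, dmap)
```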

  7. Multi-Branch Fully Convolutional Network for Face Detection

    KAUST Repository

    Bai, Yancheng

    2017-07-20

    Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and thus is more efficient, compared to previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, superior improvement in performance is registered for the task of detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, including FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on all these datasets in general and by a substantial margin on the most challenging among them (e.g. WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 x 480 with no assumption on the minimum detectable face size.

  8. Automatic segmentation of MR brain images with a convolutional neural network

    NARCIS (Netherlands)

    Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; de Vries, L.S.; Benders, M.J.N.L.; Išgum, I.

    2016-01-01

    Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure

  9. Intestinal absorption of radiocalcium. Measurement by the oral and intravenous activity ratio and by the inverse convolution method

    International Nuclear Information System (INIS)

    Monnier, L.; Collet, H.; Suquet, P.; Mirouze, J.

    1975-01-01

    The intestinal absorption of calcium was measured by a double isotopic labelling method, the results being obtained by a mathematical deconvolution technique. This analytical method was compared with the simple measurement of the plasma radioactivity ratio of the two isotopes administered orally and intravenously, respectively. The study covered 29 determinations. It was possible to estimate the total fractional absorption of calcium (TFACa) by calculating the average of the 47Ca/45Ca quotients measured at the 3rd and 8th hour after simultaneous administration of 45Ca intravenously and 47Ca by mouth. The advantages of this method are obvious: only two blood samplings are needed, and the calculations are simple yet give TFACa values comparable to those obtained by deconvolution analysis. However, the only information supplied by the quotient method is the total fractional absorption, whereas inverse convolution analysis provides several interesting parameters such as the maximum absorption and the mean transit time of radiocalcium through the intestinal wall
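
    The quotient method reduces to averaging the oral-to-intravenous tracer ratios at the two sampling times. A minimal numerical sketch with made-up plasma activities (expressed as fractions of the respective administered doses):

```python
import numpy as np

ca47_oral = np.array([0.042, 0.035])   # plasma 47Ca at hours 3 and 8
ca45_iv   = np.array([0.060, 0.050])   # plasma 45Ca at the same times

# Total fractional absorption of calcium: mean of the two quotients.
tfa_ca = float(np.mean(ca47_oral / ca45_iv))
```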

  10. Calculation of the Reaction Cross Section for Several Actinides

    International Nuclear Information System (INIS)

    Hambsch, Franz-Josef; Oberstedt, Stephan; Vladuca, Gheorghita; Tudora, Anabella; Filipescu, Dan

    2005-01-01

    New, self-consistent, neutron-induced reaction cross-section calculations for 235,238U, 237Np, and 231,232,233Pa have been performed. The statistical model code STATIS was extended to take into account the multi-modality of the fission process. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes, and the symmetric superlong (SL) mode have been taken into account. De-convoluted fission cross sections for these modes in 235,238U(n,f) and 237Np(n,f), based on experimental branching ratios, were calculated for the first time up to the second-chance fission threshold. For 235U(n,f) and 233Pa(n,f), with calculations made up to 50 MeV and 20 MeV incident neutron energy, respectively, higher fission chances have been considered. This implied the need for additional calculations for the neighbouring isotopes. As a side product, mass yield distributions could also be calculated at energies hitherto not accessible by experiment. Experimental validation of the predictions is being envisaged

  11. Solving singular convolution equations using the inverse fast Fourier transform

    Czech Academy of Sciences Publication Activity Database

    Krajník, E.; Montesinos, V.; Zizler, P.; Zizler, Václav

    2012-01-01

    Roč. 57, č. 5 (2012), s. 543-550 ISSN 0862-7940 R&D Projects: GA AV ČR IAA100190901 Institutional research plan: CEZ:AV0Z10190503 Keywords : singular convolution equations * fast Fourier transform * tempered distribution Subject RIV: BA - General Mathematics Impact factor: 0.222, year: 2012 http://www.springerlink.com/content/m8437t3563214048/

  12. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

    Science.gov (United States)

    Xie, Tian; Grossman, Jeffrey C.

    2018-04-01

    The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.

  13. Relaxation Behavior by Time-Salt and Time-Temperature Superpositions of Polyelectrolyte Complexes from Coacervate to Precipitate

    Directory of Open Access Journals (Sweden)

    Samim Ali

    2018-01-01

    Full Text Available Complexation between anionic and cationic polyelectrolytes results in solid-like precipitates or liquid-like coacervates, depending on the added salt in the aqueous medium. However, the boundary between these polymer-rich phases is quite broad, and the associated changes in polymer relaxation in the complexes across the transition regime are poorly understood. In this work, the relaxation dynamics of complexes across this transition is probed over a wide timescale by measuring viscoelastic spectra and zero-shear viscosities at varying temperatures and salt concentrations for two different salt types. We find that the complexes exhibit time-temperature superposition (TTS) at all salt concentrations, while the range of overlapping frequencies for time-temperature-salt superposition (TTSS) strongly depends on the salt concentration (C_s) and gradually shifts to higher frequencies as C_s is decreased. The sticky-Rouse model describes the relaxation behavior at all C_s. However, the collective relaxation of the polyelectrolyte complexes gradually approaches a rubbery regime and eventually exhibits a gel-like response as C_s is decreased, limiting the validity of TTSS.
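
    Operationally, building a TTS (or TTSS) master curve amounts to shifting each frequency sweep horizontally by its shift factor onto a reference curve. A minimal sketch of that bookkeeping, with vertical shift factors omitted and all data assumed pre-measured (illustrative, not the authors' analysis code):

```python
import numpy as np

def build_master_curve(sweeps, shift_factors):
    """sweeps: list of (omega, modulus) array pairs, one per temperature
    or salt concentration; shift_factors: horizontal factors a_T (or a_Cs),
    with a = 1 for the reference state."""
    omega_all, g_all = [], []
    for (omega, g), a in zip(sweeps, shift_factors):
        omega_all.append(omega * a)     # shift along the frequency axis
        g_all.append(g)
    omega_all = np.concatenate(omega_all)
    g_all = np.concatenate(g_all)
    order = np.argsort(omega_all)       # one merged, ordered master curve
    return omega_all[order], g_all[order]
```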

  14. Photon Counting Computed Tomography With Dedicated Sharp Convolution Kernels: Tapping the Potential of a New Technology for Stent Imaging.

    Science.gov (United States)

    von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem

    2018-05-23

    The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to what extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second-generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy-integrating detector and a PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and a tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in

  15. Quantum tele-amplification with a continuous-variable superposition state

    DEFF Research Database (Denmark)

    Neergaard-Nielsen, Jonas S.; Eto, Yujiro; Lee, Chang-Woo

    2013-01-01

    Optical coherent states are classical light fields with high purity, and are essential carriers of information in optical networks. If these states could be controlled in the quantum regime, allowing for their quantum superposition (referred to as a Schrödinger-cat state), then novel quantum-enhanced functions such as coherent-state quantum computing (CSQC), quantum metrology and a quantum repeater could be realized in the networks. Optical cat states are now routinely generated in laboratories. An important next challenge is to use them for implementing the aforementioned functions. Here, we demonstrate a basic CSQC protocol, where a cat state is used as an entanglement resource for teleporting a coherent state with an amplitude gain. We also show how this can be extended to a loss-tolerant quantum relay of multi-ary phase-shift keyed coherent states. These protocols could be useful in both...

  16. Abnormality Detection in Mammography using Deep Convolutional Neural Networks

    OpenAIRE

    Xi, Pengcheng; Shu, Chang; Goubran, Rafik

    2018-01-01

    Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be tra...

  17. General Dirichlet Series, Arithmetic Convolution Equations and Laplace Transforms

    Czech Academy of Sciences Publication Activity Database

    Glöckner, H.; Lucht, L.G.; Porubský, Štefan

    2009-01-01

    Roč. 193, č. 2 (2009), s. 109-129 ISSN 0039-3223 R&D Projects: GA ČR GA201/07/0191 Institutional research plan: CEZ:AV0Z10300504 Keywords : arithmetic function * Dirichlet convolution * polynomial equation * analytic equation * topological algebra * holomorphic functional calculus * implicit function theorem * Laplace transform * semigroup * complex measure Subject RIV: BA - General Mathematics Impact factor: 0.645, year: 2009 http://arxiv.org/abs/0712.3172

  18. CICAAR - Convolutive ICA with an Auto-Regressive Inverse Model

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Hansen, Lars Kai

    2004-01-01

    We invoke an auto-regressive IIR inverse model for convolutive ICA and derive expressions for the likelihood and its gradient. We argue that optimization will give a stable inverse. When there are more sensors than sources, the mixing model parameters are estimated in a second step by least squares estimation. We demonstrate the method on synthetic data and finally separate speech and music in a real room recording.

  19. Multiparticle quantum superposition and stimulated entanglement by parity selective amplification of entangled states

    International Nuclear Information System (INIS)

    Martini, F. de; Giuseppe, G. di

    2001-01-01

    A multiparticle quantum superposition state has been generated by a novel phase-selective parametric amplifier of an entangled two-photon state. This realization is expected to open a new field of investigation into the persistence of the validity of standard quantum theory for systems of increasing complexity, in a quasi decoherence-free environment. Because of its nonlocal structure, the new system is expected to play a relevant role in the modern endeavor of quantum information and in the basic physics of entanglement. (orig.)

  20. Synthetic bootstrapping of convolutional neural networks for semantic plant part segmentation

    NARCIS (Netherlands)

    Barth, R.; IJsselmuiden, J.; Hemming, J.; Henten, Van E.J.

    2017-01-01

    A current bottleneck of state-of-the-art machine learning methods for image segmentation in agriculture, e.g. convolutional neural networks (CNNs), is the requirement of large manually annotated datasets on a per-pixel level. In this paper, we investigated how related synthetic images can be used to