WorldWideScience

Sample records for acuros xb algorithm

  1. SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, L [Rajiv Gandhi Cancer Institute & Research Center, New Delhi, Delhi (India); Yadav, G; Kishore, V [Bundelkhand Institute of Engineering & Technology, Jhansi, Uttar Pradesh (India); Bhushan, M; Samuvel, K; Suhail, M [Rajiv Gandhi Cancer Institute and Research Centre, New Delhi, Delhi (India)]

    2016-06-15

    Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA) for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6 MV photon beam from a Varian Clinac-iX (2300CD) was used for the dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20 and 30 cm) measurements were performed in water for field sizes of 2×2, 4×4, 6×6, 10×10, 20×20, 30×30 and 40×40 cm². Acuros XB results were compared against measurements and against the anisotropic analytical algorithm (AAA). Results: Acuros XB showed good agreement with measurements and was comparable to AAA. PDD and profile results differed by less than one percent from measurements, and from the PDDs and profiles calculated by AAA, for all field sizes. From the TPS-calculated gamma error histograms, the average gamma errors in the PDD curves before and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; the average gamma errors in the profile curves in the central, penumbra and outside-field regions were 0.17, 0.21 and 0.42 for Acuros XB and 0.10, 0.22 and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in a water medium was satisfactory. Acuros XB has the potential to perform photon dose calculation with high accuracy, which is desirable for the modern radiotherapy environment.
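
    For context, the steady-state linear Boltzmann transport equation that grid-based solvers such as Acuros XB discretize in space, angle, and energy has the schematic form below (generic notation, not taken from the record):

        \hat{\Omega}\cdot\nabla\psi(\vec{r},E,\hat{\Omega}) + \sigma_t(\vec{r},E)\,\psi(\vec{r},E,\hat{\Omega}) = \int_0^{\infty}\!\int_{4\pi} \sigma_s(\vec{r},E'\!\to E,\hat{\Omega}'\cdot\hat{\Omega})\,\psi(\vec{r},E',\hat{\Omega}')\,\mathrm{d}\hat{\Omega}'\,\mathrm{d}E' + q(\vec{r},E,\hat{\Omega})

    Dose is then recovered from the converged fluence using the energy-deposition cross sections of either the local medium or water, which is where the dose-to-medium vs. dose-to-water reporting choice discussed in later records enters.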

  2. On the dosimetric impact of inhomogeneity management in the Acuros XB algorithm for breast treatment

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca

    2011-01-01

    A new algorithm for photon dose calculation, Acuros XB, has recently been introduced into the Eclipse (Varian) treatment planning system, allowing, similarly to classic Monte Carlo methods, for accurate modelling of dose deposition in media. The aim of the present study was to assess its behaviour in clinical cases. Datasets from ten breast patients scanned under different breathing conditions (free breathing and deep inspiration) were used to calculate dose plans with a simple two tangential field setting, using Acuros XB (versions 10 and 11) and the anisotropic analytical algorithm (AAA) for a 6 MV beam. Acuros XB calculations were performed as dose-to-medium distributions. This feature was investigated to appraise the capability of the algorithm to distinguish between different elemental compositions in the human body: lobular vs. adipose tissue in the breast, and lower (deep inspiration condition) vs. higher (free breathing condition) densities in the lung. The analysis of the two breast structures, with densities compatible with muscle and with adipose tissue, showed an average difference in dose calculation between Acuros XB and AAA of 1.6% for the muscle tissue (the lobular breast), with AAA predicting higher dose than Acuros XB, while the difference for adipose tissue was negligible. From histograms of the dose-difference plans between AAA and Acuros XB (version 10), the lung portion inside the tangential fields showed an average difference of 0.5% under free breathing, increasing to 1.5% for the deep inspiration cases, with AAA predicting higher doses than Acuros XB. In lung tissue, significant differences were also found between Acuros XB versions 10 and 11 for lower density lung. Acuros XB, unlike AAA, is capable of distinguishing between the different elemental compositions of the body, suggesting the possibility of further improving the accuracy of dose plans computed for actual patient treatments.

  3. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    Science.gov (United States)

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for volumetric modulated arc therapy (VMAT) in comparison with the standard clinical anisotropic analytic algorithm (AAA) and collapsed cone convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was made to assess the algorithm's ability to predict dose accurately as delivered, for which five clinical cases each of brain, head & neck, thoracic, pelvic and SBRT were taken. Verification plans were created on a multicube phantom with an iMatrixx-2D detector array, dose prediction was done with the AcurosXB, AAA and CCC (COMPASS system) algorithms, and the plans were delivered on a CLINAC-iX treatment machine. Delivered dose was captured in the iMatrixx plane for all 25 plans. Measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm against the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria of 3 and 2 mm distance-to-agreement and 3% and 2% dose difference in omnipro-I'MRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 and 0.9979±0.0011 for AAA, CCC and Acuros, respectively. Mean area gamma for the 3 mm/3% criterion was 98.80±1.04, 98.14±2.31 and 98.08±2.01, and for 2 mm/2% was 93.94±3.83, 87.17±10.54 and 92.36±5.46, for AAA, CCC and Acuros, respectively. Mean average gamma for 3 mm/3% was 0.26±0.07, 0.42±0.08 and 0.28±0.09, and for 2 mm/2% was 0.39±0.10, 0.64±0.11 and 0.42±0.13, for AAA, CCC and Acuros, respectively. This study demonstrated that the AcurosXB algorithm is in good agreement with AAA and CCC in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
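
    As an aside, the gamma evaluation used in this and several later records combines a dose-difference and a distance-to-agreement criterion into a single index (Low et al.). A minimal 1D sketch in Python is given below; the function name and arrays are illustrative, not the omnipro-I'MRT implementation:

        import numpy as np

        def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=2.0):
            """1D global gamma index. dd: dose criterion as a fraction of the
            maximum reference dose; dta: distance criterion in the same units
            as the positions (e.g. mm)."""
            ref_pos, ref_dose = np.asarray(ref_pos, float), np.asarray(ref_dose, float)
            eval_pos, eval_dose = np.asarray(eval_pos, float), np.asarray(eval_dose, float)
            norm = dd * ref_dose.max()              # global dose normalisation
            gamma = np.empty(ref_dose.size)
            for i in range(ref_dose.size):
                # generalised distance in the combined dose/space metric
                g = np.sqrt(((eval_pos - ref_pos[i]) / dta) ** 2 +
                            ((eval_dose - ref_dose[i]) / norm) ** 2)
                gamma[i] = g.min()
            return gamma

        # e.g. pass rate at 3%/2 mm between measured and calculated profiles:
        # pass_rate = 100.0 * np.mean(gamma_1d(x_m, d_m, x_c, d_c) <= 1.0)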

  4. A comparison of two dose calculation algorithms-anisotropic analytical algorithm and Acuros XB-for radiation therapy planning of canine intranasal tumors.

    Science.gov (United States)

    Nagata, Koichi; Pethel, Timothy D

    2017-07-01

    Although the anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method-comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, with dose calculated by the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower overall: 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, due to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.

  5. Clinical implementation and evaluation of the Acuros dose calculation algorithm.

    Science.gov (United States)

    Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M

    2017-09-01

    The main aim of this study is to validate the Acuros XB dose calculation algorithm for a Varian Clinac iX linac in our clinics, and subsequently to compare it with the widely used AAA algorithm. The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were validated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. In addition, the accuracy of Acuros was evaluated for wedge fields with wedge angles from 15° to 60°. Similarly, variable field sizes on an inhomogeneous phantom were chosen to validate the Acuros algorithm. In addition, doses calculated by Acuros and AAA at the center of lung-equivalent tissue in three different VMAT plans were compared to ion chamber measurements in a QUASAR phantom, and the dose distributions calculated by the two algorithms, and their differences, were compared on patients. Computation time for VMAT plans was also evaluated for Acuros and AAA. Differences between dose-to-water (calculated by AAA and Acuros XB) and dose-to-medium (calculated by Acuros XB) on patient plans were compared and evaluated. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculations were within 1% of measurements. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. Testing on the inhomogeneous phantom demonstrated that AAA overestimated doses by up to 8.96% at a point close to the lung/solid-water interface, while Acuros XB reduced that to 1.64%. The test on the QUASAR phantom showed that Acuros achieved better agreement in lung-equivalent tissue, while AAA underestimated dose for all VMAT plans by up to 2.7%. Acuros XB computation was about three times faster than AAA for VMAT plans, and

  6. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    International Nuclear Information System (INIS)

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-01-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm, and subsequently to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15° to 60° were used. In addition, variable field sizes on a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time was reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  7. SU-F-SPS-01: Accuracy of the Small Field Dosimetry Using Acuros XB and AAA Dose Calculation Algorithms of Eclipse Treatment Planning System Within and Beyond Heterogeneous Media for Trubeam 2.0 Unit

    International Nuclear Information System (INIS)

    Codel, G; Serin, E; Pacaci, P; Sanli, E; Cebe, M; Mabhouti, H; Doyuran, M; Kucukmorkoc, E; Kucuk, N; Altinok, A; Canoglu, D; Acar, H; Caglar Ozkok, H

    2016-01-01

    Purpose: In this study, the dosimetric accuracy of the Acuros XB and AAA algorithms was compared for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of a TrueBeam 2.0 unit (1×1, 2×2, 3×3 and 4×4 cm² fields) were used for this study. The fields were incident on a homogeneous phantom and on an in-house phantom containing lung, air, and bone inhomogeneities. Using the same film batch, the net OD to dose calibration curve was obtained on the TrueBeam 2.0 for 6 MV, 6 FFF, 10 MV, 10 FFF and 15 MV energies by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the Acuros XB and AAA algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open-field tests in the homogeneous phantom showed good agreement between the two algorithms and measurement. For Acuros XB, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.3% and 98.1% for homogeneous fields and for inhomogeneous fields in the case of lung and bone, respectively. For AAA, the minimum gamma analysis passing rates were 99.1% and 96.5% for homogeneous and inhomogeneous fields, respectively, over all energies and field sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, Acuros XB had better agreement than AAA. Conclusion: The Acuros XB calculation algorithm in the TPS is an improvement over the existing AAA algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.

  8. SU-F-SPS-01: Accuracy of the Small Field Dosimetry Using Acuros XB and AAA Dose Calculation Algorithms of Eclipse Treatment Planning System Within and Beyond Heterogeneous Media for Trubeam 2.0 Unit

    Energy Technology Data Exchange (ETDEWEB)

    Codel, G; Serin, E; Pacaci, P; Sanli, E; Cebe, M; Mabhouti, H; Doyuran, M; Kucukmorkoc, E; Kucuk, N; Altinok, A; Canoglu, D; Acar, H; Caglar Ozkok, H [Medipol University, Istanbul, Istanbul (Turkey)]

    2016-06-15

    Purpose: In this study, the dosimetric accuracy of the Acuros XB and AAA algorithms was compared for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of a TrueBeam 2.0 unit (1×1, 2×2, 3×3 and 4×4 cm² fields) were used for this study. The fields were incident on a homogeneous phantom and on an in-house phantom containing lung, air, and bone inhomogeneities. Using the same film batch, the net OD to dose calibration curve was obtained on the TrueBeam 2.0 for 6 MV, 6 FFF, 10 MV, 10 FFF and 15 MV energies by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the Acuros XB and AAA algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open-field tests in the homogeneous phantom showed good agreement between the two algorithms and measurement. For Acuros XB, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.3% and 98.1% for homogeneous fields and for inhomogeneous fields in the case of lung and bone, respectively. For AAA, the minimum gamma analysis passing rates were 99.1% and 96.5% for homogeneous and inhomogeneous fields, respectively, over all energies and field sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, Acuros XB had better agreement than AAA. Conclusion: The Acuros XB calculation algorithm in the TPS is an improvement over the existing AAA algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.

  9. Accuracy of Acuros XB and AAA dose calculation for small fields with reference to RapidArc stereotactic treatments

    International Nuclear Information System (INIS)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca

    2011-01-01

    Purpose: To assess the accuracy, against measurements, of two photon dose calculation algorithms (Acuros XB and the anisotropic analytical algorithm, AAA) for small fields usable in stereotactic treatments, with particular focus on RapidArc. Methods: Acuros XB and AAA were configured for stereotactic use. Baseline accuracy was assessed on small jaw-collimated open fields for different values of the spot size parameter in the beam data: 0.0, 0.5, 1, and 2 mm. Data were calculated with a grid of 1 × 1 mm². Investigated fields were 3 × 3, 2 × 2, 1 × 1, and 0.8 × 0.8 cm² with a 6 MV photon beam generated by a Clinac 2100iX (Varian, Palo Alto, CA). Profiles, PDDs, and output factors were measured in water with a PTW diamond detector (detector size 4 mm², thickness 0.4 mm) and compared to calculations. Four RapidArc test plans were optimized, calculated and delivered with jaw settings J3 × 3, J2 × 2, and J1 × 1 cm²; the last was optimized twice to generate high (H) and low (L) modulation patterns. Each plan consisted of one partial arc (gantry 110° to 250°) with collimator at 45°. Dose to isocenter was measured in a PTW Octavius phantom and compared to calculations. 2D measurements were performed by means of portal dosimetry with the GLAaS method developed at the authors' institute. Analysis was performed with a gamma pass-fail test with 3% dose difference and 2 mm distance-to-agreement thresholds. Results: Open square fields: penumbrae from open-field profiles were in good agreement with diamond measurements for the 1 mm spot size setting for Acuros XB, and between 0.5 and 1 mm for AAA. The maximum MU difference between calculations and measurements was 1.7% for Acuros XB (0.2% for fields greater than 1 × 1 cm²) with 0.5 or 1 mm spot size. Agreement for AAA was within 0.7% (2.8%) for 0.5 (1) mm spot size. RapidArc plans: doses were evaluated in a 4 mm diameter structure at isocenter, and computed values differed from measurements by 0.0, -0.2, 5.5, and -3.4% for

  10. Accuracy of Acuros XB and AAA dose calculation for small fields with reference to RapidArc stereotactic treatments

    Energy Technology Data Exchange (ETDEWEB)

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Cozzi, Luca [Oncology Institute of Southern Switzerland, Medical Physics Unit, CH-6500 Bellinzona (Switzerland)]

    2011-11-15

    Purpose: To assess the accuracy, against measurements, of two photon dose calculation algorithms (Acuros XB and the anisotropic analytical algorithm, AAA) for small fields usable in stereotactic treatments, with particular focus on RapidArc. Methods: Acuros XB and AAA were configured for stereotactic use. Baseline accuracy was assessed on small jaw-collimated open fields for different values of the spot size parameter in the beam data: 0.0, 0.5, 1, and 2 mm. Data were calculated with a grid of 1 × 1 mm². Investigated fields were 3 × 3, 2 × 2, 1 × 1, and 0.8 × 0.8 cm² with a 6 MV photon beam generated by a Clinac 2100iX (Varian, Palo Alto, CA). Profiles, PDDs, and output factors were measured in water with a PTW diamond detector (detector size 4 mm², thickness 0.4 mm) and compared to calculations. Four RapidArc test plans were optimized, calculated and delivered with jaw settings J3 × 3, J2 × 2, and J1 × 1 cm²; the last was optimized twice to generate high (H) and low (L) modulation patterns. Each plan consisted of one partial arc (gantry 110° to 250°) with collimator at 45°. Dose to isocenter was measured in a PTW Octavius phantom and compared to calculations. 2D measurements were performed by means of portal dosimetry with the GLAaS method developed at the authors' institute. Analysis was performed with a gamma pass-fail test with 3% dose difference and 2 mm distance-to-agreement thresholds. Results: Open square fields: penumbrae from open-field profiles were in good agreement with diamond measurements for the 1 mm spot size setting for Acuros XB, and between 0.5 and 1 mm for AAA. The maximum MU difference between calculations and measurements was 1.7% for Acuros XB (0.2% for fields greater than 1 × 1 cm²) with 0.5 or 1 mm spot size. Agreement for AAA was within 0.7% (2.8%) for 0.5 (1) mm spot size. RapidArc plans: doses were evaluated in a 4 mm diameter structure at isocenter, and computed values differed from measurements by 0.0, -0.2, 5.5, and -3.4% for

  11. Evaluation of the dose calculation accuracy for small fields defined by jaw or MLC for AAA and Acuros XB algorithms.

    Science.gov (United States)

    Fogliata, Antonella; Lobefalo, Francesca; Reggiori, Giacomo; Stravato, Antonella; Tomatis, Stefano; Scorsetti, Marta; Cozzi, Luca

    2016-10-01

    Small-field measurements are challenging due to the physical characteristics arising from the lack of charged particle equilibrium, the partial occlusion of the finite radiation source, and the detector response. These characteristics can be modeled in the dose calculations in treatment planning systems. The aim of the present work is to evaluate the MU calculation accuracy for small fields, defined by jaw or MLC, for the anisotropic analytical algorithm (AAA) and Acuros XB algorithms, relative to output measurements on the beam central axis. Single-point output factor measurements were acquired with a PTW microDiamond detector for 6 MV and for 6 and 10 MV unflattened beams generated by a Varian TrueBeam STx equipped with high-definition MLC. Fields defined by jaw or MLC apertures were set; jaw-defined: 0.6 × 0.6, 0.8 × 0.8, 1 × 1, 2 × 2, 3 × 3, 4 × 4, 5 × 5, and 10 × 10 cm²; MLC-defined: from 0.5 × 0.5 cm² up to the maximum field defined by the jaws, in 0.5 cm steps, with jaws set to 2 × 2, 3 × 3, 4 × 4, 5 × 5, and 10 × 10 cm². MU calculations were obtained with a 1 mm grid in a virtual water phantom for the same fields, for the AAA and Acuros algorithms implemented in the Varian Eclipse treatment planning system (version 13.6). Configuration parameters such as the effective spot size (ESS) and the dosimetric leaf gap (DLG) were varied to find the best parameter setting. Differences between calculated and measured doses were analyzed. Agreement better than 0.5% was found for field sizes equal to or larger than 2 × 2 cm² for both algorithms. A dose overestimation was present for smaller jaw-defined fields, with the best agreement, averaged over all the energies, of 1.6% and 4.6% for a 1 × 1 cm² field calculated by AAA and Acuros, respectively, for a configuration with ESS = 1 mm in both the X and Y directions for AAA, and ESS = 1.5 and 0 mm in the X and Y directions for Acuros. Conversely, a calculated dose underestimation was found for small MLC-defined fields, with the
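
    For illustration, the output factors underlying such comparisons are just detector readings normalised to a reference field; a minimal Python sketch with dummy readings (not the study's data, and a 10 × 10 cm² reference assumed) is:

        # dummy central-axis readings per field size (arbitrary units)
        readings = {"1x1": 0.671, "2x2": 0.786, "5x5": 0.912, "10x10": 1.000}

        ref = readings["10x10"]                    # reference field reading
        output_factors = {f: r / ref for f, r in readings.items()}

        def pct_diff(calc, meas):
            """Percent difference of a TPS-calculated OF from a measured OF."""
            return 100.0 * (calc - meas) / meas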

  12. From AAA to Acuros XB-clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium.

    Science.gov (United States)

    Zifodya, Jackson M; Challens, Cameron H C; Hsieh, Wen-Long

    2016-06-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse treatment planning system, one is faced with a dilemma of reporting either dose to medium, AXB-Dm, or dose to water, AXB-Dw. To assist with decision making on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients previously treated using AAA plans were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of prescription dose) for target volumes with high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw relative to AAA and between AXB-Dw relative to AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when transitioning from AAA to AXB, no significant change to existing prescription protocols is expected. As most of the clinical experience is dose-to-water based and calibration protocols and clinical trials are
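
    For background, the two reporting modes are commonly related through the water-to-medium mass collision stopping power ratio averaged over the local electron spectrum (standard Bragg-Gray form; not stated in the record):

        D_w \approx D_m \cdot \left( \bar{S}/\rho \right)^{w}_{m}

    The ratio is close to unity in soft tissue but can differ by several percent in bone, which is consistent with the few-percent AXB-Dw vs. AXB-Dm differences reported above for non-water media.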

  13. From AAA to Acuros XB-clinical implications of selecting either Acuros XB dose-to-water or dose-to-medium

    International Nuclear Information System (INIS)

    Zifodya, Jackson M.; Challens, Cameron H.C.; Hsieh, Wen-Long

    2016-01-01

    When implementing Acuros XB (AXB) as a substitute for the anisotropic analytic algorithm (AAA) in the Eclipse treatment planning system, one is faced with a dilemma of reporting either dose to medium, AXB-Dm, or dose to water, AXB-Dw. To assist with decision making on selecting either AXB-Dm or AXB-Dw for dose reporting, a retrospective study of treated patients for head & neck (H&N), prostate, breast and lung is presented. Ten patients previously treated using AAA plans were selected for each site and re-planned with AXB-Dm and AXB-Dw. Re-planning was done with fixed monitor units (MU) as well as non-fixed MUs. Dose volume histograms (DVH) of targets and organs at risk (OAR) were analyzed in conjunction with ICRU-83 recommended dose reporting metrics. Additionally, comparisons of plan homogeneity indices (HI) and MUs were done to further highlight the differences between the algorithms. Results showed that, on average, AAA overestimated dose to the target volume and OARs by less than 2.0%. Comparisons between AXB-Dw and AXB-Dm, for all sites, also showed overall dose differences to be small (<1.5%). However, in non-water biological media, dose differences between AXB-Dw and AXB-Dm as large as 4.6% were observed. AXB-Dw also tended to have unexpectedly high 3D maximum dose values (>135% of prescription dose) for target volumes with high-density materials. Homogeneity indices showed that AAA planning and optimization templates would need to be adjusted only for the H&N and lung sites. MU comparison showed insignificant differences between AXB-Dw relative to AAA and between AXB-Dw relative to AXB-Dm. However, AXB-Dm MUs relative to AAA showed an average difference of about 1.3%, signifying an underdosage by AAA. In conclusion, when dose is reported as AXB-Dw, the effect that high-density structures in the PTV have on the dose distribution should be carefully considered. As the results show overall small dose differences between the algorithms, when transitioning from AAA to AXB, no significant change to existing prescription protocols is expected. As most of the clinical experience is dose-to-water based and calibration protocols and clinical trials are

  14. SU-E-T-339: Dosimetric Verification of Acuros XB Dose Calculation Algorithm On An Air Cavity for 6-MV Flattening Filter-Free Beam

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J

    2015-01-01

    Purpose: This study was to verify the accuracy of the Acuros XB (AXB) dose calculation algorithm in an air cavity for a single radiation field using a 6-MV flattening filter-free (FFF) beam. Methods: A rectangular slab phantom containing an air cavity was made for this study. The CT images of the phantom for dose calculation were scanned with and without film at the measurement depths (4.5, 5.5, 6.5 and 7.5 cm). The central-axis doses (CADs) and the off-axis doses (OADs) were measured by film and calculated with the analytical anisotropic algorithm (AAA) and AXB for field sizes ranging from 2 × 2 to 5 × 5 cm² of 6-MV FFF beams. Calculations were labelled AXB-w and AAA-w when the film was included in the phantom, and AXB-w/o and AAA-w/o when calculated without film. The calculated OADs for both algorithms were compared with the measured OADs, and differences were quantified using root-mean-square error (RMSE) and gamma evaluation. Results: The percentage differences (%Diffs) between the measured and calculated CADs showed the best agreement for AXB-w. Comparing the %Diffs with and without film, the %Diffs with film were smaller for both algorithms. The %Diffs for both algorithms decreased with increasing field size and increased with depth. RMSEs for AXB-w were within 10.32% for both inner profile and penumbra, while the corresponding values for AAA-w reached 96.50%. Conclusion: This study demonstrated that dose calculation with AXB within an air cavity is more accurate than with AAA when compared to the measured dose. Furthermore, we found that AXB-w was superior to AXB-w/o in this region when compared against the measurements.
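
    The root-mean-square error used above to compare calculated and measured off-axis doses reduces to a one-liner; a Python sketch with hypothetical array names:

        import numpy as np

        def rmse(calc, meas):
            """Root-mean-square error between calculated and measured doses."""
            calc, meas = np.asarray(calc, float), np.asarray(meas, float)
            return np.sqrt(np.mean((calc - meas) ** 2))

        # as a percentage of the measured maximum, e.g. for a film profile:
        # rmse_pct = 100.0 * rmse(oad_axb, oad_film) / np.max(oad_film)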

  15. Dosimetric impact of Acuros XB deterministic radiation transport algorithm for heterogeneous dose calculation in lung cancer

    International Nuclear Information System (INIS)

    Han Tao; Followill, David; Repchak, Roman; Molineu, Andrea; Howell, Rebecca; Salehpour, Mohammad; Mikell, Justin; Mourtada, Firas

    2013-01-01

    Purpose: The novel deterministic radiation transport algorithm Acuros XB (AXB) has shown great potential for accurate heterogeneous dose calculation. However, the clinical impact of moving between AXB and other currently used algorithms still needs to be elucidated. The purpose of this study was to investigate the impact of AXB for heterogeneous dose calculation in lung cancer for intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The thorax phantom from the Radiological Physics Center (RPC) was used for this study. IMRT and VMAT plans were created for the phantom in the Eclipse 11.0 treatment planning system. Each plan was delivered to the phantom three times using a Varian Clinac iX linear accelerator to ensure reproducibility. Thermoluminescent dosimeters (TLDs) and Gafchromic EBT2 film were placed inside the phantom to measure delivered doses. The measurements were compared with dose calculations from AXB 11.0.21 and the anisotropic analytical algorithm (AAA) 11.0.21. Two dose reporting modes of AXB, dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m), were studied. Point doses, dose profiles, and gamma analysis were used to quantify the agreement between measurements and calculations from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: For the RPC lung phantom, AAA and AXB dose predictions were found to be in good agreement with TLD and film measurements for both IMRT and VMAT plans. TLD measurements were within 0.4%-4.4% of AXB doses (both Dm,m and Dw,m) and within 2.5%-6.4% of AAA doses. For the film comparisons, the gamma indexes (±3%/3 mm criteria) were 94%, 97%, and 98% for AAA, AXB Dm,m, and AXB Dw,m, respectively. The differences between AXB and AAA in dose-volume histogram mean doses were within 2% in the planning target volume, lung, and heart, and within 5% in the spinal cord. However

  16. SU-F-T-628: An Evaluation of Grid Size in Eclipse AcurosXB Dose Calculation Algorithm for SBRT Lung

    Energy Technology Data Exchange (ETDEWEB)

    Pokharel, S [21st Century Oncology, Naples, FL (United States); Rana, S [McLaren Proton Therapy Center, Karmanos Cancer Institute at McLaren-Flint, Flint, MI (United States)

    2016-06-15

    Purpose: The purpose of this study is to evaluate the effect of grid size in the Eclipse AcurosXB dose calculation algorithm for SBRT lung. Methods: Five previously treated SBRT lung cases were chosen for the present study. Four of the plans were 5-field conventional IMRT and one was a RapidArc plan. All five cases were calculated with the five grid sizes (1, 1.5, 2, 2.5 and 3 mm) available for the AXB algorithm, with the same plan normalization. Dosimetric indices relevant to SBRT, along with MUs and calculation time, were recorded for the different grid sizes. The maximum difference was calculated as a percentage of the mean of all five values. All plans underwent IMRT QA with portal dosimetry. Results: The maximum difference in MUs was within 2%. Calculation time increased by as much as a factor of 7 from the largest (3 mm) to the smallest (1 mm) grid size. The largest differences in PTV minimum, maximum and mean dose were 7.7%, 1.5% and 1.6%, respectively. The highest D2-Max difference was 6.1%. The highest differences in ipsilateral lung mean dose, V5Gy, V10Gy and V20Gy were 2.6%, 2.4%, 1.9% and 3.8%, respectively. The maximum differences in heart, cord and esophagus dose were 6.5%, 7.8% and 4.02%, respectively. The IMRT gamma passing rate at 2%/2mm remained within 1.5%, with at least 98% of points passing, for all grid sizes. Conclusion: This work indicates that the smallest grid size of 1 mm available in AXB is not necessarily required for accurate dose calculation. No significant change in the IMRT passing rate was observed when reducing the grid size below 2 mm. Although the maximum percentage differences of some of the dosimetric indices appear large, most of them are clinically insignificant in absolute dose values. We therefore conclude that a 2 mm grid size is the best compromise between dose calculation accuracy and the time it takes to calculate dose.
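
    The record's "maximum difference as a percentage of the mean of all five values" maps onto a few lines of Python (dummy numbers, not the study's data):

        import numpy as np

        # one dosimetric index evaluated at the five AXB grid sizes; dummy data
        ptv_min = np.array([47.2, 48.9, 49.6, 50.1, 50.3])   # Gy

        max_diff_pct = 100.0 * (ptv_min.max() - ptv_min.min()) / ptv_min.mean()
        print(f"max difference: {max_diff_pct:.1f}% of mean")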

  17. SU-E-T-122: Anisotropic Analytical Algorithm (AAA) Vs. Acuros XB (AXB) in Stereotactic Treatment Planning

    International Nuclear Information System (INIS)

    Mynampati, D; Scripes, P Godoy; Kuo, H; Yaparpalvi, R; Tome, W

    2015-01-01

    Purpose: To evaluate dosimetric differences between the superposition beam model (AAA) and the deterministic photon transport solver (AXB) in lung SBRT and cranial SRS dose computations. Methods: Ten cranial SRS and ten lung SBRT plans calculated with Varian AAA 11.0 were re-planned using Acuros XB 11.0 with fixed MU. A 6 MV photon beam model with HD120 MLC was used for dose calculations. Four non-coplanar conformal arcs were used to deliver 21 Gy or 18 Gy to the SRS targets (0.4 to 6.2 cc). 54 Gy (3 fractions) or 50 Gy (5 fractions) was planned for the SBRT targets (7.3 to 13.9 cc) using two VMAT non-coplanar arcs. Plan comparison parameters were dose to 1% of the PTV volume (D1), dose to 99% of the PTV volume (D99), target mean dose (Dmean), conformity index (CI, ratio of prescription isodose volume to PTV), homogeneity index (HI, [D2% - D98%]/Dmean) and R50 (ratio of 50% prescription isodose volume to PTV). OAR parameters were brain volume receiving 12 Gy (V12Gy) and maximum dose (D0.03) to the brainstem for SRS. For lung SBRT, maximum dose to heart and cord, mean lung dose (MLD) and volume of lung receiving 20 Gy (V20Gy) were computed. PTV parameters were compared by the percentage difference between AXB and AAA; OAR parameters and HI were compared by the absolute difference between the two calculations. For analysis, a paired t-test was performed over the parameters. Results: Compared to AAA, AXB SRS plans have on average 3.2% lower D99, 6.5% lower CI and 3 cc less brain V12. AXB SBRT plans, however, have higher D1, R50 and Dmean by 3.15%, 1.63% and 2.5%, respectively. For SRS and SBRT, AXB plans have an average HI 2% and 4.4% higher than AAA plans. In both techniques, all other parameters vary within 1% or 1 Gy. In both sets, only two parameters have P > 0.05. Conclusion: Even though the t-test results signify differences between AXB and AAA plans, the differences in dose estimates by the two algorithms are clinically insignificant.
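
    The plan metrics defined in this record translate directly into code; a small Python sketch (argument names are hypothetical):

        def conformity_index(piv_cc, ptv_cc):
            """CI: prescription isodose volume / PTV volume."""
            return piv_cc / ptv_cc

        def homogeneity_index(d2, d98, dmean):
            """HI: (D2% - D98%) / Dmean, all in Gy."""
            return (d2 - d98) / dmean

        def r50(half_rx_isodose_cc, ptv_cc):
            """R50: volume of the 50% prescription isodose / PTV volume."""
            return half_rx_isodose_cc / ptv_cc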

  18. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    International Nuclear Information System (INIS)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S

    2016-01-01

    Purpose: To study the influence of the superposition beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6 MV using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing against RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from -9 to +15%), reduced target coverage (-1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD - PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy, with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to algorithm

  19. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Yaparpalvi, R; Mynampati, D; Kuo, H; Garg, M; Tome, W; Kalnicki, S [Montefiore Medical Center, Bronx, NY (United States)]

    2016-06-15

    Purpose: To study the influence of the superposition beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6 MV using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung - GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing against RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from -9 to +15%), reduced target coverage (-1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD - PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo based dose predictions in accuracy, with relatively faster computational time. For clinical practice, revisiting dose-fractionation in lung SBRT to correct for dose overestimates attributable to algorithm

  20. Dosimetric comparison of peripheral NSCLC SBRT using Acuros XB and AAA calculation algorithms.

    Science.gov (United States)

    Ong, Chloe C H; Ang, Khong Wei; Soh, Roger C X; Tin, Kah Ming; Yap, Jerome H H; Lee, James C L; Bragg, Christopher M

    2017-01-01

    There is concern about dose calculation in highly heterogeneous environments such as the thorax region. This study compares the quality of treatment plans for peripheral non-small cell lung cancer (NSCLC) stereotactic body radiation therapy (SBRT) using 2 calculation algorithms, namely the Eclipse Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB), for 3-dimensional conformal radiation therapy (3DCRT) and volumetric-modulated arc therapy (VMAT). Four-dimensional computed tomography (4DCT) data from 20 anonymized patients were studied using the Varian Eclipse planning system with AXB and AAA version 10.0.28. A 3DCRT plan and a VMAT plan were generated using AAA and AXB with constant plan parameters for each patient. The prescription and dose constraints were benchmarked against the Radiation Therapy Oncology Group (RTOG) 0915 protocol. Planning parameters were compared statistically using Mann-Whitney U tests. Results showed that 3DCRT and VMAT plans have up to 8% lower target coverage when calculated using AXB as compared with AAA. The conformity index (CI) for AXB plans was 4.7% lower than for AAA plans, but was closer to unity, indicating better target conformity. AXB produced plans with global maximum doses that were, on average, 2% hotter than AAA plans. Both 3DCRT and VMAT plans were able to achieve the D95% requirement. VMAT plans were shown to be more conformal (CI = 1.01) and were at least 3.2% and 1.5% lower in terms of PTV maximum and mean dose, respectively. There was no statistically significant difference in doses received by organs at risk (OARs), regardless of calculation algorithm or treatment technique. In general, the difference in tissue modeling between AXB and AAA is responsible for the differences in dose distribution between the two algorithms. The AXB VMAT plans could be used to benefit patients receiving peripheral NSCLC SBRT. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
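
    The statistical comparison described here corresponds to SciPy's two-sample Mann-Whitney U test; a sketch with placeholder metric lists (not the study's data):

        from scipy.stats import mannwhitneyu

        # a plan metric (e.g. conformity index) per patient under each algorithm
        ci_aaa = [1.06, 1.08, 1.05, 1.09, 1.07]
        ci_axb = [1.01, 1.02, 1.00, 1.03, 1.02]

        stat, p = mannwhitneyu(ci_aaa, ci_axb, alternative="two-sided")
        print(f"U = {stat}, p = {p:.3f}")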

  1. SU-E-T-290: Dosimetric Accuracy of Acuros XB and Analytical Anisotropic Algorithm in Stereotactic Ablative Radiotherapy of Small Lung Lesions

    International Nuclear Information System (INIS)

    Yu, Amy S; Yang, Y; Bush, K; Fahimian, B; Hsu, A

    2015-01-01

    Purpose: The accuracy of dose calculation for lung stereotactic ablative radiotherapy (SABR) of small lesions critically depends on proper modeling of the lateral scatter in heterogeneous media. In recent years, grid-based Boltzmann solvers such as Acuros XB (AXB) have been introduced for enhanced modeling of radiation transport in heterogeneous media. The purpose of this study is to evaluate the dosimetric impact of dose calculation between AXB and convolution-superposition algorithms such as the analytical anisotropic algorithm (AAA) for small lesion sizes and different beam energies. Methods: Five lung SABR VMAT cases with GTVs ranging from 0.8 cm to 2.5 cm in diameter were studied. For each case, doses were calculated with AAA, AXB (V11031) and Monte Carlo simulation (MC) with the same plan parameters for 10 MV and 6 MV. Dose calculation accuracy was evaluated by comparing DVHs and dose distributions, with MC as the benchmark. The accuracy of the calculated dose was also validated by EBT3 film measurement with a field size of 3 cm × 3 cm in a thorax phantom. Results: For 10 MV and GTVs smaller than 1 cm, dose calculated by AXB agrees well with MC compared to AAA; the difference between doses calculated with AXB and AAA could be up to 30%. For GTVs greater than 2 cm, the results of AXB and AAA agree within 5% in the GTV. For 6 MV, the difference between doses calculated by AXB and AAA is less than 10% for GTVs smaller than 1 cm. Based on film measurements, lung dose was overestimated by 10% and 20% by AAA for 6 MV and 10 MV, respectively. Conclusion: Lateral scatter and transport are modeled more accurately by AXB than by AAA in heterogeneous media, especially for small field sizes and high-energy beams. The accuracy depends on the material assigned in the calculation. If grid-based Boltzmann solvers or MC are not available for calculation, lower-energy beams should be used for treatment.

  2. Dosimetric accuracy and clinical quality of Acuros XB and AAA dose calculation algorithm for stereotactic and conventional lung volumetric modulated arc therapy plans

    International Nuclear Information System (INIS)

    Kroon, Petra S; Hol, Sandra; Essers, Marion

    2013-01-01

    The main aim of the current study was to assess the dosimetric accuracy and clinical quality of volumetric modulated arc therapy (VMAT) plans for stereotactic (stage I) and conventional (stage III) lung cancer treatments planned with the Eclipse version 10.0 Anisotropic Analytical Algorithm (AAA) and Acuros XB (AXB) algorithm. The dosimetric impact of using AAA instead of AXB, and grid size 2.5 mm instead of 1.0 mm, for VMAT treatment plans was evaluated. The clinical plan quality of AXB VMAT was assessed using 45 stage I and 73 stage III patients, and was compared with published results planned with VMAT and hybrid-VMAT techniques. The dosimetric impact on near-minimum PTV dose (D98%) of using AAA instead of AXB was large (underdose up to 12.3%) for stage I and very small (underdose up to 0.8%) for stage III lung treatments. There were no significant differences in dose volume histogram (DVH) values between grid sizes. The calculation time was significantly higher for AXB grid size 1.0 mm than 2.5 mm (p < 0.01). The clinical quality of the VMAT plans was at least comparable with the clinical quality reported in the literature for lung treatment plans with VMAT and hybrid-VMAT techniques. The average mean lung dose (MLD), lung V20Gy and V5Gy in this study were respectively 3.6 Gy, 4.1% and 15.7% for the 45 stage I patients, and 12.4 Gy, 19.3% and 46.6% for the 73 stage III lung patients. The average contralateral lung dose V5Gy-cont was 35.6% for stage III patients. For stereotactic and conventional lung treatments, VMAT calculated with AXB at grid size 2.5 mm resulted in accurate dose calculations. No hybrid technique was needed to obtain the dose constraints. AXB is recommended instead of AAA to avoid serious overestimation of the minimum target doses compared to the actually delivered dose.
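
    The lung DVH metrics reported here (MLD, V20Gy, V5Gy) follow directly from the voxel doses of the lungs-minus-GTV structure; a Python sketch assuming equal voxel volumes:

        import numpy as np

        def lung_metrics(dose_gy):
            """Mean lung dose (Gy) and Vx (% of volume receiving >= x Gy)."""
            dose_gy = np.asarray(dose_gy, float)
            mld = dose_gy.mean()
            v20 = 100.0 * np.mean(dose_gy >= 20.0)
            v5 = 100.0 * np.mean(dose_gy >= 5.0)
            return mld, v20, v5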

  3. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, S. A. M.; Ansbacher, W. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada); Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada) and Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Centre, Victoria, British Columbia V8R 6V5 (Canada)]

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented in the Eclipse treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo were used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm³) volume at its center. The algorithms were also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results were compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment the Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of the clinical prostate plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements

  4. Experimental validation of deterministic Acuros XB algorithm for IMRT and VMAT dose calculations with the Radiological Physics Center's head and neck phantom

    International Nuclear Information System (INIS)

    Han Tao; Mourtada, Firas; Kisling, Kelly; Mikell, Justin; Followill, David; Howell, Rebecca

    2012-01-01

    Purpose: The purpose of this study was to verify the dosimetric performance of Acuros XB (AXB), a grid-based Boltzmann solver, in intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Methods: The Radiological Physics Center (RPC) head and neck (H&N) phantom was used for all calculations and measurements in this study. Clinically equivalent IMRT and VMAT plans were created on the RPC H&N phantom in the Eclipse treatment planning system (version 10.0) using the RPC dose prescription specifications. The dose distributions were calculated with two different algorithms, AXB 11.0.03 and the anisotropic analytical algorithm (AAA) 10.0.24. Two dose reporting modes of AXB were recorded: dose-to-medium in medium (Dm,m) and dose-to-water in medium (Dw,m). Each treatment plan was delivered to the RPC phantom three times for reproducibility using a Varian Clinac iX linear accelerator. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Profile comparison and 2D gamma analysis were used to quantify the agreement between the film measurements and the calculated dose distributions from both AXB and AAA. The computation times for AAA and AXB were also evaluated. Results: Good agreement was observed between measured doses and those calculated with AAA or AXB. Both AAA and AXB calculated doses within 5% of TLD measurements in both the IMRT and VMAT plans. Results of AXB Dm,m (0.1% to 3.6%) were slightly better than AAA (0.2% to 4.6%) or AXB Dw,m (0.3% to 5.1%). The gamma analysis for both AAA and AXB met the RPC 7%/4 mm criteria (over 90% passed), whereas AXB Dm,m met 5%/3 mm criteria in most cases. AAA was 2 to 3 times faster than AXB for IMRT, whereas AXB was 4-6 times faster than AAA for VMAT. Conclusions: AXB was found to be satisfactorily accurate when compared to measurements in the RPC H&N phantom. Compared with AAA, AXB results were equal

  5. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    International Nuclear Information System (INIS)

    Soh, R; Lee, J; Harianto, F

    2014-01-01

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for stereotactic body radiation therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc, EGSnrc. The correction factor was calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. The correction factor obtained by EGSnrc is expected to be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library; it therefore only approximates the compositions of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: The correction factors obtained by EGSnrc are expected to be more accurate. Studies will be done to investigate the correction factors at higher energies, where perturbation may be more pronounced.
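
    The correction factor defined in this record is simply the ratio of the dose scored in the undisturbed medium to the dose scored with the detector present; a sketch:

        def tld_correction_factor(dose_unperturbed, dose_perturbed):
            """k = D(medium, no detector) / D(TLD present), per the record."""
            return dose_unperturbed / dose_perturbed

        # a factor of 0.9 means the TLD over-responds by ~11% in this geometry,
        # so the raw TLD reading would be scaled down by 0.9 to estimate the
        # unperturbed lung dose.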

  6. SU-F-T-449: Dosimetric Comparison of Acuros XB, Adaptive Convolve in Intensity Modulated Radiotherapy for Head and Neck Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, R [National Cancer Center Hospital East, Kashiwa, Chiba (Japan); Tachibana, H [National Cancer Center, Kashiwa, Chiba (Japan)

    2016-06-15

    Purpose: There have been several publications focusing on dose calculation in lung for the new Acuros XB (AXB) dose calculation algorithm. AXB could also contribute to dose calculation for high-density media such as bone and dental prostheses, rather than only lung. We compared the dosimetric performance of AXB and Adaptive Convolve (AC) in head and neck IMRT plans. Methods: In a phantom study, the difference in depth profile between AXB and AC was evaluated using Kodak EDR2 film sandwiched between Tough Water phantom slabs, irradiated with a 6 MV x-ray beam from a TrueBeam. In a patient study, 20 head and neck IMRT plans that had been clinically approved in Pinnacle3 were transferred to Eclipse. The dose distribution was recalculated using AXB in Eclipse while maintaining the AC-calculated monitor units and MLC sequence planned in Pinnacle. Subsequently, the dose-volumetric data obtained using the two different calculation algorithms were compared. Results: The phantom evaluation for the shallow area ahead of the build-up region shows overdose for AXB and underdose for AC. In the patient plans, AXB shows more hot spots than AC, especially around the high-density media, in terms of PTV (maximum difference: 4.0%) and OAR (maximum difference: 1.9%). Compared to AC, there were larger dose deviations in steep dose-gradient regions and higher skin dose. Conclusion: In head and neck IMRT plans, AXB and AC show different dosimetric performance for regions inside the target volume around high-density media, in steep dose-gradient regions, and at the skin surface. There are limitations in assessing skin dose and complex anatomical conditions even with an inhomogeneous anthropomorphic phantom. Thus, there is the potential for an increase of hot spots with AXB, and an underestimation of dose at substance boundaries and in skin regions with AC.

  7. SU-F-T-449: Dosimetric Comparison of Acuros XB, Adaptive Convolve in Intensity Modulated Radiotherapy for Head and Neck Cancer

    International Nuclear Information System (INIS)

    Uehara, R; Tachibana, H

    2016-01-01

    Purpose: There have been several publications focusing on dose calculation in lung for the new Acuros XB (AXB) dose calculation algorithm. AXB could also contribute to dose calculation for high-density media such as bone and dental prostheses, rather than only lung. We compared the dosimetric performance of AXB and Adaptive Convolve (AC) in head and neck IMRT plans. Methods: In a phantom study, the difference in depth profile between AXB and AC was evaluated using Kodak EDR2 film sandwiched between tough water phantoms, irradiated with a 6 MV x-ray beam from a TrueBeam. In a patient study, 20 head and neck IMRT plans that had been clinically approved in Pinnacle3 were transferred to Eclipse. Dose distributions were recalculated using AXB in Eclipse while maintaining the AC-calculated monitor units and MLC sequences planned in Pinnacle. Subsequently, the dose-volumetric data obtained using the two different calculation algorithms were compared. Results: In the phantom evaluation, the shallow area ahead of the build-up region showed overdose for AXB and underdose for AC. In the patient plans, AXB showed more hot spots than AC, especially around high-density media, in terms of the PTV (max. difference: 4.0%) and OARs (max. difference: 1.9%). Compared to AC, there were larger dose deviations in steep dose gradient regions and a higher skin dose. Conclusion: In head and neck IMRT plans, AXB and AC show different dosimetric performance for regions inside the target volume around high-density media, in steep dose gradient regions, and at the skin surface. There are limitations in assessing skin dose and complex anatomic conditions even with an inhomogeneous anthropomorphic phantom. Thus, there is the potential for an increase of hot spots with AXB, and an underestimation of dose at material boundaries and in skin regions with AC.

  8. Comparison of Acuros (AXB) and Anisotropic Analytical Algorithm (AAA) for dose calculation in treatment of oesophageal cancer: effects on modelling tumour control probability.

    Science.gov (United States)

    Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A

    2014-12-23

    To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics, and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB, and GTV mean dose was reduced on average by 1.0 Gy (0.3 Gy - 1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than in the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans. Differences in dose distribution are observed with VMAT and CRT plans recalculated with AXB, particularly within soft tissue at the tumour/lung interface, where AXB has been shown to more accurately calculate the dose distribution.
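
    The TCP modelling step above maps a GTV dose change onto a control-probability change. As a generic illustration (not one of the three published models used in the study), the sketch below evaluates a Poisson/linear-quadratic TCP with assumed, illustrative parameter values.

      # Generic Poisson/LQ TCP sketch; parameters are illustrative assumptions,
      # not those of the study's three literature models.
      import math

      def tcp_poisson_lq(total_dose, dose_per_fraction, n0=1e7, alpha=0.3, beta=0.03):
          """TCP = exp(-N_surviving) with linear-quadratic cell kill."""
          surviving = n0 * math.exp(-(alpha + beta * dose_per_fraction) * total_dose)
          return math.exp(-surviving)

      # A ~1 Gy drop in GTV dose (50 Gy -> 49 Gy at 2 Gy/fraction) shifts TCP:
      print(f"{tcp_poisson_lq(50, 2.0):.3f} vs {tcp_poisson_lq(49, 2.0):.3f}")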

  9. Assistance tool for the commissioning of new algorithms in ionizing radiation therapy planning systems

    International Nuclear Information System (INIS)

    Reinado, D.; Ricos, B.; Alonso, S.; Chinillach, N.; Bellido, P.; Tortosa, R.

    2013-01-01

    Commissioning a new dose calculation algorithm involves a large number of working hours and measurements. In order to streamline the commissioning of the AAA and Acuros XB algorithms within the Eclipse (v.10) planning system marketed by Varian, we have developed a tool in Microsoft Excel format that compiles the different tests to be performed. (Author)

  10. Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer

    International Nuclear Information System (INIS)

    Tsuruta, Yusuke; Nakata, Manabu; Higashimura, Kyoji; Nakamura, Mitsuhiro; Matsuo, Yukinori; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro

    2014-01-01

    Purpose: To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Methods: Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profile. The radiation treatment machine Novalis (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three different dose calculation algorithms were then compared. Results: The results from AXB and XVMC agreed with measurements within ±3.0% for the lung-equivalent phantom with a 6 × 6 cm² field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC in terms of the maximum doses of the ITV and PTV; however, the differences were within ±3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distribution calculations that were closer to those of XVMC than did AAA. Mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124-358 s), 66.1 ± 16.0 s (range, 42-94 s), and 6.7 ± 1.1 s (range, 5-9 s) for XVMC, AXB, and AAA, respectively. Conclusions: In the phantom evaluations, AXB and XVMC agreed better with measurements than did AAA.

  11. Dosimetric comparison of Acuros XB, AAA, and XVMC in stereotactic body radiotherapy for lung cancer.

    Science.gov (United States)

    Tsuruta, Yusuke; Nakata, Manabu; Nakamura, Mitsuhiro; Matsuo, Yukinori; Higashimura, Kyoji; Monzen, Hajime; Mizowaki, Takashi; Hiraoka, Masahiro

    2014-08-01

    To compare the dosimetric performance of Acuros XB (AXB), the anisotropic analytical algorithm (AAA), and x-ray voxel Monte Carlo (XVMC) in heterogeneous phantoms and lung stereotactic body radiotherapy (SBRT) plans. Water- and lung-equivalent phantoms were combined to evaluate the percentage depth dose and dose profile. The radiation treatment machine Novalis (BrainLab AG, Feldkirchen, Germany) with an x-ray beam energy of 6 MV was used to calculate the doses in the composite phantom at a source-to-surface distance of 100 cm with a gantry angle of 0°. Subsequently, the clinical lung SBRT plans for 26 consecutive patients were transferred from iPlan (ver. 4.1; BrainLab AG) to the Eclipse treatment planning system (ver. 11.0.3; Varian Medical Systems, Palo Alto, CA). The doses were then recalculated with AXB and AAA while maintaining the XVMC-calculated monitor units and beam arrangement. The dose-volumetric data obtained using the three different dose calculation algorithms were then compared. The results from AXB and XVMC agreed with measurements within ± 3.0% for the lung-equivalent phantom with a 6 × 6 cm(2) field size, whereas AAA values were higher than measurements in the heterogeneous zone and near the boundary, with the greatest difference being 4.1%. AXB and XVMC agreed well with measurements in terms of the profile shape at the boundary of the heterogeneous zone. For the lung SBRT plans, AXB yielded lower values than XVMC in terms of the maximum doses of the ITV and PTV; however, the differences were within ± 3.0%. In addition to the dose-volumetric data, the dose distribution analysis showed that AXB yielded dose distribution calculations that were closer to those of XVMC than did AAA. Mean ± standard deviation of the computation time was 221.6 ± 53.1 s (range, 124-358 s), 66.1 ± 16.0 s (range, 42-94 s), and 6.7 ± 1.1 s (range, 5-9 s) for XVMC, AXB, and AAA, respectively. In the phantom evaluations, AXB and XVMC agreed better with measurements than did AAA.
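
    A minimal sketch of the tolerance check quoted above (calculated vs measured depth doses within ±3.0%); the sampled values are hypothetical.

      # Sketch: percent difference of calculated vs measured depth doses and a
      # ±3% tolerance check. The sampled values are hypothetical.
      import numpy as np

      measured   = np.array([100.0, 86.5, 70.2, 55.8])  # % depth dose (measured)
      calculated = np.array([99.4, 87.1, 71.3, 56.4])   # algorithm values, same depths

      pct_diff = 100.0 * (calculated - measured) / measured
      print(pct_diff.round(2), "all within ±3%:", bool(np.all(np.abs(pct_diff) <= 3.0)))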

  12. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Tajaldeen, A [RMIT university, Docklands, Vic (Australia); Ramachandran, P [Peter MacCallum Cancer Centre, Bendigo (Australia); Geso, M [RMIT University, Bundoora, Melbourne (Australia)

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and pencil beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution and pencil beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  13. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    International Nuclear Information System (INIS)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-01-01

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms were Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and pencil beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± SD conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution and pencil beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of
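
    For reference, one common form of the conformity index compared above is the ratio of the prescription isodose volume to the target volume. The sketch below computes it on a toy dose grid; the grid, masks and voxel size are assumptions for illustration.

      # Sketch: RTOG-style conformity index, CI = (prescription isodose volume)
      # / (target volume). The dose grid and target mask are toy data.
      import numpy as np

      def conformity_index(dose, target_mask, prescription, voxel_cc):
          piv = np.count_nonzero(dose >= prescription) * voxel_cc
          tv = np.count_nonzero(target_mask) * voxel_cc
          return piv / tv

      dose = np.full((40, 40, 40), 20.0)   # Gy, low background dose
      dose[13:27, 13:27, 13:27] = 52.0     # region covered by the prescription dose
      target = np.zeros(dose.shape, dtype=bool)
      target[15:25, 15:25, 15:25] = True   # target sits inside the isodose region
      print(f"CI = {conformity_index(dose, target, 50.0, 0.027):.2f}")  # ~2.74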

  14. Comparison of Acuros (AXB) and Anisotropic Analytical Algorithm (AAA) for dose calculation in treatment of oesophageal cancer: effects on modelling tumour control probability

    International Nuclear Information System (INIS)

    Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A

    2014-01-01

    To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics, and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB, and GTV mean dose was reduced on average by 1.0 Gy (0.3 Gy - 1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than in the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans.

  15. Dosimetric evaluation of photon dose calculation under jaw and MLC shielding

    International Nuclear Information System (INIS)

    Fogliata, A.; Clivio, A.; Vanetti, E.; Nicolini, G.; Belosi, M. F.; Cozzi, L.

    2013-01-01

    Purpose: The accuracy of photon dose calculation algorithms in out-of-field regions is often neglected, despite its importance for organs at risk and peripheral dose evaluation. The present work has assessed this for the anisotropic analytical algorithm (AAA) and the Acuros-XB algorithm implemented in the Eclipse treatment planning system. Specifically, the regions shielded by the jaw, by the MLC, or by both MLC and jaw have been studied for flattened and unflattened beams. Methods: The accuracy of out-of-field dose under different conditions was studied for the two algorithms. Measured depth doses out of the field, for different field sizes and various distances from the beam edge, were compared with the corresponding AAA and Acuros-XB calculations in water. Four volumetric modulated arc therapy plans (in the RapidArc form) were optimized in a water-equivalent phantom, PTW Octavius, to obtain a region always shielded by the MLC (or MLC and jaw) during the delivery. Doses to different points located in the shielded region and in a target-like structure were measured with an ion chamber, and results were compared with the AAA and Acuros-XB calculations. Photon beams of 6 and 10 MV, flattened and unflattened, were used for the tests. Results: Good agreement between calculated and measured depth doses was found using both algorithms for all points measured at depths greater than 3 cm. The mean dose differences (±1SD) were −8% ± 16%, −3% ± 15%, −16% ± 18%, and −9% ± 16% for measurements vs AAA calculations, and −10% ± 14%, −5% ± 12%, −19% ± 17%, and −13% ± 14% for Acuros-XB, for 6X, 6 flattening-filter free (FFF), 10X, and 10FFF beams, respectively. The same figures for dose differences relative to the open-beam central axis dose were: −0.1% ± 0.3%, 0.0% ± 0.4%, −0.3% ± 0.3%, and −0.1% ± 0.3% for AAA, and −0.2% ± 0.4%, −0.1% ± 0.4%, −0.5% ± 0.5%, and −0.3% ± 0.4% for Acuros-XB. Buildup dose was overestimated with AAA, while Acuros-XB gave
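
    The two normalisations reported above behave very differently for the tiny doses out of the field: a few hundredths of a gray can be a large local percentage but a negligible fraction of the central-axis dose. A minimal sketch, with hypothetical point values:

      # Sketch: out-of-field dose difference normalised two ways, as above.
      # Point doses are hypothetical.
      def local_diff_pct(calc, meas):
          return 100.0 * (calc - meas) / meas       # relative to local dose

      def cax_diff_pct(calc, meas, d_cax):
          return 100.0 * (calc - meas) / d_cax      # relative to open-beam CAX dose

      calc, meas, d_cax = 0.018, 0.020, 1.0         # Gy
      print(f"local: {local_diff_pct(calc, meas):+.0f}%, "
            f"vs CAX: {cax_diff_pct(calc, meas, d_cax):+.1f}%")  # -10% vs -0.2%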

  16. Assistance tool for the commissioning of new algorithms in ionizing radiation therapy planning systems; Herramienta de asistencia en el comisionado de nuevos algoritmos de sistemas de planificacion de terapia con radiaciones ionizantes

    Energy Technology Data Exchange (ETDEWEB)

    Reinado, D.; Ricos, B.; Alonso, S.; Chinillach, N.; Bellido, P.; Tortosa, R.

    2013-07-01

    Commissioning a new dose calculation algorithm involves a large number of working hours and measurements. In order to streamline the commissioning of the AAA and Acuros XB algorithms within the Eclipse (v.10) planning system marketed by Varian, we have developed a tool in Microsoft Excel format that compiles the different tests to be performed. (Author)

  17. SU-F-T-413: Calculation Accuracy of AAA and Acuros Using Cerrobend Blocks for TBI at 400cm

    International Nuclear Information System (INIS)

    Lamichhane, N; Studenski, M

    2016-01-01

    Purpose: It is essential to assess the lung dose during TBI to reduce toxicity. Here we characterize the accuracy of the AAA and Acuros algorithms when using cerrobend lung shielding blocks at an extended distance for TBI. Methods: We positioned a 30×30×30 cm3 solid water slab phantom at 400 cm SSD and measured PDDs (Exradin A12 and PTW parallel-plate ion chambers). A 2 cm thick, 10×10 cm2 cerrobend block was hung 2 cm in front of the phantom. This geometry was reproduced in the planning system for both AAA and Acuros. In AAA, the mass density of the cerrobend block was forced to 9.38 g/cm3, and in Acuros it was forced to 8.0 g/cm3 (limited to selecting stainless steel). Three different relative electron densities (RED) were tested for each algorithm: 4.97, 6.97, and 8.97. Results: PDDs from both Acuros and AAA underestimated the delivered dose. The AAA-calculated depth dose was higher for an RED of 4.97 than for 6.97 and 8.97, but still lower than measured. There was no change in the percent depth dose with changing relative electron density for Acuros. Conclusion: Care should be taken before using AAA or Acuros with cerrobend blocks, as the planning system underestimates the dose. Acuros limits the ability to modify the RED compared to AAA.

  18. SU-F-T-413: Calculation Accuracy of AAA and Acuros Using Cerrobend Blocks for TBI at 400cm

    Energy Technology Data Exchange (ETDEWEB)

    Lamichhane, N; Studenski, M [University of Miami, Miami, FL (United States)

    2016-06-15

    Purpose: It is essential to assess the lung dose during TBI to reduce toxicity. Here we characterize the accuracy of the AAA and Acuros algorithms when using cerrobend lung shielding blocks at an extended distance for TBI. Methods: We positioned a 30×30×30 cm3 solid water slab phantom at 400 cm SSD and measured PDDs (Exradin A12 and PTW parallel-plate ion chambers). A 2 cm thick, 10×10 cm2 cerrobend block was hung 2 cm in front of the phantom. This geometry was reproduced in the planning system for both AAA and Acuros. In AAA, the mass density of the cerrobend block was forced to 9.38 g/cm3, and in Acuros it was forced to 8.0 g/cm3 (limited to selecting stainless steel). Three different relative electron densities (RED) were tested for each algorithm: 4.97, 6.97, and 8.97. Results: PDDs from both Acuros and AAA underestimated the delivered dose. The AAA-calculated depth dose was higher for an RED of 4.97 than for 6.97 and 8.97, but still lower than measured. There was no change in the percent depth dose with changing relative electron density for Acuros. Conclusion: Care should be taken before using AAA or Acuros with cerrobend blocks, as the planning system underestimates the dose. Acuros limits the ability to modify the RED compared to AAA.

  19. Experimental verification of the Acuros XB and AAA dose calculation adjacent to heterogeneous media for IMRT and RapidArc of nasopharyngeal carcinoma.

    Science.gov (United States)

    Kan, Monica W K; Leung, Lucullus H T; So, Ronald W K; Yu, Peter K N

    2013-03-01

    To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within heterogeneous media, using intensity modulated radiation therapy (IMRT) and RapidArc(®) (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Two-dimensional dose distributions immediately adjacent to both air and bone inserts of a rectangular tissue-equivalent phantom, irradiated using IMRT and RA plans for NPC cases, were measured with GafChromic(®) EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLD) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both the dose-to-medium (AXB_Dm) and dose-to-water (AXB_Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. For the verification of the planar dose distribution adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of the planar dose distribution within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where air and bone were present, the film measurements represented the dose close to unit-density water
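
    The gamma pass rates quoted above combine a dose tolerance and a distance-to-agreement. The sketch below is a minimal 1-D global gamma (3%/3 mm) over synthetic profiles; real film QA uses 2-D grids, dose thresholds and interpolation, so this is an illustration of the principle only.

      # Minimal 1-D global gamma-index sketch (3%/3 mm); profiles are synthetic.
      import numpy as np

      def gamma_1d(ref, evl, x, dose_tol=0.03, dist_tol=3.0):
          """Gamma per reference point; dose_tol is a fraction of max(ref)."""
          dd = dose_tol * ref.max()
          g = np.empty_like(ref)
          for i, (xi, di) in enumerate(zip(x, ref)):
              cap = np.sqrt(((x - xi) / dist_tol) ** 2 + ((evl - di) / dd) ** 2)
              g[i] = cap.min()
          return g

      x = np.linspace(0, 100, 201)                    # mm
      ref = 100 * np.exp(-((x - 50) / 20) ** 2)       # "measured" profile
      evl = 100 * np.exp(-((x - 50.5) / 20) ** 2)     # "calculated", 0.5 mm shift
      g = gamma_1d(ref, evl, x)
      print(f"pass rate = {100 * np.mean(g <= 1.0):.1f}%")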

  20. Experimental verification of the Acuros XB and AAA dose calculation adjacent to heterogeneous media for IMRT and RapidArc of nasopharyngeal carcinoma

    International Nuclear Information System (INIS)

    Kan, Monica W. K.; Leung, Lucullus H. T.; So, Ronald W. K.; Yu, Peter K. N.

    2013-01-01

    Purpose: To compare the doses calculated by the Acuros XB (AXB) algorithm and the analytical anisotropic algorithm (AAA) with experimentally measured data adjacent to and within heterogeneous media, using intensity modulated radiation therapy (IMRT) and RapidArc® (RA) volumetric arc therapy plans for nasopharyngeal carcinoma (NPC). Methods: Two-dimensional dose distributions immediately adjacent to both air and bone inserts of a rectangular tissue-equivalent phantom, irradiated using IMRT and RA plans for NPC cases, were measured with GafChromic® EBT3 films. Doses near and within the nasopharyngeal (NP) region of an anthropomorphic phantom containing heterogeneous media were also measured with thermoluminescent dosimeters (TLD) and EBT3 films. The measured data were then compared with the data calculated by AAA and AXB. For AXB, dose calculations were performed using both the dose-to-medium (AXB_Dm) and dose-to-water (AXB_Dw) options. Furthermore, target dose differences between AAA and AXB were analyzed for the corresponding real patients. The comparison of real patient plans was performed by stratifying the targets into components of different densities, including tissue, bone, and air. Results: For the verification of the planar dose distribution adjacent to air and bone using the rectangular phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3mm criteria were 98.7%, 99.5%, and 97.7% on the axial plane for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans, while they were 97.6%, 98.2%, and 97.7%, respectively, on the coronal plane. For the verification of the planar dose distribution within the NP region of the anthropomorphic phantom, the percentages of pixels that passed the gamma analysis with the ±3%/3mm criteria were 95.1%, 91.3%, and 99.0% for AAA, AXB_Dm, and AXB_Dw, respectively, averaged over all IMRT and RA plans. Within the NP region where air and bone were present, the film measurements represented the dose close to unit-density water

  1. Dose-to-medium vs. dose-to-water: Dosimetric evaluation of dose reporting modes in Acuros XB for prostate, lung and breast cancer

    Directory of Open Access Journals (Sweden)

    Suresh Rana

    2014-12-01

    Purpose: The Acuros XB (AXB) dose calculation algorithm is available for external beam photon dose calculations in the Eclipse treatment planning system (TPS). AXB can report the absorbed dose in two modes: dose-to-water (Dw) and dose-to-medium (Dm). The main purpose of this study was to compare the dosimetric results of AXB_Dm with those of AXB_Dw on real patient treatment plans. Methods: Four groups of patients (prostate cancer, stereotactic body radiation therapy (SBRT) lung cancer, left breast cancer, and right breast cancer) were selected for this study, and each group consisted of 5 cases. The treatment plans of all cases were generated in the Eclipse TPS. For each case, treatment plans were computed using AXB_Dw and AXB_Dm for identical beam arrangements. Dosimetric evaluation was done by comparing various dosimetric parameters of the AXB_Dw plans with those of the AXB_Dm plans for the corresponding patient case. Results: For prostate cancer, the mean planning target volume (PTV) dose in the AXB_Dw plans was higher by up to 1.0%, whereas the mean PTV dose was within ±0.3% for SBRT lung cancer. The analysis of organ-at-risk (OAR) results for prostate cancer showed that AXB_Dw plans consistently produced higher values for the bladder and femoral heads, but not for the rectum. For SBRT lung cancer, a clear trend was seen for the heart mean dose and spinal cord maximum dose, with AXB_Dw plans producing higher values than the AXB_Dm plans. However, the difference in lung doses between the AXB_Dm and AXB_Dw plans did not always show a clear trend, with differences ranging from -1.4% to 2.9%. For both left and right breast cancer, the AXB_Dm plans produced a higher maximum dose to the PTV for all cases. The evaluation of the maximum dose to the skin showed higher values in the AXB_Dm plans for all 5 left breast cancer cases, whereas only 2 cases had a higher maximum skin dose in the AXB_Dm plans for right breast cancer.
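
    Conceptually, the two reporting modes above differ by a conversion applied in non-water media: dose-to-water is obtained from dose-to-medium via the water-to-medium stopping-power ratio of the local material. The sketch below illustrates the idea; the ratio values are rough, assumed magnitudes, not AXB's internal data.

      # Sketch: Dm -> Dw conversion via water-to-medium stopping-power ratios.
      # The ratios are rough illustrative magnitudes, not AXB's internal data.
      SPR_WATER_TO_MEDIUM = {
          "soft tissue": 1.00,     # tissue is nearly water-equivalent
          "lung": 1.00,
          "cortical bone": 1.11,   # Dw exceeds Dm noticeably in bone
      }

      def dose_to_water(dose_to_medium_gy, medium):
          return dose_to_medium_gy * SPR_WATER_TO_MEDIUM[medium]

      print(f"{dose_to_water(2.00, 'cortical bone'):.2f} Gy")  # Dm 2.00 Gy -> Dw ~2.22 Gy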

  2. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    Science.gov (United States)

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung with heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: analytical anisotropic algorithm (AAA), pencil beam convolution (PBC) and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the Voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed with clinical lung treatments under identical planning conditions, and the dose distributions and the dose volume histogram (DVH) were compared among algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and an energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT.

  3. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of); Chung, J [Dept. of Radiation Oncology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2015-06-15

    Purpose: To verify the doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors, in terms of beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations affected by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted by Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified based on skin doses measured right above the pacemaker with the MOSFET detectors during the radiation treatment. Results: Acuros-XB underestimated the skin doses and overestimated the dose reduction from the lead shield, although the dose disagreement was small. Dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet effectively reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique could deliver scattered doses lower than the recommendation criteria; however, use of the lead sheet contributed to a further reduction of scattered doses. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of the scattered dose to patients with medical devices is required to design a proper dose reduction strategy.

  4. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    International Nuclear Information System (INIS)

    Lee, J; Chung, J

    2015-01-01

    Purpose: To verify the doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors, in terms of beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), the analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose delivered to the pacemaker. Dose variations affected by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted by Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified based on skin doses measured right above the pacemaker with the MOSFET detectors during the radiation treatment. Results: Acuros-XB underestimated the skin doses and overestimated the dose reduction from the lead shield, although the dose disagreement was small. Dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet effectively reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique could deliver scattered doses lower than the recommendation criteria; however, use of the lead sheet contributed to a further reduction of scattered doses. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of the scattered dose to patients with medical devices is required to design a proper dose reduction strategy.

  5. SU-E-T-800: Verification of Acuros XB Dose Calculation Algorithm at Air Cavity-Tissue Interface Using Film Measurement for Small Fields of 6-MV Flattening Filter-Free Beams

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J

    2015-01-01

    Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface using an inhomogeneous phantom for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom including an air cavity was manufactured for verifying dose accuracy at the air-tissue interface. The phantom contained air cavities of 1 and 3 cm thickness. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm² fields of 6 MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. Root mean square errors (RMSE) were analyzed for the calculated and measured dose profiles. Dose profiles were divided into an inner-profile (>80%) region and a penumbra (20% to 80%) region for evaluating the RMSE. To quantify the distribution difference, gamma evaluation was used and agreement was determined with 3%/3mm criteria. Results: For the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, similarly for both algorithms. For the RMSEs of the inner profile, AXB was more accurate than AAA; the difference was up to 6 times, due to overestimation by AAA. RMSEs in the penumbra showed larger differences with increasing measurement depth. Gamma agreement also showed that the passing rates decreased in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions with AXB, for both the inner profile and the penumbra, showed better agreement than with AAA across the measurement depths and air cavity sizes investigated.
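
    The region-wise RMSE scoring above splits each profile by its dose level before comparing calculation to measurement. A minimal sketch with synthetic, hypothetical profiles:

      # Sketch: RMSE of calculated vs measured profiles, split into the inner
      # (>80%) and penumbra (20-80%) regions as above. Profiles are synthetic.
      import numpy as np

      def region_rmse(meas, calc):
          norm = meas / meas.max() * 100.0
          inner = norm > 80.0
          penumbra = (norm >= 20.0) & (norm <= 80.0)
          rmse = lambda m: float(np.sqrt(np.mean((calc[m] - meas[m]) ** 2)))
          return rmse(inner), rmse(penumbra)

      x = np.linspace(-30, 30, 121)                          # mm off-axis
      meas = 100.0 / (1 + np.exp((np.abs(x) - 15) / 2.0))    # sigmoid-edged field
      calc = 100.0 / (1 + np.exp((np.abs(x) - 15.3) / 2.2))  # slightly blurred model
      print("RMSE inner / penumbra: %.2f / %.2f" % region_rmse(meas, calc))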

  6. SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ng, C; Chan, S; Lee, F; Ngan, R [Queen Elizabeth Hospital (Hong Kong); Lee, V [University of Hong Kong, Hong Kong, HK (Hong Kong)

    2016-06-15

    Purpose: The accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: first, the accuracy of the CT numbers, and second, the dose calculation accuracy itself. We compared measured doses with doses calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water in which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the deposited dose. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at almost glancing incidence on the metal rods. Measurements were then compared with doses calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AXB and images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AXB underestimated the dose directly above the metal surface and overestimated the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When planning treatment for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, large discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose to critical structures that come into contact with metals.

  7. SU-F-SPS-11: The Dosimetric Comparison of Truebeam 2.0 and Cyberknife M6 Treatment Plans for Brain SRS Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Mabhouti, H; Sanli, E; Cebe, M; Codel, G; Pacaci, P; Serin, E; Kucuk, N; Kucukmorkoc, E; Doyuran, M; Canoglu, D; Altinok, A; Acar, H; Caglar Ozkok, H [Medipol University, Istanbul, Istanbul (Turkey)

    2016-06-15

    Purpose: Brain stereotactic radiosurgery involves the use of precisely directed, single-session radiation to create a desired radiobiologic response within the brain target with acceptable minimal effects on surrounding structures or tissues. In this study, a dosimetric comparison of Truebeam 2.0 and Cyberknife M6 treatment plans was made. Methods: For the Truebeam 2.0 machine, treatment planning was done using a 2 full-arc VMAT technique with 6 FFF beams on a CT scan of a Rando phantom, simulating stereotactic treatment of one brain metastasis. The dose distribution was calculated using the Eclipse treatment planning system with the Acuros XB algorithm. Treatment planning for the same target was also done for the Cyberknife M6 machine with the Multiplan treatment planning system using a Monte Carlo algorithm. Using the same film batch, the net OD to dose calibration curve was obtained on both machines by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions were measured using EBT3 film dosimeters, and the measured and calculated doses were compared. Results: The dose distributions in the target and 2 cm beyond the target edge were calculated on the TPSs and measured using EBT3 film. For the Cyberknife plans, the gamma analysis passing rates between measured and calculated dose distributions were 99.2% and 96.7% for the target and the peripheral region of the target, respectively. For the Truebeam plans, the gamma analysis passing rates were 99.1% and 95.5% for the target and the peripheral region of the target, respectively. Conclusion: Although the target dose distribution was calculated accurately by both the Acuros XB and Monte Carlo algorithms, the Monte Carlo algorithm predicted the dose distribution in the peripheral region of the target more accurately than the Acuros algorithm.
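
    The net-OD-to-dose calibration step above maps scanner readings to dose via a fitted curve. A minimal sketch, with hypothetical netOD readings and a cubic polynomial fit (one common choice among several):

      # Sketch: film calibration, dose as a function of net optical density.
      # The netOD readings are hypothetical; a cubic fit is one common choice.
      import numpy as np

      dose_cgy = np.array([0, 100, 200, 400, 600, 800], dtype=float)  # delivered
      net_od   = np.array([0.0, 0.11, 0.20, 0.34, 0.44, 0.52])        # measured

      dose_of = np.poly1d(np.polyfit(net_od, dose_cgy, deg=3))
      print(f"netOD 0.30 -> {dose_of(0.30):.0f} cGy")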

  8. SU-F-T-609: Impact of Dosimetric Variation for Prescription Dose Using Analytical Anisotropic Algorithm (AAA) in Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Kawai, D [Kanagawa Cancer Center, Yokohama, Kanagawa (Japan); Takahashi, R [Cancer Institute Hospital of Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Kamima, T [Cancer Institute Hospital Japanese Foundation for Cancer Research, Koto, Tokyo (Japan); Baba, H [The National Cancer Center Hospital East, Kshiwa, Chiba (Japan); Yamamoto, T; Kubo, Y [Otemae Hospital, Chuo-ku, Osaka (Japan); Ishibashi, S; Higuchi, Y [Sasebo City General Hospital, Sasebo, Nagasaki (Japan); Tani, K [St Luke’s International Hospital, Tokyo, Tokyo (Japan); Tachibana, H [National Cancer Center, Kashiwa, Chiba (Japan)

    2016-06-15

    Purpose: The actual prescription dose delivered to patients cannot be verified directly. Thus, independent dose verification and a second treatment planning system are used as a secondary check. The AAA dose calculation engine has contributed to lung SBRT. We conducted a multi-institutional study to assess the variation of the prescription dose for lung SBRT when using AAA, in reference to Acuros XB and a Clarkson algorithm. Methods: Six institutes in Japan participated in this study. All SBRT treatments were planned using AAA in Eclipse and Adaptive Convolve (AC) in Pinnacle3. All of the institutes used the same independent dose verification software program (Simple MU Analysis: SMU, Triangle Product, Ishikawa, Japan), which implements a Clarkson-based dose calculation algorithm using the CT image dataset. A retrospective analysis of lung SBRT plans (73 patients) was performed to compute the confidence limit (CL, average ± 2SD) of the dose difference between AAA and SMU. In one of the institutes, an additional analysis was conducted to evaluate the variation between AAA and Acuros XB (AXB). Results: The CL against SMU showed larger systematic and random errors of 8.7 ± 9.9% for AAA than the 5.7 ± 4.2% for AC. The variations for AAA correlated with the mean CT values in the voxels of the PTV (correlation coefficient: −0.7). The comparison of AXB vs. AAA showed smaller systematic and random errors of −0.7 ± 1.7%. The correlation between the dose variations for AXB and the mean CT values in the PTV was weak (0.4). However, there were several plans exceeding the 2% deviation of AAPM TG-114 (maximum: −3.3%). Conclusion: In comparison with AC, the prescription dose calculated by AAA may be more variable in lung SBRT patients. Even the AXB comparison shows unexpected variation. Care should be taken in the use of AAA for lung SBRT. This research is partially supported by the Japan Agency for Medical Research and Development (AMED)
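
    The confidence limit above is just mean ± 2 SD of the per-plan dose deviations, and the reported correlation relates those deviations to the mean CT number in the PTV. A sketch with hypothetical per-plan values:

      # Sketch: confidence limit (mean ± 2 SD) and correlation of per-plan dose
      # deviations with mean PTV CT number. The data points are hypothetical.
      import numpy as np

      dev_pct = np.array([7.1, 12.3, 4.8, 15.2, 9.6, 3.0])       # AAA vs SMU, %
      mean_hu = np.array([-650, -780, -540, -820, -700, -500])   # mean PTV HU

      two_sd = 2 * dev_pct.std(ddof=1)
      r = np.corrcoef(dev_pct, mean_hu)[0, 1]
      print(f"CL = {dev_pct.mean():.1f} ± {two_sd:.1f} %, r = {r:.2f}")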

  9. SU-D-204-07: Retrospective Correlation of Dose Accuracy with Regions of Local Failure for Early Stage Lung Cancer Patients Treated with Stereotactic Body Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Devpura, S; Li, H; Liu, C; Fraser, C; Ajlouni, M; Movsas, B; Chetty, I [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To correlate dose distributions computed using six algorithms for recurrent early-stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT) with outcome (local failure). Methods: Of 270 NSCLC patients treated with 12 Gy × 4, 20 were found to have local recurrence prior to the 2-year time point. These patients were originally planned with a 1-D pencil beam (1-D PB) algorithm. 4D imaging was performed to manage tumor motion. Regions of local failure were determined from follow-up PET-CT scans. Follow-up CT images were rigidly fused to the planning CT (pCT), and recurrent tumor volumes (Vrecur) were mapped to the pCT. Dose was recomputed, retrospectively, using five algorithms: 3-D PB, collapsed cone convolution (CCC), the anisotropic analytical algorithm (AAA), AcurosXB, and Monte Carlo (MC). Tumor control probability (TCP) was computed using the Marsden model (1,2). Patterns of failure were classified as central, in-field, marginal, and distant for Vrecur ≥95% of the prescribed dose, 95-80%, 80-20%, and ≤20%, respectively (3). Results: Average PTV D95 (dose covering 95% of the PTV) for 3-D PB, CCC, AAA, AcurosXB, and MC relative to 1-D PB was 95.3±2.1%, 84.1±7.5%, 84.9±5.7%, 86.3±6.0%, and 85.1±7.0%, respectively. TCP values for 1-D PB, 3-D PB, CCC, AAA, AcurosXB, and MC were 98.5±1.2%, 95.7±3.0%, 79.6±16.1%, 79.7±16.5%, 81.1±17.5%, and 78.1±20%, respectively. Patterns of local failure were similar for the 1-D and 3-D PB plans, which predicted that the majority of failures occur in central/distal regions, with only ∼15% occurring distantly. However, with convolution/superposition and MC-type algorithms, the majority of failures (65%) were predicted to be distant, consistent with the literature. Conclusion: Based on MC and convolution/superposition type algorithms, average PTV D95 and TCP were ∼15% lower than in the planned 1-D PB dose calculation. The patterns-of-failure results suggest that MC and convolution
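
    The failure-pattern labels above follow fixed coverage thresholds on the recurrent volume. A minimal sketch of that classification rule:

      # Sketch: pattern-of-failure classification by the prescription-dose
      # coverage of the recurrent volume, per the thresholds quoted above.
      def classify_failure(coverage_pct):
          if coverage_pct >= 95.0:
              return "central"
          if coverage_pct >= 80.0:
              return "in-field"
          if coverage_pct > 20.0:
              return "marginal"
          return "distant"

      for c in (98.0, 85.0, 50.0, 10.0):
          print(f"{c:5.1f}% coverage -> {classify_failure(c)}")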

  10. Acuros XB: dosimetric impact of low-density heterogeneous geometric artifacts; Acuros XB: impacto dosimetrico de artefactos geometricos heterogeneos de baja densidad

    Energy Technology Data Exchange (ETDEWEB)

    Jurado Bruggeman, D.; Munoz Montplet, C.; Agramunt Chaler, S.; Bueno Vizcarra, M.; Duch Guillen, M. A.

    2013-07-01

    AXB represents a step forward in calculation algorithms for treatment planning. In some situations, the dose distributions it provides are significantly different from those obtained with other commercial algorithms, which poses new challenges for the evaluation and optimization of treatments. (Author)

  11. AAA and AXB algorithms for the treatment of nasopharyngeal carcinoma using IMRT and RapidArc techniques.

    Science.gov (United States)

    Kamaleldin, Maha; Elsherbini, Nader A; Elshemey, Wael M

    2017-09-27

    The aim of this study is to evaluate the impact of the anisotropic analytical algorithm (AAA) and the two reporting modes (AXB-Dm and AXB-Dw) of the Acuros XB algorithm (AXB) on clinical plans of nasopharyngeal patients using intensity-modulated radiotherapy (IMRT) and RapidArc (RA) techniques. Six plans of different algorithm-technique combinations were generated for 10 patients to calculate dose-volume histogram (DVH) physical parameters for planning target volumes (PTVs) and organs at risk (OARs). The number of monitor units (MUs) and the calculation time were also determined. Good coverage was reported for all algorithm-technique combination plans without exceeding the tolerances for OARs. Regardless of the algorithm, RA plans consistently reported higher D2% values for PTV-70. All IMRT plans required a higher number of MUs (especially with AXB) than RA plans. AAA-IMRT produced the minimum calculation time of all plans. Major differences between the investigated algorithm-technique combinations were reported only for the number of MUs and the calculation time. In terms of these two parameters, it is recommended to employ AXB in calculating RA plans and AAA in calculating IMRT plans, to achieve minimum calculation times at a reduced number of MUs. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  12. An investigation into the accuracy of Acuros(TM) BV in heterogeneous phantoms for a (192)Ir HDR source using LiF TLDs.

    Science.gov (United States)

    Manning, Siobhan; Nyathi, Thulani

    2014-09-01

    The aim of this study was to evaluate the accuracy of the new Acuros(TM) BV algorithm using well characterized LiF:Mg,Ti TLD 100 in heterogeneous phantoms. TLDs were calibrated using an (192)Ir source and the AAPM TG-43 calculated dose. The Tölli and Johansson Large Cavity principle and Modified Bragg Gray principle methods confirm the dose calculated by TG-43 at a distance of 5 cm from the source to within 4 %. These calibrated TLDs were used to measure the dose in heterogeneous phantoms containing air, stainless steel, bone and titanium. The TLD results were compared with the AAPM TG-43 calculated dose and the Acuros calculated dose. Previous studies by other authors have shown a change in TLD response with depth when irradiated with an (192)Ir source. This TLD depth dependence was assessed by performing measurements at different depths in a water phantom with an (192)Ir source. The variation in the TLD response with depth in a water phantom was not found to be statistically significant for the distances investigated. The TLDs agreed with Acuros(TM) BV within 1.4 % in the air phantom, 3.2 % in the stainless steel phantom, 3 % in the bone phantom and 5.1 % in the titanium phantom. The TLDs showed a larger discrepancy when compared to TG-43 with a maximum deviation of 9.3 % in the air phantom, -11.1 % in the stainless steel phantom, -14.6 % in the bone phantom and -24.6 % in the titanium phantom. The results have shown that Acuros accounts for the heterogeneities investigated with a maximum deviation of -5.1 %. The uncertainty associated with the TLDs calibrated in the PMMA phantom is ±8.2 % (2SD).

  13. SU-F-T-545: Dosimetric and Radiobiological Evaluation of Dose Calculation Algorithms On Prostate Stereotactic Body Radiotherapy Using Conventional Flattened and Flattening-Filter-Free Beam

    International Nuclear Information System (INIS)

    Kang, S; Suh, T; Chung, J; Eom, K; Lee, J

    2016-01-01

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytical Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and were then recalculated using AXB with the same MUs and MLC files. The four types of plans, for the different algorithms and beam modes, were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans had comparable dosimetric results with different dose calculation algorithms as well as delivery beam modes. Regarding the biological results, even though the NTCP values for both calculation algorithms and beam modes were similar, the AXB plans produced slightly lower TCP than the AAA plans.

  14. SU-F-T-545: Dosimetric and Radiobiological Evaluation of Dose Calculation Algorithms On Prostate Stereotactic Body Radiotherapy Using Conventional Flattened and Flattening-Filter-Free Beam

    Energy Technology Data Exchange (ETDEWEB)

    Kang, S; Suh, T [The catholic university of Korea, Seoul (Korea, Republic of); Chung, J; Eom, K [Seoul National University Bundang Hospital (Korea, Republic of); Lee, J [Konkuk University Medical Center (Korea, Republic of)

    2016-06-15

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytical Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and were then recalculated using AXB with the same MUs and MLC files. The four types of plans, for the different algorithms and beam modes, were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans had comparable dosimetric results with different dose calculation algorithms as well as delivery beam modes. Regarding the biological results, even though the NTCP values for both calculation algorithms and beam modes were similar, the AXB plans produced slightly lower TCP than the AAA plans.
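
    For reference, NTCP comparisons of the kind above are often made with a Lyman-Kutcher-Burman model; the sketch below shows its basic form with assumed, illustrative parameters (the study's actual model and parameters are not stated here).

      # Generic Lyman-Kutcher-Burman NTCP sketch; TD50, m and the EUD input
      # are illustrative assumptions, not the study's parameters.
      import math

      def lkb_ntcp(eud_gy, td50_gy=80.0, m=0.15):
          t = (eud_gy - td50_gy) / (m * td50_gy)
          return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # standard normal CDF

      print(f"NTCP = {lkb_ntcp(65.0):.3f}")   # EUD of 65 Gy -> ~0.11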

  15. Evaluation of six TPS algorithms in computing entrance and exit doses

    Science.gov (United States)

    Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%‐3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.N‐, 87.53.Bn PMID:24892349

  16. SU-F-SPS-04: Dosimetric Evaluation of the Dose Calculation Accuracy of Different Algorithms for Two Different Treatment Techniques During Whole Breast Irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Pacaci, P; Cebe, M; Mabhouti, H; Codel, G; Serin, E; Sanli, E; Kucukmorkoc, E; Doyuran, M; Kucuk, N; Canoglu, D; Altinok, A; Acar, H; Caglar Ozkok, H [Medipol University, Istanbul, Istanbul (Turkey)

    2016-06-15

    Purpose: In this study, a dosimetric comparison of the field-in-field (FIF) and intensity modulated radiation therapy (IMRT) techniques used for whole breast radiotherapy (WBRT) was made. The dosimetric accuracy of the treatment planning system (TPS) with the Anisotropic Analytical Algorithm (AAA) and Acuros XB (AXB) algorithms in predicting PTV and OAR doses was also investigated. Methods: Two treatment plans for left-sided breast cancer were generated for a Rando phantom. The FIF and IMRT plans were compared for doses in the PTV and in OAR volumes including the ipsilateral lung, heart, left ascending coronary artery, contralateral lung, and contralateral breast. PTV and OAR doses and homogeneity and conformality indexes were compared between the two techniques. The accuracy of the TPS dose calculation algorithms was tested by comparing PTV and OAR doses measured by thermoluminescent dosimetry with the doses calculated by the TPS using AAA and AXB for both techniques. Results: The IMRT plans had better conformality and homogeneity indexes than the FIF technique and spared the OARs better. While both algorithms overestimated PTV doses, they underestimated all OAR doses. For the IMRT plan, PTV dose overestimation of up to 2.5% was seen with the AAA algorithm, decreasing to 1.8% when the AXB algorithm was used. Based on the results of the anthropomorphic measurements for OAR doses, underestimation greater than 7% is possible with AAA. The results from AXB are much closer to measurement than those from AAA; however, underestimations of 4.8% were found at some points even with AXB. For the FIF plan, a similar trend was seen for PTV and OAR doses with both algorithms. Conclusion: When using the Eclipse TPS for breast cancer, AXB should be used instead of the AAA algorithm, bearing in mind that AXB may still underestimate all OAR doses.

  17. SU-F-T-452: Influence of Dose Calculation Algorithm and Heterogeneity Correction On Risk Categorization of Patients with Cardiac Implanted Electronic Devices Undergoing Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Iwai, P; Lins, L Nadler [AC Camargo Cancer Center, Sao Paulo (Brazil)

    2016-06-15

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD), or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses calculated with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA), and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the device. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by the algorithms was about 1 Gy, with or without heterogeneity correction. Maximum doses calculated with heterogeneity correction were equal to or higher than those obtained without it in 84% of the cases with PBC, 77% with AAA, and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.
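
    Risk categorization for CIED patients, as described above, typically crosses device dependency with the cumulative device dose. The sketch below uses illustrative 2 Gy / 5 Gy cut-offs in the spirit of published CIED management guidance; the abstract does not state the exact criteria the authors applied.

```python
def cied_risk_category(device_dose_gy, pacing_dependent):
    """Illustrative CIED risk matrix; thresholds are assumptions,
    not the criteria used in the study above."""
    if device_dose_gy < 2.0:
        risk = "low"
    elif device_dose_gy <= 5.0:
        risk = "medium"
    else:
        risk = "high"
    if pacing_dependent and risk != "high":
        # Pacing-dependent patients are escalated one level
        risk = "medium" if risk == "low" else "high"
    return risk

print(cied_risk_category(1.2, pacing_dependent=True))  # -> medium
```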

  18. SU-F-T-452: Influence of Dose Calculation Algorithm and Heterogeneity Correction On Risk Categorization of Patients with Cardiac Implanted Electronic Devices Undergoing Radiotherapy

    International Nuclear Information System (INIS)

    Iwai, P; Lins, L Nadler

    2016-01-01

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD), or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implanted electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses calculated with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA), and Acuros XB (AXB) algorithms, as well as heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the device. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by the algorithms was about 1 Gy, with or without heterogeneity correction. Maximum doses calculated with heterogeneity correction were equal to or higher than those obtained without it in 84% of the cases with PBC, 77% with AAA, and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used to predict the dose at the CIED, with heterogeneity correction.

  19. Superficial dose evaluation of four dose calculation algorithms

    Science.gov (United States)

    Cao, Ying; Yang, Xiaoyu; Yang, Zhen; Qiu, Xiaoping; Lv, Zhiping; Lei, Mingjun; Liu, Gui; Zhang, Zijian; Hu, Yongmei

    2017-08-01

    Accurate superficial dose calculation is of major importance because of skin toxicity in radiotherapy, the initial 2 mm of depth being considered the most clinically relevant. The aim of this study is to evaluate the superficial dose calculation accuracy of four algorithms commonly used in commercially available treatment planning systems (TPS) against Monte Carlo (MC) simulation and film measurements. The superficial dose in a simple geometrical phantom of 30 cm × 30 cm × 30 cm was calculated by PBC (Pencil Beam Convolution), AAA (Analytical Anisotropic Algorithm), and AXB (Acuros XB) in the Eclipse system and by CCC (Collapsed Cone Convolution) in the RayStation system, at a source-to-surface distance (SSD) of 100 cm and a field size (FS) of 10 × 10 cm2. The EGSnrc (BEAMnrc/DOSXYZnrc) code was used to simulate the central axis dose distribution of a Varian Trilogy accelerator, combined with measurements of the superficial dose distribution by an extrapolation method using multilayer radiochromic films, to estimate the dose calculation accuracy of the four algorithms in the superficial region specified in detail by the ICRU (International Commission on Radiation Units and Measurements) and the ICRP (International Commission on Radiological Protection). In the superficial region, good agreement was achieved between MC simulation and the film extrapolation method, with mean differences of less than 1%, 2%, and 5% for 0°, 30°, and 60°, respectively. The relative skin dose errors were 0.84%, 1.88%, and 3.90%; the mean dose discrepancies (0°, 30°, and 60°) between each of the four algorithms and MC simulation were (2.41±1.55%, 3.11±2.40%, and 1.53±1.05%), (3.09±3.00%, 3.10±3.01%, and 3.77±3.59%), (3.16±1.50%, 8.70±2.84%, and 18.20±4.10%), and (14.45±4.66%, 10.74±4.54%, and 3.34±3.26%) for AXB, CCC, AAA, and PBC, respectively. Monte Carlo simulation verified the feasibility of superficial dose measurements by multilayer Gafchromic films. And the rank
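
    The multilayer-film extrapolation used above amounts to measuring dose at several shallow effective depths and extrapolating the trend to the surface. A minimal sketch with made-up layer depths and doses:

```python
import numpy as np

# Effective depths (mm) of stacked radiochromic film layers and their
# measured doses (relative units) -- illustrative values only
depths_mm = np.array([0.15, 0.45, 0.75])
doses = np.array([0.52, 0.61, 0.68])

# Linear extrapolation to zero depth estimates the surface dose
slope, intercept = np.polyfit(depths_mm, doses, 1)
print(f"extrapolated surface dose: {intercept:.3f}")
```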

  20. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    Science.gov (United States)

    Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu

    2016-01-08

    Small fields (smaller than 4 × 4 cm2) are used in stereotactic and conformal treatments, where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media often involves larger discrepancies, the algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report evaluates the accuracy of four model-based algorithms against measurement: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-XiO, and Acuros XB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse. Measurements are made using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons at square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation against the measurement, is calculated. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually reduces as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity is far from Dmax. All algorithms also show maximum deviation in lower-density materials compared with high-density materials.
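
    The %NRMSD figure of merit above summarizes the deviation of a whole calculated depth-dose curve from measurement. Its exact normalization is not spelled out in the abstract; a common construction (RMS deviation normalized to the measured maximum) is sketched here.

```python
import numpy as np

def pct_nrmsd(measured, calculated):
    """Percentage normalized RMS deviation of a calculated CADD curve
    against measurement; the normalization convention is an assumption."""
    measured = np.asarray(measured, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()
```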

  1. Measured Sonic Boom Signatures Above and Below the XB-70 Airplane Flying at Mach 1.5 and 37,000 Feet

    Science.gov (United States)

    Maglieri, Domenic J.; Henderson, Herbert R.; Tinetti, Ana F.

    2011-01-01

    During the 1966-67 Edwards Air Force Base (EAFB) National Sonic Boom Evaluation Program, a series of in-flight flow-field measurements were made above and below the USAF XB-70 using an instrumented NASA F-104 aircraft with a specially designed nose probe. These were accomplished in the three XB-70 flights at about Mach 1.5 at about 37,000 ft. and gross weights of about 350,000 lbs. Six supersonic passes with the F-104 probe aircraft were made through the XB-70 shock flow-field; one above and five below the XB-70. Separation distances ranged from about 3000 ft. above and 7000 ft. to the side of the XB-70 and about 2000 ft. and 5000 ft. below the XB-70. Complex near-field "sawtooth-type" signatures were observed in all cases. At ground level, the XB-70 shock waves had not coalesced into the two-shock classical sonic boom N-wave signature, but contained three shocks. Included in this report is a description of the generating and probe airplanes, the in-flight and ground pressure measuring instrumentation, the flight test procedure and aircraft positioning, surface and upper air weather observations, and the six in-flight pressure signatures from the three flights.

  2. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 04: On 3D Fabrication of Phantoms and Experimental Verification of Patient Dose Computation Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Rao; Zavan, Rodolfo; McGeachy, Philip; Madamesila, Joseph; Villarreal-Barajas, Jose Eduardo [Radiation Oncology, Washington University School of Medicine, St Louis, MO, USA, Instituto de Física de São Carlos, Universidade de São Paulo, SP, Brazil, CancerCare Manitoba, Winnipeg,Manitoba, Physics and Astronomy, University of Calgary, Calgary, Alberta (Canada)

    2016-08-15

    Purpose: The transport-based dose calculation algorithm Acuros XB (AXB) has been shown to accurately account for heterogeneities, mostly through comparisons with Monte Carlo simulations. This study aims at providing additional experimental verification of AXB for flattened and unflattened clinical energies in low-density phantoms of the same material. Materials and Methods: Polystyrene slabs were created using a bench-top 3D printer. Six slabs were printed at densities varying from 0.23 g/cm3 to 0.68 g/cm3, corresponding to different-density humanoid tissues. The slabs were used to form different single-layer and multilayer geometries. Dose was calculated with AXB 11.0.31 for 6 MV and 15 MV flattened and 6FFF (flattening filter free) energies for field sizes of 2×2 cm2 and 5×5 cm2. The phantoms, containing radiochromic EBT3 films, were irradiated. Absolute dose profiles and 2D gamma analyses were performed for 96 dose planes. Results: For all single-slab and multislab configurations and energies, absolute dose differences between the AXB calculation and film measurements remained <3% for both fields, with slightly poorer agreement in the penumbra. The gamma analysis pass rate at 2%/2 mm averaged 98% over all combinations of fields, phantoms, and photon energies. Conclusions: The transport-based dose algorithm AXB is in good agreement with experimental measurements for small field sizes using 6 MV, 6FFF, and 15 MV beams adjacent to low-density heterogeneous media. This work provides sufficient experimental ground to support the use of AXB for heterogeneous dose calculation purposes.

  3. 2,3,7,8-tetrachlorodibenzo-p-dioxin: examination of biochemical effects involved in the proliferation and differentiation of XB cells

    International Nuclear Information System (INIS)

    Knutson, J.C.; Poland, A.

    1984-01-01

    XB, a cell line derived from a mouse teratoma, differentiates into stratified squamous epithelium when incubated with 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). To examine the mediators of this response, the effects produced by TCDD were compared with those elicited by other compounds that stimulate epidermal proliferation and/or differentiation in mice. XB/3T3 cultures keratinize when incubated with cholera toxin, epidermal growth factor (EGF), or TCDD, but not 12-O-tetradecanoylphorbol-13-acetate (TPA). Incubation of XB cells with TCDD for 48 hours produces an increase in thymidine incorporation, a response neither as large nor as rapid as that produced by cholera toxin, TPA, or EGF. Although both cholera toxin and TCDD stimulate differentiation and thymidine incorporation in XB/3T3 cultures, cholera toxin increases cAMP 30-fold in these cells, while TCDD does not affect cAMP accumulation. Inhibitors of arachidonic acid metabolism, which block epidermal proliferative responses to TPA in vivo, do not prevent the differentiation of XB cells in response to TCDD. In XB/3T3 cultures, TPA stimulates arachidonic acid release at all times tested (1, 6, and 24 hours) and increases the incorporation of 32Pi into total phospholipids and phosphatidylcholine after 3 hours. In contrast, TCDD affects neither arachidonic acid release nor the turnover of phosphatidylinositol or phosphatidylcholine at any of the times tested. Although biochemical effects suggested as part of the mechanism of TCDD, and produced by other epidermal proliferative compounds in XB cells, were examined, no mediator of the TCDD-induced differentiation of XB/3T3 cultures was observed.

  4. Overexpression of Rice Auxilin-Like Protein, XB21, Induces Necrotic Lesions, up-Regulates Endocytosis-Related Genes, and Confers Enhanced Resistance to Xanthomonas oryzae pv. oryzae.

    Science.gov (United States)

    Park, Chang-Jin; Wei, Tong; Sharma, Rita; Ronald, Pamela C

    2017-12-01

    The rice immune receptor XA21 confers resistance to the bacterial pathogen Xanthomonas oryzae pv. oryzae (Xoo). To elucidate the mechanism of XA21-mediated immunity, we previously performed a yeast two-hybrid screen for XA21 interactors and identified XA21 binding protein 21 (XB21). Here, we report that XB21 is an auxilin-like protein predicted to function in clathrin-mediated endocytosis. We demonstrate an XA21/XB21 in vivo interaction using co-immunoprecipitation in rice. Overexpression of XB21 in the rice variety Kitaake and in a Kitaake transgenic line expressing XA21 confers a necrotic lesion phenotype and enhances resistance to Xoo. RNA sequencing reveals that XB21 overexpression results in the differential expression of 8735 genes (4939 genes up-regulated and 3846 genes down-regulated) (≥2-fold, FDR ≤ 0.01). The up-regulated genes include those predicted to be involved in 'cell death' and 'vesicle-mediated transport'. These results indicate that XB21 plays a role in the plant immune response and in the regulation of cell death. The up-regulation of genes controlling 'vesicle-mediated transport' in XB21 overexpression lines is consistent with a functional role for XB21 as an auxilin.

  5. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    Energy Technology Data Exchange (ETDEWEB)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro, E-mail: m_nkmr@kuhp.kyoto-u.ac.jp; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods: the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5 %pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1 %pt when XVMC and AXB were used were greater than those associated with the use of AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3 %pts, regardless of the dose
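
    Dose-volume parameters such as D95 can be read from a sorted voxel dose array (a cumulative DVH). A minimal sketch assuming equal-volume voxels; `dvh_dx` is a hypothetical helper, not the TPS's internal method.

```python
import numpy as np

def dvh_dx(dose_voxels, x_pct):
    """D_x: the minimum dose received by the hottest x% of the volume,
    assuming equal-volume voxels."""
    d = np.sort(np.asarray(dose_voxels, dtype=float))[::-1]  # descending
    idx = int(np.ceil(x_pct / 100.0 * d.size)) - 1
    return d[idx]

# The parameters compared in the study:
# d95, d90, d50, d2 = (dvh_dx(ptv_dose, x) for x in (95, 90, 50, 2))
```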

  6. Dosimetric comparison of lung stereotactic body radiotherapy treatment plans using averaged computed tomography and end-exhalation computed tomography images: Evaluation of the effect of different dose-calculation algorithms and prescription methods

    International Nuclear Information System (INIS)

    Mitsuyoshi, Takamasa; Nakamura, Mitsuhiro; Matsuo, Yukinori; Ueki, Nami; Nakamura, Akira; Iizuka, Yusuke; Mampuya, Wambaka Ange; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    The purpose of this article is to quantitatively evaluate differences in dose distributions calculated using various computed tomography (CT) datasets, dose-calculation algorithms, and prescription methods in stereotactic body radiotherapy (SBRT) for patients with early-stage lung cancer. Data on 29 patients with early-stage lung cancer treated with SBRT were retrospectively analyzed. Averaged CT (Ave-CT) and expiratory CT (Ex-CT) images were reconstructed for each patient using 4-dimensional CT data. Dose distributions were initially calculated using the Ave-CT images and recalculated (in the same monitor units [MUs]) by employing Ex-CT images with the same beam arrangements. The dose-volume parameters, including D95, D90, D50, and D2 of the planning target volume (PTV), were compared between the 2 image sets. To explore the influence of dose-calculation algorithms and prescription methods on the differences in dose distributions evident between Ave-CT and Ex-CT images, we calculated dose distributions using the following 3 different algorithms: x-ray Voxel Monte Carlo (XVMC), Acuros XB (AXB), and the anisotropic analytical algorithm (AAA). We also used 2 different dose-prescription methods: the isocenter prescription and the PTV periphery prescription methods. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data were within 3 percentage points (%pts) employing the isocenter prescription method, and within 1.5 %pts using the PTV periphery prescription method, irrespective of which of the 3 algorithms (XVMC, AXB, and AAA) was employed. The frequencies of dose-volume parameters differing by >1 %pt when XVMC and AXB were used were greater than those associated with the use of AAA, regardless of the dose-prescription method employed. All differences in PTV dose-volume parameters calculated using Ave-CT and Ex-CT data on patients who underwent lung SBRT were within 3 %pts, regardless of the dose-calculation algorithm or the

  7. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogenity Algorithm in Volumetric Modulated Arc Therapy Plan for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    Energy Technology Data Exchange (ETDEWEB)

    Chung, J; Kim, J [Seoul National University Bundang Hospital, Seongnam, Kyeonggi-do (Korea, Republic of); Lee, J [Konkuk University Medical Center, Seoul, Seoul (Korea, Republic of); Kim, Y [Choonhae College of Health Sciences, Ulsan (Korea, Republic of)

    2014-06-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm on plans for lung stereotactic ablative radiotherapy (SABR) using a flattening filter-free (FFF) beam. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. Technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared with the AAA plans. The maximum dose to the lung was slightly higher with AXB than with AAA. For both algorithms, the values of V5, V10, and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans; these parameters for the contralateral lung were comparable. The differences in maximum dose for the spinal cord and heart were also small. The computation time of AXB was shorter than that of AAA by a relative difference of 13.7%. The average monitor units (MUs) over all patients were higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are largest in heterogeneous regions of low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung SABR with an FFF beam, especially for VMAT planning. When calculating dose in media of different densities, therefore, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  8. SU-E-T-131: Dosimetric Impact and Evaluation of Different Heterogenity Algorithm in Volumetric Modulated Arc Therapy Plan for Stereotactic Ablative Radiotherapy Lung Treatment with the Flattening Filter Free Beam

    International Nuclear Information System (INIS)

    Chung, J; Kim, J; Lee, J; Kim, Y

    2014-01-01

    Purpose: The present study aimed to investigate the dosimetric impact of the anisotropic analytic algorithm (AAA) and the Acuros XB (AXB) algorithm on plans for lung stereotactic ablative radiotherapy (SABR) using a flattening filter-free (FFF) beam. Methods: We retrospectively analyzed 10 patients. The dosimetric parameters for the target and organs at risk (OARs) from the treatment plans calculated with these dose calculation algorithms were compared. Technical parameters, such as the computation times and the total monitor units (MUs), were also evaluated. Results: A comparison of DVHs from AXB and AAA showed that the AXB plans produced a higher maximum PTV dose, by an average of 4.40% with statistical significance, but a slightly lower mean PTV dose, by an average of 5.20%, compared with the AAA plans. The maximum dose to the lung was slightly higher with AXB than with AAA. For both algorithms, the values of V5, V10, and V20 for the ipsilateral lung were higher in the AXB plans than in the AAA plans; these parameters for the contralateral lung were comparable. The differences in maximum dose for the spinal cord and heart were also small. The computation time of AXB was shorter than that of AAA by a relative difference of 13.7%. The average monitor units (MUs) over all patients were higher in the AXB plans than in the AAA plans. These results indicate that the differences between AXB and AAA are largest in heterogeneous regions of low density. Conclusion: AXB provided advantages such as calculation accuracy and reduced computation time in lung SABR with an FFF beam, especially for VMAT planning. When calculating dose in media of different densities, therefore, careful attention should be paid to the impact of different heterogeneity correction algorithms. The authors report no conflicts of interest.

  9. Induction of truncated form of tenascin-X (XB-S) through dissociation of HDAC1 from SP-1/HDAC1 complex in response to hypoxic conditions

    International Nuclear Information System (INIS)

    Kato, Akari; Endo, Toshiya; Abiko, Shun; Ariga, Hiroyoshi; Matsumoto, Ken-ichi

    2008-01-01

    XB-S is an amino-terminally truncated protein of tenascin-X (TNX) in humans. The levels of the XB-S transcript, but not those of TNX transcripts, were increased upon hypoxia. We identified a critical hypoxia-responsive element (HRE) localized to a GT-rich element positioned from −1410 to −1368 in the XB-S promoter. Using an electrophoretic mobility shift assay (EMSA), we found that the HRE forms a DNA-protein complex with Sp1 and that the GG positioned at −1379 and −1378 is essential for the binding of the nuclear complex. Transfection experiments in SL2 cells, an Sp1-deficient model system, with an Sp1 expression vector demonstrated that the region from −1380 to −1371, an HRE, is sufficient for efficient activation of the XB-S promoter upon hypoxia. The EMSA and a chromatin immunoprecipitation (ChIP) assay showed that Sp1, together with the transcriptional repressor histone deacetylase 1 (HDAC1), binds to the HRE of the XB-S promoter under normoxia and that hypoxia causes dissociation of HDAC1 from the Sp1/HDAC1 complex. The HRE promoter activity was induced in the presence of a histone deacetylase inhibitor, trichostatin A, even under normoxia. Our results indicate that the hypoxia-induced activation of the XB-S promoter is regulated through dissociation of HDAC1 from an Sp1-binding HRE site.

  10. Force Tests of the Boeing XB-47 Full-Scale Empennage in the Ames 40- by 80-Foot Wind Tunnel

    Science.gov (United States)

    Hunton, Lynn W.

    1947-01-01

    A wind-tunnel investigation of the Boeing XB-47 full-scale empennage was conducted to provide, prior to flight tests, data required on the effectiveness of the elevator and rudder. The XB-47 airplane is a jet-propelled medium bomber having wing and tail surfaces swept back 35 degrees. The investigation included tests of the effectiveness of the elevator with normal straight sides, with a bulged trailing edge, and with a modified hinge-line gap, and tests of the effectiveness of the rudder with a normal straight-sided tab and with a bulged tab.

  11. Review of semileptonic b-hadron decays excluding the |V_xb| and R(D(*)) measurements

    CERN Document Server

    Owen, Patrick

    2017-01-01

    A review of semileptonic b-hadron decays that are not related to the R(D(*)) and |V_xb| anomalies is presented. A couple of long-standing puzzles in B → D** ℓν decays are revisited, and potential issues and advantages of studying B_s^0 and Λ_b^0 semileptonic decays are discussed.

  12. Radiobiological impact of dose calculation algorithms on biologically optimized IMRT lung stereotactic body radiation therapy plans

    International Nuclear Information System (INIS)

    Liang, X.; Penagaricano, J.; Zheng, D.; Morrill, S.; Zhang, X.; Corry, P.; Griffin, R. J.; Han, E. Y.; Hardee, M.; Ratanatharathom, V.

    2016-01-01

    The aim of this study is to evaluate the radiobiological impact of the Acuros XB (AXB) vs. Anisotropic Analytic Algorithm (AAA) dose calculation algorithms in combined dose-volume and biologically optimized IMRT plans for SBRT treatments of non-small-cell lung cancer (NSCLC) patients. Twenty-eight patients with NSCLC previously treated with SBRT were re-planned using Varian Eclipse (V11) with a combined dose-volume and biological optimization IMRT sliding-window technique. The total dose prescribed to the PTV was 60 Gy at 12 Gy per fraction. The plans were initially optimized using the AAA algorithm and then recomputed using AXB with the same MUs and MLC files to compare with the dose distribution of the original plans and assess the radiobiological as well as dosimetric impact of the two dose algorithms. The Poisson Linear-Quadratic (PLQ) and Lyman-Kutcher-Burman (LKB) models were used for estimating the tumor control probability (TCP) and normal tissue complication probability (NTCP), respectively. The influence of the model parameter uncertainties on the TCP and NTCP differences between AAA and AXB plans was studied by applying different sets of published model parameters. Patients were grouped into peripheral and centrally located tumors to evaluate the impact of tumor location. PTV dose was lower in the re-calculated AXB plans than in the AAA plans. The median differences in PTV D95% were 1.7 Gy (range: 0.3, 6.5 Gy) and 1.0 Gy (range: 0.6, 4.4 Gy) for peripheral and centrally located tumors, respectively. The median differences in PTV mean dose were 0.4 Gy (range: 0.0, 1.9 Gy) and 0.9 Gy (range: 0.0, 4.3 Gy) for peripheral and centrally located tumors, respectively. TCP was also lower in the AXB-recalculated plans than in the AAA plans. The median (range) of the TCP differences for 30-month local control was 1.6% (0.3%, 5.8%) for peripheral tumors and 1.3% (0.5%, 3.4%) for centrally located tumors. The lower
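
    The LKB NTCP model named above maps a generalized equivalent uniform dose (gEUD) through a normal-distribution response curve. A hedged sketch follows; TD50, m, and the volume-effect parameter a are organ-specific fit parameters whose published values vary, and none are taken from this study.

```python
import numpy as np
from scipy.stats import norm

def geud(dose_voxels, a):
    """Generalized EUD with volume-effect parameter a (equal-volume voxels)."""
    d = np.asarray(dose_voxels, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit response around TD50 with slope 1/m."""
    t = (eud - td50) / (m * td50)
    return norm.cdf(t)
```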

  13. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    International Nuclear Information System (INIS)

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y

    2015-01-01

    Purpose: Stereotactic body radiotherapy (SBRT) combined with transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, the anisotropic analytical algorithm (AAA), and the Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithms, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom that included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. The AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithms underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can thus yield an approximately 10% increase in tumor dose without an increase in dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in dose in the Lipiodol region than the AAA and AXB algorithms. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  14. SU-D-BRB-07: Lipiodol Impact On Dose Distribution in Liver SBRT After TACE

    Energy Technology Data Exchange (ETDEWEB)

    Kawahara, D; Ozawa, S; Hioki, K; Suzuki, T; Lin, Y; Okumura, T; Ochi, Y; Nakashima, T; Ohno, Y; Kimura, T; Murakami, Y; Nagata, Y [Hiroshima University, Hiroshima, Hiroshima (Japan)

    2015-06-15

    Purpose: Stereotactic body radiotherapy (SBRT) combined with transarterial chemoembolization (TACE) with Lipiodol is expected to improve local control. This study aims to evaluate the impact of Lipiodol on dose distribution by comparing the dosimetric performance of the Acuros XB (AXB) algorithm, the anisotropic analytical algorithm (AAA), and the Monte Carlo (MC) method using a virtual heterogeneous phantom and a treatment plan for liver SBRT after TACE. Methods: The dose distributions calculated using the AAA and AXB algorithms, both in Eclipse (ver. 11; Varian Medical Systems, Palo Alto, CA), and EGSnrc-MC were compared. First, the inhomogeneity correction accuracy of the AXB algorithm and AAA was evaluated by comparing the percent depth dose (PDD) obtained from the algorithms with that from the MC calculations using a virtual inhomogeneity phantom that included water and Lipiodol. Second, the dose distribution of a liver SBRT patient treatment plan was compared between the calculation algorithms. Results: In the virtual phantom, compared with the MC calculations, AAA underestimated the doses just before and in the Lipiodol region by 5.1% and 9.5%, respectively, and overestimated the doses behind the region by 6.0%. The AXB algorithm underestimated the doses just before and in the Lipiodol region by 4.5% and 10.5%, respectively, and overestimated the doses behind the region by 4.2%. In the SBRT plan, the AAA and AXB algorithms underestimated the maximum doses in the Lipiodol region by 9.0% in comparison with the MC calculations. In clinical cases, the dose enhancement in the Lipiodol region can thus yield an approximately 10% increase in tumor dose without an increase in dose to normal tissue. Conclusion: The MC method demonstrated a larger increase in dose in the Lipiodol region than the AAA and AXB algorithms. Notably, dose enhancement was observed in the tumor area; this may lead to a clinical benefit.

  15. Updating the orbital ephemeris of the dipping source XB 1254-690 and the distance to the source

    Science.gov (United States)

    Gambino, Angelo F.; Iaria, Rosario; Di Salvo, Tiziana; Matranga, Marco; Burderi, Luciano; Pintore, Fabio; Riggio, Alessandro; Sanna, Andrea

    2017-09-01

    XB 1254-690 is a dipping low-mass X-ray binary system hosting a neutron star and showing type I X-ray bursts. We aim at obtaining a more accurate orbital ephemeris and at constraining the orbital period derivative of the system for the first time. In addition, we want to better constrain the distance to the source in order to place the system in a well-defined evolutionary scenario. We apply, for the first time, an orbital timing technique to XB 1254-690, using the arrival times of the dips present in light curves collected during 26 yr of pointed X-ray observations acquired by different space missions. We estimate the dip arrival times using a statistical method that weights the count rate inside the dip with respect to the level of persistent emission outside the dip. We fit the obtained delays as a function of orbital cycle with both a linear and a quadratic function. We infer the orbital ephemeris of XB 1254-690, improving the accuracy of the orbital period with respect to previous estimates. We infer a mass of M2 = 0.42 ± 0.04 M☉ for the donor star, in agreement with estimates already present in the literature, assuming that the star is in thermal equilibrium while it transfers part of its mass via the inner Lagrangian point, and assuming a neutron star mass of 1.4 M☉. Using these assumptions, we also constrain the distance to the source, finding a value of 7.6 ± 0.8 kpc. Finally, we discuss the evolution of the system, suggesting that it is compatible with conservative mass transfer driven by magnetic braking.
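
    Fitting the dip-arrival delays against orbital cycle number with a quadratic, as described above, yields both a period correction (linear term) and the period derivative (quadratic term, via P_dot = 2*c2/P). The numbers below are synthetic, not the XB 1254-690 measurements.

```python
import numpy as np

# Orbital cycle numbers and dip-arrival delays (s) -- synthetic values
cycles = np.array([0.0, 12000.0, 25000.0, 38000.0, 52000.0])
delays = np.array([0.0, 35.0, 80.0, 140.0, 210.0])

P0 = 0.1639 * 86400.0  # trial orbital period (s), roughly 3.9 h; illustrative
c2, c1, c0 = np.polyfit(cycles, delays, 2)

# delay(N) = c0 + c1*N + c2*N**2, so the corrected period is P0 + c1
# and the dimensionless period derivative is P_dot = 2*c2 / P0
print(f"period correction: {c1:.3e} s, P_dot: {2.0 * c2 / P0:.3e}")
```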

  16. NNLO QCD corrections to the polarized top quark decay t(↑) → X_b + W^+

    Science.gov (United States)

    Czarnecki, A.; Groote, S.; Körner, J. G.; Piclum, J. H.

    2018-05-01

    We compute the next-to-next-to-leading order (NNLO) QCD corrections to the decay t(↑) → X_b + W^+ of a polarized top quark. The spin-momentum correlation in this quasi-two-body decay is described by the polar angle distribution dΓ/d cos θ_P = (Γ/2)(1 + P_t α_P cos θ_P), where P_t is the polarization of the top quark and α_P denotes the asymmetry parameter of the decay. For the latter we find α_P^NNLO = 0.3792 ± 0.0037.

  17. A phantom study on fetal dose reducing factors in pregnant patients with breast cancer during radiotherapy treatment

    Directory of Open Access Journals (Sweden)

    Akın Ogretici

    2017-01-01

    Purpose: This study aims to investigate the factors that reduce fetal dose in pregnant patients with breast cancer throughout their radiation treatment. Two factors available in a standard radiation oncology center are considered: the treatment planning system (TPS) and simple shielding, for the intensity modulated radiation therapy technique. Materials and Methods: The TPS factor was evaluated with two different planning algorithms: the anisotropic analytical algorithm and Acuros XB (external beam). To evaluate the shielding factor, a standard radiological-purpose lead apron was chosen. For both studies, thermoluminescence dosimeters were used to measure point doses, and an Alderson RANDO phantom was used to simulate a pregnant female patient. Thirteen measurement points were chosen in the 32nd slice of the phantom to cover all possible locations of a fetus up to the 8th week of gestation. Results: The results show that both TPS algorithms are incapable of calculating the fetal doses and are therefore unable to reduce them at the planning stage. Shielding with a standard lead apron, however, provided slight radiation protection of the fetus (about 4.7%), decreasing the mean fetal dose from 84.8 mGy to 80.8 mGy, which cannot be disregarded in case of fetal irradiation. Conclusions: Using a lead apron to shield the abdominal region of a pregnant patient during breast irradiation showed a minor advantage; however, its possible side effects (i.e., increased scattered radiation and skin dose) should be investigated further to solidify its benefits.

  18. Collinear factorization for deep inelastic scattering structure functions at large Bjorken x_B

    International Nuclear Information System (INIS)

    Accardi, Alberto; Qiu, Jian-Wei

    2008-01-01

    http://dx.doi.org/10.1088/1126-6708/2008/07/090 We examine the uncertainty of perturbative QCD factorization for hadron structure functions in deep inelastic scattering at a large value of the Bjorken variable x_B. We analyze the target mass correction to the structure functions by using the collinear factorization approach in momentum space. We express the long-distance physics of the structure functions and the leading target mass corrections in terms of parton distribution functions with the standard operator definition. We compare our result with existing work on the target mass correction. We also discuss the impact of a final-state jet function on the extraction of parton distributions at large fractional momentum x.

  19. Altitude-Wind-Tunnel Investigation of the 19B-2, 19B-8 and 19XB-1 Jet-Propulsion Engines. 4; Analysis of Compressor Performance

    Science.gov (United States)

    Dietz, Robert O.; Kuenzig, John K.

    1947-01-01

    Investigations were conducted in the Cleveland altitude wind tunnel to determine the performance and operational characteristics of the 19B-2, 19B-8, and 19XB-1 turbojet engines. One objective was to determine the effect of altitude, flight Mach number, and tail-pipe-nozzle area on the performance characteristics of the six-stage and ten-stage axial-flow compressors of the 19B-8 and 19XB-1 engines, respectively. The data were obtained over a range of simulated altitudes and flight Mach numbers. At each simulated flight condition the engine was run over its full operable range of speeds. Performance characteristics of the 19B-8 and 19XB-1 compressors for the range of operation obtainable in the turbojet-engine installation are presented. Compressor characteristics are presented as functions of air flow corrected to sea-level conditions, compressor Mach number, and compressor load coefficient. For the range of compressor operation investigated, changes in Reynolds number had no measurable effect on the relations among compressor Mach number, corrected air flow, compressor load coefficient, compressor pressure ratio, and compressor efficiency. The operating lines for the 19B-8 compressor lay on the low-air-flow side of the region of maximum compressor efficiency; the 19B-8 compressor operated at higher average pressure coefficients per stage and produced a lower over-all pressure ratio than did the 19XB-1 compressor.

  20. Structure and superconductivity of double-doped Mg1-x(Al0.5Li0.5)xB2

    DEFF Research Database (Denmark)

    Xu, G.J.; Grivel, Jean-Claude; Abrahamsen, A.B.

    2003-01-01

    A series of polycrystalline samples of Mg1-x(Al0.5Li0.5)xB2 (0 ≤ x ≤ 0.6) were prepared by a solid state reaction method, and their structure, superconducting transition temperature, and magneto-transport properties were investigated by means of X-ray diffraction (XRD), ac susceptibility, and resistance in varied magnetic fields. The double doping leads to decreases in both the lattice parameters a and c. The superconducting transition temperature (Tc) decreases with double doping, but Tc is systematically higher than that of singly Al-doped samples. It is suggested that the hole band filling has little effect on Tc at high doping levels, while the disorder induced by doping plays an important role in suppressing Tc. A systematic comparison with Al-doped MgB2 of the structure, superconducting transition, and irreversibility field is made.

  1. Moessbauer spectroscopy on amorphous FexNi80-xB20 after neutron irradiation

    International Nuclear Information System (INIS)

    Sitek, J.; Miglierini, M.

    1985-01-01

    Amorphous FexNi80-xB20 glassy alloys (x = 40, 50, 60, and 70) irradiated with fast neutrons in the fluence range of 10^14 to 10^19 cm^-2 were investigated by Moessbauer spectroscopy. There were significant changes in the Moessbauer spectrum parameters of the samples irradiated to 10^19 cm^-2, except for Fe40Ni40B20. This corresponds to a change in the direction of the easy axis of magnetization. The measurements show that the resistance of the Fe-Ni-B system against neutron irradiation improves with increasing Ni content up to a certain point.

  2. Xylan degradation by the human gut Bacteroides xylanisolvens XB1A(T) involves two distinct gene clusters that are linked at the transcriptional level.

    Science.gov (United States)

    Despres, Jordane; Forano, Evelyne; Lepercq, Pascale; Comtet-Marre, Sophie; Jubelin, Gregory; Chambon, Christophe; Yeoman, Carl J; Berg Miller, Margaret E; Fields, Christopher J; Martens, Eric; Terrapon, Nicolas; Henrissat, Bernard; White, Bryan A; Mosoni, Pascale

    2016-05-04

    Plant cell wall (PCW) polysaccharides and especially xylans constitute an important part of human diet. Xylans are not degraded by human digestive enzymes in the upper digestive tract and therefore reach the colon where they are subjected to extensive degradation by some members of the symbiotic microbiota. Xylanolytic bacteria are the first degraders of these complex polysaccharides and they release breakdown products that can have beneficial effects on human health. In order to understand better how these bacteria metabolize xylans in the colon, this study was undertaken to investigate xylan breakdown by the prominent human gut symbiont Bacteroides xylanisolvens XB1A(T). Transcriptomic analyses of B. xylanisolvens XB1A(T) grown on insoluble oat-spelt xylan (OSX) at mid- and late-log phases highlighted genes in a polysaccharide utilization locus (PUL), hereafter called PUL 43, and genes in a fragmentary remnant of another PUL, hereafter referred to as rPUL 70, which were highly overexpressed on OSX relative to glucose. Proteomic analyses supported the up-regulation of several genes belonging to PUL 43 and showed the important over-production of a CBM4-containing GH10 endo-xylanase. We also show that PUL 43 is organized in two operons and that the knockout of the PUL 43 sensor/regulator HTCS gene blocked the growth of the mutant on insoluble OSX and soluble wheat arabinoxylan (WAX). The mutation not only repressed gene expression in the PUL 43 operons but also repressed gene expression in rPUL 70. This study shows that xylan degradation by B. xylanisolvens XB1A(T) is orchestrated by one PUL and one PUL remnant that are linked at the transcriptional level. Coupled to studies on other xylanolytic Bacteroides species, our data emphasize the importance of one peculiar CBM4-containing GH10 endo-xylanase in xylan breakdown and that this modular enzyme may be used as a functional marker of xylan degradation in the human gut. Our results also suggest that B. xylanisolvens
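
    The differential-expression cut quoted above (≥2-fold, FDR ≤ 0.01) is, computationally, a simple filter over a results table. The sketch assumes a DESeq2-style pandas frame with 'log2FoldChange' and 'padj' columns; the column names and framework are assumptions, not details from the study.

```python
import numpy as np
import pandas as pd

def de_genes(results: pd.DataFrame, fold_change=2.0, fdr=0.01):
    """Split significant genes into up- and down-regulated sets."""
    sig = results[results["padj"] <= fdr]
    lfc = np.log2(fold_change)
    up = sig[sig["log2FoldChange"] >= lfc]
    down = sig[sig["log2FoldChange"] <= -lfc]
    return up, down
```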

  3. Use of computational simulation for evaluation of 3D printed phantoms for application in clinical dosimetry

    International Nuclear Information System (INIS)

    Valeriano, Caio César Santos

    2017-01-01

    The purpose of a phantom is to represent the change in the radiation field caused by absorption and scattering in a given tissue or organ of interest. Its geometrical characteristics and composition should be as close as possible to those of its natural analogue. Anatomical structures can be transformed into 3D virtual objects by medical imaging techniques (e.g., computed tomography) and printed by rapid prototyping using materials such as polylactic acid (PLA). Production for specific patients requires geometric accuracy with respect to the individual's anatomy and tissue equivalence, so that usable measurements can be made, as well as insensitivity to radiation effects. The objective of this work was to evaluate the behavior of 3D printed materials exposed to different photon beams, with emphasis on a radiotherapy beam quality (6 MV), aiming at application in clinical dosimetry. For this, 30 thermoluminescent dosimeters of LiF:Mg,Ti were used. The equivalence between PMMA and printed PLA for the thermoluminescent response of 30 CaSO4:Dy dosimeters was also analyzed. The irradiations with radiotherapy photon beams were simulated using the Eclipse treatment planning system, with the Anisotropic Analytical Algorithm and the Acuros XB Advanced Dose Calculation algorithm. In addition to the Eclipse calculations and dosimetric tests, computational simulations were performed using the MCNP5 code to calculate the attenuation coefficient of printed plates exposed to different radiodiagnostic X-ray qualities and to develop a computational model of the 3D printed plates. (author)
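
    Deriving a linear attenuation coefficient from transmission through a printed plate, as in the MCNP5 simulations above, follows the Beer-Lambert law mu = -ln(I/I0)/t. A minimal sketch with placeholder transmission values:

```python
import numpy as np

# Relative transmitted intensity through a printed plate -- placeholder values
i0, i = 1.00, 0.82
thickness_cm = 1.0

mu = -np.log(i / i0) / thickness_cm  # linear attenuation coefficient (1/cm)
print(f"mu = {mu:.4f} cm^-1")
```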

  4. Use of computational simulation for evaluation of 3D printed phantoms for application in clinical dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Valeriano, Caio César Santos

    2017-07-01

    The purpose of a phantom is to represent the change in the radiation field caused by absorption and scattering in a given tissue or organ of interest. Its geometrical characteristics and composition should be as close as possible to those of its natural analogue. Anatomical structures can be transformed into 3D virtual objects by medical imaging techniques (e.g., computed tomography) and printed by rapid prototyping using materials such as polylactic acid (PLA). Production for specific patients requires geometric accuracy with respect to the individual's anatomy and tissue equivalence, so that usable measurements can be made, as well as insensitivity to radiation effects. The objective of this work was to evaluate the behavior of 3D printed materials exposed to different photon beams, with emphasis on a radiotherapy beam quality (6 MV), aiming at application in clinical dosimetry. For this, 30 thermoluminescent dosimeters of LiF:Mg,Ti were used. The equivalence between PMMA and printed PLA for the thermoluminescent response of 30 CaSO4:Dy dosimeters was also analyzed. The irradiations with radiotherapy photon beams were simulated using the Eclipse treatment planning system, with the Anisotropic Analytical Algorithm and the Acuros XB Advanced Dose Calculation algorithm. In addition to the Eclipse calculations and dosimetric tests, computational simulations were performed using the MCNP5 code to calculate the attenuation coefficient of printed plates exposed to different radiodiagnostic X-ray qualities and to develop a computational model of the 3D printed plates. (author)

  5. WE-AB-207B-05: Correlation of Normal Lung Density Changes with Dose After Stereotactic Body Radiotherapy (SBRT) for Early Stage Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Q; Devpura, S; Feghali, K; Liu, C; Ajlouni, M; Movsas, B; Chetty, I [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: To investigate the correlation of normal lung CT density changes with dose accuracy and outcome after SBRT for patients with early-stage lung cancer. Methods: Dose distributions for patients originally planned and treated using a 1-D pencil-beam (PB-1D) dose algorithm were retrospectively recomputed using the following algorithms: 3-D pencil beam (PB-3D) and the model-based methods AAA, Acuros XB (AXB), and Monte Carlo (MC). The prescription dose was 12 Gy × 4 fractions. Planning CT images were rigidly registered to the follow-up CT datasets at 6-9 months after treatment, and the corresponding dose distributions were mapped from the planning to the follow-up CT images. Following the method of Palma et al. (1-2), Hounsfield unit (HU) changes in lung density in individual 5 Gy dose bins from 5-45 Gy were assessed in the peri-tumor region, defined as a uniform 3 cm expansion around the ITV (1). Results: There is a 10-15% displacement of the high-dose region (40-45 Gy) with the model-based algorithms relative to the PB method, due to electron scattering of dose away from the tumor into normal lung tissue (Fig. 1). Consequently, the high-dose lung region falls within the 40-45 Gy dose range, causing an increase in HU change in this region, as predicted by the model-based algorithms (Fig. 2). The patient with the highest HU change (~110) had mild radiation pneumonitis, and the patient with an HU change of ~80-90 had shortness of breath. No evidence of pneumonitis was observed for the 3 patients with smaller CT density changes (<50 HU). Changes in CT density, and the dose-response correlation, as computed with the model-based algorithms, are in excellent agreement with the findings of Palma et al. (1-2). Conclusion: Dose computed with PB (1D or 3D) algorithms was poorly correlated with clinically relevant CT density changes, as opposed to the model-based algorithms. A larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian
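
    The peri-tumoral analysis above reduces to averaging the HU change of voxels falling in each 5 Gy dose bin. A hedged sketch, assuming co-registered 1D arrays of mapped dose and HU change over the peri-tumor region:

```python
import numpy as np

def hu_change_by_dose_bin(dose_gy, delta_hu, lo=5.0, hi=45.0, width=5.0):
    """Mean HU change per dose bin, following the 5 Gy binning described above."""
    edges = np.arange(lo, hi + width, width)
    means = []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (dose_gy >= a) & (dose_gy < b)
        means.append(delta_hu[mask].mean() if mask.any() else np.nan)
    return edges, np.array(means)
```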

  6. The {P,Q,k+1}-Reflexive Solution to System of Matrix Equations AX=C, XB=D

    Directory of Open Access Journals (Sweden)

    Chang-Zhou Dong

    2015-01-01

    Let P ∈ C^(m×m) and Q ∈ C^(n×n) be Hermitian {k+1}-potent matrices; that is, P^(k+1) = P = P* and Q^(k+1) = Q = Q*, where * stands for the conjugate transpose of a matrix. A matrix X ∈ C^(m×n) is called {P,Q,k+1}-reflexive (antireflexive) if PXQ = X (PXQ = −X). In this paper, the system of matrix equations AX = C and XB = D subject to the {P,Q,k+1}-reflexive and antireflexive constraints is studied by converting it into two simpler cases: k = 1 and k = 2. We give the solvability conditions and the general solution to this system; in addition, the least-squares solution is derived; finally, the associated optimal approximation problem for a given matrix is considered.
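
    For the k = 1 case, P and Q are Hermitian projectors (P^2 = P = P*), and any X of the form P A Q is automatically {P,Q,2}-reflexive, since PXQ = P^2 A Q^2 = P A Q = X. A quick numerical check of this construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def hermitian_projector(n, r):
    """Random rank-r Hermitian idempotent (the k = 1 case: P^2 = P = P*)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q[:, :r] @ q[:, :r].conj().T

P = hermitian_projector(4, 2)
Q = hermitian_projector(3, 2)
X = P @ rng.normal(size=(4, 3)) @ Q  # {P,Q,2}-reflexive by construction
print(np.allclose(P @ X @ Q, X))     # -> True
```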

  7. Plasma heating due to X-B mode conversion in a cylindrical ECR plasma system

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, V.K.; Bora, D. [Institute for Plasma Research, Bhat, Gandhinagar, Gujarat (India)

    2004-07-01

    Extraordinary (X) mode conversion to the Bernstein wave near the upper hybrid resonance (UHR) layer plays an important role in plasma heating through cyclotron resonance. Wave generation at the UHR and parametric decay at high power have been observed during electron cyclotron resonance (ECR) heating experiments in toroidal magnetic fusion devices. A small linear system with the ECR and UHR layers inside has been used to conduct experiments on X-B conversion and the parametric decay process as a function of system parameters. Direct in situ probing is conducted, and plasma heating is evidenced by soft X-ray emission measurements. Experiments are performed with hydrogen plasma produced with 160-800 W of microwave power at an operating frequency of 2.45 GHz and a pressure of 10^-3 mbar. The axial magnetic field required for ECR is such that the resonant surface (B = 875 G) is situated on the geometrical axis of the plasma system. Experimental results are presented in the paper. (authors)
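
    The UHR layer mentioned above sits where the upper-hybrid frequency sqrt(f_ce^2 + f_pe^2) matches the launched 2.45 GHz wave. The sketch below locates that layer for illustrative radial field and density profiles; the profiles are assumptions, not the experiment's measured ones.

```python
import numpy as np

E, ME, EPS0 = 1.602e-19, 9.109e-31, 8.854e-12

def f_ce(b_tesla):
    """Electron cyclotron frequency (Hz)."""
    return E * b_tesla / (2.0 * np.pi * ME)

def f_pe(n_e):
    """Electron plasma frequency (Hz) for density in m^-3."""
    return np.sqrt(n_e * E ** 2 / (EPS0 * ME)) / (2.0 * np.pi)

# Illustrative radial profiles: 875 G (ECR at 2.45 GHz) on axis, both falling outward
r = np.linspace(0.0, 0.05, 500)            # m
B = 0.0875 * (1.0 - 2.0 * r)               # T
n = 1e17 * np.exp(-((r / 0.03) ** 2))      # m^-3

f_uh = np.sqrt(f_ce(B) ** 2 + f_pe(n) ** 2)
i = np.argmin(np.abs(f_uh - 2.45e9))
print(f"UHR layer near r = {r[i] * 1000:.1f} mm")
```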

  8. Superconducting properties of Zn and Al double-doped Mg1-x(Zn0.5Al0.5)xB2

    DEFF Research Database (Denmark)

    Xu, G.J.; Grivel, Jean-Claude; Abrahamsen, A.B.

    2004-01-01

    A series of polycrystalline samples of Mg1-x(Zn0.5Al0.5)xB2 (0 ≤ x ≤ 0.8) were prepared by the solid state reaction method and their structure, superconducting transition temperature (Tc) and transport properties were investigated by means of X-ray diffraction (XRD), ac susceptibility, magnetization and resistivity. The double doping leads to decreases in both the lattice parameters a and c, and Tc decreases with increasing dopant content. A systematic comparison with Al-doped and Li, Al double-doped MgB2 of structure and superconducting transition...

  9. Structure, elastic stiffness, and hardness of Os1-xRuxB2 solid solution transition-metal diborides

    KAUST Repository

    Kanoun, Mohammed; Hermet, Patrick; Goumri-Said, Souraya

    2012-01-01

    On the basis of recent experiments, solid solution transition-metal diborides have been proposed as new ultra-incompressible hard materials. We investigate, using density functional theory based methods, the structural and mechanical properties, electronic structure, and hardness of Os1-xRuxB2 solid solutions. A difference in chemical bonding occurs between the OsB2 and RuB2 diborides, leading to significantly different elastic properties: large bulk and shear moduli and high hardness for Os-rich diborides, and relatively small bulk and shear moduli and low hardness for Ru-rich diborides. The electronic structure and bonding characterization are also analyzed as a function of Ru-dopant concentration in the OsB2 lattice. © 2012 American Chemical Society.

  11. The low-lying quartet electronic states of group 14 diatomic borides XB (X = C, Si, Ge, Sn, Pb)

    Science.gov (United States)

    Pontes, Marcelo A. P.; de Oliveira, Marcos H.; Fernandes, Gabriel F. S.; Da Motta Neto, Joaquim D.; Ferrão, Luiz F. A.; Machado, Francisco B. C.

    2018-04-01

    The present work focuses on the characterization of the low-lying quartet electronic and spin-orbit states of the diatomic borides XB, in which X is an element of group 14 (C, Si, Ge, Sn, Pb). The wavefunction was obtained at the CASSCF/MRCI level with a quintuple-ζ quality basis set. Scalar relativistic effects were also taken into account. A systematic and comparative analysis of the spectroscopic properties for the title molecular series was carried out, showing that the (1)4Π→X4Σ- transition band is expected to be measurable by emission spectroscopy for the GeB, SnB and PbB molecules, as already observed for the lighter CB and SiB species.

  12. T-odd correlations in polarized top quark decays in the sequential decay t(↑) → Xb + W⁺(→ ℓ⁺ + νℓ) and in the quasi-three-body decay t(↑) → Xb + ℓ⁺ + νℓ

    Science.gov (United States)

    Fischer, M.; Groote, S.; Körner, J. G.

    2018-05-01

    We identify the T-odd structure functions that appear in the description of polarized top quark decays in the sequential decay t(↑) → Xb + W⁺(→ ℓ⁺ + νℓ) (two structure functions) and the quasi-three-body decay t(↑) → Xb + ℓ⁺ + νℓ (one structure function). A convenient measure of the magnitude of the T-odd structure functions is the contribution of the imaginary part Im gR of the right-chiral tensor coupling gR to the T-odd structure functions, which we work out. Contrary to the case of QCD, the NLO electroweak corrections to polarized top quark decays admit absorptive one-loop vertex contributions. We analytically calculate the imaginary parts of the relevant four electroweak one-loop triangle vertex diagrams and determine their contributions to the T-odd helicity structure functions that appear in the description of polarized top quark decays.

  13. Compass model-based quality assurance for stereotactic VMAT treatment plans.

    Science.gov (United States)

    Valve, Assi; Keyriläinen, Jani; Kulmala, Jarmo

    2017-12-01

    To use Compass as a model-based quality assurance (QA) tool for stereotactic body radiation therapy (SBRT) and stereotactic radiation therapy (SRT) volumetric modulated arc therapy (VMAT) treatment plans calculated with the Eclipse treatment planning system (TPS). Twenty clinical stereotactic VMAT SBRT and SRT treatment plans were blindly selected for evaluation, covering four treatment sites: prostate, brain, lung and body. The plans were evaluated against dose-volume histogram (DVH) parameters and 2D and 3D gamma analysis. The dose calculated with the Eclipse TPS was compared to the Compass calculated dose (CCD) and the Compass reconstructed dose (CRD). The maximum differences in mean dose of the planning target volume (PTV) were 2.7 ± 1.0% between the AAA and Acuros XB TPS doses, −7.6 ± 3.5% between the Eclipse TPS dose and the CCD dose, and −5.9 ± 3.7% between the Eclipse TPS dose and the CRD dose, for both Eclipse calculation algorithms. 2D gamma analysis was not able to identify all the cases that 3D gamma analysis flagged for further verification. Compass is suitable for QA of SBRT and SRT treatment plans. However, the QA process should include a wide set of DVH-based dose parameters, and 3D gamma analysis should be the preferred method when performing clinical patient QA. The results suggest that Compass should not be used for field sizes smaller than 3 × 3 cm², or that the beam model should be adjusted separately for small (FS ≤ 3 cm) and large (FS > 3 cm) field sizes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Enhanced stability of magic clusters: A case study of icosahedral Al12X, X = B, Al, Ga, C, Si, Ge, Ti, As

    International Nuclear Information System (INIS)

    Gong, X.G.; Kumar, V.

    1992-10-01

    We present results on the electronic structure and stability of some 40-valence-electron icosahedral Al12X (X = B, Al, Ga, C, Si, Ge, Ti and As) clusters within local spin density functional theory. It is shown that the stability of the Al13 cluster can be substantially enhanced by proper doping. For neutral clusters, substitution of C at the center of the icosahedron leads to the largest gain in energy. However, Al12B− is the most strongly bound in this family. These results are in agreement with recent experiments, which also find Al12B− to be highly abundant. (author). 12 refs, 4 figs, 2 tabs

  15. Competing anisotropies on the 3d sub-lattice of YNi4-xCoxB compounds

    Energy Technology Data Exchange (ETDEWEB)

    Caraballo Vivas, R. J.; Rocco, D. L.; Reis, M. S. [Instituto de Física, Universidade Federal Fluminense, Av. Gal. Milton Tavares de Souza s/n, 24210-346 Niterói, RJ (Brazil); Costa Soares, T. [Instituto de Física, Universidade Federal Fluminense, Av. Gal. Milton Tavares de Souza s/n, 24210-346 Niterói, RJ (Brazil); IF Sudeste MG Campus de Juiz de Fora-Núcleo de Física, 36080-001 Juiz de Fora, MG (Brazil); Caldeira, L. [IF Sudeste MG Campus de Juiz de Fora-Núcleo de Física, 36080-001 Juiz de Fora, MG (Brazil); Coelho, A. A. [Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas-Unicamp, Caixa postal 6165, 13083-859 Campinas, SP (Brazil)

    2014-08-14

    The magnetic anisotropy of 3d sub-lattices plays an important role in the overall magnetic properties of hard magnets. Intermetallic alloys with boron (R-Co/Ni-B, for instance) belong to this hard magnet family and are useful objects for understanding the magnetic behavior of the 3d sub-lattice, especially when the rare earth ions R are nonmagnetic, as in the ferromagnetic material YCo4B. Interestingly, YNi4B is a paramagnetic material and the Ni ions do not contribute to the magnetic anisotropy. We therefore focused our attention on the YNi4-xCoxB series, with x = 0, 1, 2, 3, and 4. The magnetic anisotropy of these compounds is described in more detail using statistical and preferential models of Co occupation among the possible Wyckoff positions in the CeCo4B-type hexagonal structure. We found that the preferential model is the most suitable to explain the magnetization experimental data.

  16. An ant colony optimization algorithm for phylogenetic estimation under the minimum evolution principle

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2007-11-01

    Background: Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with the shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results: In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion: We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.
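
    The core ACO loop the abstract refers to — probabilistic solution construction biased by pheromone trails, followed by evaporation and quality-weighted deposit — can be sketched compactly. The toy below applies it to a small symmetric travelling-salesman instance rather than to phylogenetic tree space, so it illustrates the technique, not the authors' implementation; all parameter values are illustrative.

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            rho=0.1, q=1.0, seed=0):
    """Minimal ant colony optimization for a symmetric TSP.

    dist: square matrix (list of lists) of pairwise distances.
    Returns (best_tour, best_length).
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # desirability = pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                        # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:                # quality-weighted deposit
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len
```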

  17. Feasibility study of entrance and exit dose measurements at the contra lateral breast with alanine/electron spin resonance dosimetry in volumetric modulated radiotherapy of breast cancer

    Science.gov (United States)

    Wagner, Daniela M.; Hüttenrauch, Petra; Anton, Mathias; von Voigts-Rhetz, Philip; Zink, Klemens; Wolff, Hendrik A.

    2017-07-01

    The Physikalisch-Technische Bundesanstalt has established a secondary standard measurement system for the dose to water, D_W, based on alanine/ESR (Anton et al 2013 Phys. Med. Biol. 58 3259-82). The aim of this study was to test the established measurement system for out-of-field measurements in patients with breast cancer. A set of five alanine pellets was affixed to the skin of each patient at the contralateral breast, beginning at the sternum and extending over the mammilla to the distal surface. Over 28 fractions with 2.2 Gy per fraction, the accumulated dose was measured in four patients. A cone beam computed tomography (CBCT) scan was acquired for setup purposes before every treatment. The reference CT dataset was registered rigidly and deformably to the CBCT dataset for the 28 fractions. To take the actual alanine pellet positions into account, the dose distribution was calculated for every fraction using the Acuros XB algorithm. The results of the ESR measurements were compared to the calculated doses. The maximum dose measured at the sternum was 19.9 Gy ± 0.4 Gy, decreasing to 6.8 Gy ± 0.2 Gy at the mammilla and 4.5 Gy ± 0.1 Gy at the distal surface of the contralateral breast. The absolute differences between the calculated and measured doses ranged from −1.9 Gy to 0.9 Gy. No systematic error could be seen. It was possible to achieve a combined standard uncertainty of 1.63% for D_W = 5 Gy for the measured dose. The alanine/ESR method is feasible for in vivo measurements.

  18. Search for the Xb and other hidden-beauty states in the π+π−ϒ(1S) channel at ATLAS

    Directory of Open Access Journals (Sweden)

    G. Aad

    2015-01-01

    This Letter presents a search for a hidden-beauty counterpart of the X(3872) in the mass ranges 10.05–10.31 GeV and 10.40–11.00 GeV, in the channel Xb → π+π−ϒ(1S)(→ μ+μ−), using 16.2 fb−1 of √s = 8 TeV pp collision data collected by the ATLAS detector at the LHC. No evidence for new narrow states is found, and upper limits are set on the product of the Xb cross section and branching fraction, relative to those of the ϒ(2S), at the 95% confidence level using the CLs approach. These limits range from 0.8% to 4.0%, depending on mass. For masses above 10.1 GeV, the expected upper limits from this analysis are the most restrictive to date. Searches for production of the ϒ(1^3D_J), ϒ(10860), and ϒ(11020) states also reveal no significant signals.

  19. Fast index based algorithms and software for matching position specific scoring matrices

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2006-08-01

    …|A|m + m − 1, where m is the length of the PSSM and A a finite alphabet. In practice, ESAsearch shows superior performance over the most widely used programs, especially for DNA sequences. The new algorithm for accurate on-the-fly calculation of thresholds has the potential to replace formerly used approximation approaches. Beyond the algorithmic contributions, we provide a robust, well documented, and easy to use software package implementing the ideas and algorithms presented in this manuscript.
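
    The threshold-driven PSSM matching that ESAsearch accelerates can be illustrated with a plain scanner. The sketch below shows the lookahead idea — abandoning a window as soon as its partial score plus the best achievable remaining score falls below the threshold — applied to raw sequences rather than to an enhanced suffix array, so it is an illustration of the pruning principle, not the published algorithm.

```python
def pssm_scan(seq, pssm, threshold):
    """Report all windows of seq whose PSSM score reaches threshold.

    pssm: one {symbol: score} dict per motif position.
    """
    m = len(pssm)
    # best achievable score from motif position k to the end
    best_rest = [0.0] * (m + 1)
    for k in range(m - 1, -1, -1):
        best_rest[k] = best_rest[k + 1] + max(pssm[k].values())
    hits = []
    for start in range(len(seq) - m + 1):
        score = 0.0
        for k in range(m):
            score += pssm[k][seq[start + k]]
            if score + best_rest[k + 1] < threshold:
                break          # window can no longer reach the threshold
        else:
            hits.append((start, score))
    return hits
```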

  20. Stereotactic Ablative Radiation Therapy for Subcentimeter Lung Tumors: Clinical, Dosimetric, and Image Guidance Considerations

    International Nuclear Information System (INIS)

    Louie, Alexander V.; Senan, Suresh; Dahele, Max; Slotman, Ben J.; Verbakel, Wilko F.A.R.

    2014-01-01

    Purpose: Use of stereotactic ablative radiation therapy (SABR) for subcentimeter lung tumors is controversial. We report our outcomes for tumors with diameter ≤1 cm and their visibility on cone beam computed tomography (CBCT) scans, and retrospectively evaluate the planned dose using a deterministic dose calculation algorithm (Acuros XB [AXB]). Methods and Materials: We identified subcentimeter tumors from our institutional SABR database. Tumor size was remeasured on an artifact-free phase of the planning 4-dimensional (4D) CT. Clinical plan doses were generated using either a pencil beam convolution or an anisotropic analytic algorithm (AAA). All AAA plans were recalculated using AXB, and differences in D95 and mean dose for the internal target volume (ITV) and planning target volume (PTV) on the average intensity CT dataset, as well as for the gross tumor volume (GTV) on the end respiratory phases, were reported. For all AAA patients, CBCT scans acquired during each treatment fraction were evaluated for target visibility. Progression-free and overall survival rates were calculated using the Kaplan-Meier method. Results: Thirty-five patients with 37 subcentimeter tumors were eligible for analysis. For the 22 AAA plans recalculated using AXB, mean D95 ± SD values were 2.2 ± 4.4% (ITV) and 2.5 ± 4.8% (PTV) lower using AXB, whereas mean doses were 2.9 ± 4.9% (ITV) and 3.7 ± 5.1% (PTV) lower. Calculated AXB doses were significantly lower in one patient (differences in mean ITV and PTV doses, as well as in mean ITV and PTV D95, ranged from 22%-24%). However, the end respiratory phase GTV received at least 95% of the prescription dose. Review of 92 CBCT scans from all AAA patients revealed that the tumor was visualized in 82 images, and its position could be inferred in the other images. The 2-year local progression-free survival was 100%. Conclusions: Patients with subcentimeter lung tumors are good candidates for SABR, given the dosimetry, ability to localize

  1. A new intelligent approach for air traffic control using gravitational ...

    Indian Academy of Sciences (India)

    Therefore, poor management of this congestion may lead to a lot of flight delays, increase of operational errors by air traffic control personnel ... the PLT [8–11], and decreasing the duration of scheduling. [12, 13]. Hansen [3], Hu ...... [14] Hu X-B and Paolo E D 2009 An efficient genetic algorithm with uniform crossover for air ...

  2. SU-F-T-236: Comparison of Two IMRT/VMAT QA Systems Using Gamma Index Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dogan, N [University of Miami, Miami, FL (United States); Denissova, S [Sylvester Comprehensive Cancer Center Deerfield, Weston, FL (United States)

    2016-06-15

    Purpose: The goal of this study is to assess differences in Gamma index pass rates between two commercial QA systems and to provide optimum Gamma index parameters for pre-treatment patient-specific QA. Methods: Twenty-two VMAT cases, consisting of prostate, lung, head and neck, spine, brain and pancreas plans, were included in this study. The verification plans were calculated using the AcurosXB (V11) algorithm for different dose grids (1.5 mm, 2.5 mm, 3 mm). The measurements were performed on a TrueBeam (Varian) accelerator using both an EPID (S1000) portal imager and an ArcCheck (Sun Nuclear Corp) device. Gamma index criteria of 3%/3mm, 2%/3mm and 2%/2mm, and threshold (TH) doses of 5% to 50%, were used in the analysis. Results: The differences in Gamma pass rates between the two devices are not statistically significant for 3%/3mm, yielding pass rates higher than 95%. Increasing the lower dose TH reduced pass rates for both devices; the more pronounced effect for ArcCheck can be attributed to a larger contribution from the spread of the low-dose region. As expected, tightening the criteria to 2%/2mm (TH: 10%) decreased Gamma pass rates below 95%, with higher pass rates for the EPID (92%) than for ArcCheck (86%), probably due to its better spatial resolution. Portal Dosimetry results showed lower Gamma pass rates for composite plans compared to individual field pass rates. This may be due to the expansion of the analyzed region, which includes pixels not included in the separate field analysis. Decreasing the dose grid size from 2.5 mm to 1.5 mm did not show statistically significant (p<0.05) differences in Gamma pass rates for either QA device. Conclusion: Overall, both systems' measurements agree well with the calculated dose when using a gamma index criterion of 3%/3mm for a variety of VMAT cases. Variability between the two systems increases with different dose grids, THs and tighter gamma criteria and must be carefully assessed prior to clinical use.
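
    A minimal version of the gamma evaluation used throughout this study — combining a dose-difference criterion (e.g. 3%) with a distance-to-agreement criterion (e.g. 3 mm) — might look as follows. This brute-force sketch assumes both dose distributions live on the same 2-D grid and uses global normalization to the reference maximum.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.03, dta=3.0, cutoff=0.10):
    """Global 2-D gamma pass rate, brute force.

    ref, ev : reference and evaluated dose arrays on the same grid
    spacing : grid spacing in mm
    dd      : dose-difference criterion as a fraction of max(ref)
    dta     : distance-to-agreement criterion in mm
    cutoff  : low-dose threshold as a fraction of max(ref)
    """
    dmax = ref.max()
    dd_abs = dd * dmax
    r = int(np.ceil(2 * dta / spacing))    # truncated search window
    ny, nx = ref.shape
    gam = []
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < cutoff * dmax:
                continue                   # below analysis threshold
            best = np.inf
            for jy in range(max(0, iy - r), min(ny, iy + r + 1)):
                for jx in range(max(0, ix - r), min(nx, ix + r + 1)):
                    d2 = ((jy - iy) ** 2 + (jx - ix) ** 2) * spacing ** 2
                    dose2 = (ev[jy, jx] - ref[iy, ix]) ** 2
                    best = min(best, d2 / dta ** 2 + dose2 / dd_abs ** 2)
            gam.append(best ** 0.5)
    return 100.0 * np.mean(np.asarray(gam) <= 1.0)
```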

  3. Local skin friction coefficients and boundary layer profiles obtained in flight from the XB-70-1 airplane at Mach numbers up to 2.5

    Science.gov (United States)

    Fisher, D. F.; Saltzman, E. J.

    1973-01-01

    Boundary-layer and local friction data for Mach numbers up to 2.5 and Reynolds numbers up to 3.6 × 10^8 were obtained in flight at three locations on the XB-70-1 airplane: the lower forward fuselage centerline (nose), the upper rear fuselage centerline, and the upper surface of the right wing. Local skin friction coefficients were derived at each location by using (1) a skin friction force balance, (2) a Preston probe, and (3) an adaptation of Clauser's method, which derives skin friction from the rake velocity profile. These three techniques provided consistent results that agreed well with the von Karman-Schoenherr relationship for flow conditions that are quasi-two-dimensional. At the lower angles of attack, the nose boom and flow-direction vanes are believed to have caused the momentum thickness at the nose to be larger than at the higher angles of attack. The boundary-layer data and local skin friction coefficients are tabulated. The wind-tunnel-model surface-pressure distribution ahead of the three locations and the flight surface-pressure distribution ahead of the wing location are included.

  4. SU-E-T-496: A Study of Two Commercial Dose Calculation Algorithms in Low Density Phantom

    International Nuclear Information System (INIS)

    Lim, S; Lovelock, D; Yorke, E; Kuo, L; LoSasso, T

    2014-01-01

    Purpose: Some lung cancer patients have very low lung density due to comorbidities. We investigate the calculation accuracy of Eclipse AAA and Acuros (AXB) using a phantom that simulates this situation. Methods: A 2.5 × 5.0 × 5 cm (long) solid water inhomogeneity positioned 10 cm deep in a balsa lung phantom (density 0.099 g/cc) was irradiated with an off-center field such that the central axis was parallel to one side of the inhomogeneity. Radiochromic films were placed at 2.5 cm (S1) and 5 cm (S2) depths. After CT scanning, Hounsfield Units (HU) were converted to electron (ρe) and mass (ρm) density using in-house (IH) and vendor-supplied (V) calibration curves. IH electron densities were generated using a commercial electron density phantom. The phantom was exposed to 6 MV 3×3 and 20×20 fields. Dose distributions were calculated using the AAA and AXB algorithms. Results: The HU of the balsa wood is −910±40, which translates to ρe of 0.088±0.050 (IH) and 0.090±0.050 (V), and ρm of 0.101±0.045 (IH) and 0.103±0.039 (V). Both ρe(V) and ρm(V) are higher than ρe(IH) and ρm(IH), by 1.4-5.3% and 0.5-12.3% respectively. The average calculated doses inside the solid water 'tumor' are within 3.7% and 2.4% of measurements for both calibrations and field sizes using AAA and AXB. Within 10 mm outside the 'tumor', AAA on average underestimates by 18.3% and 17.0% for 3×3 using IH and V, respectively. AXB underestimates by 5.9% (S1)-6.6% (S2) and 13.1% (S1)-16.0% (S2) using IH and V, respectively. For 20×20, AAA and AXB underestimate by 2.8% (S1)-4.4% (S2) and 0.3% (S1)-1.4% (S2), respectively, with either calibration. Conclusion: The difference in the HU calibration between V and IH is not of clinical significance for normal field sizes. In the low density region of small fields, the calculations from both algorithms differ significantly from measurements. This may be attributed to the insufficient lateral electron transport modeled by the two algorithms, resulting in over-estimation in the penumbra

  5. Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments

    Science.gov (United States)

    Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.

    2018-01-01

    The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving Nonhydrostatic (XB-NH) and short-wave averaged Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup, were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
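
    As a small illustration of the run-up statistics compared above, the sketch below extracts Rmax and R2% from a vertical run-up time series, assuming the common convention that R2% is the 2% exceedance level of the individual run-up maxima; the inputs are hypothetical.

```python
import numpy as np

def runup_stats(eta):
    """Rmax and R2% from a vertical run-up elevation time series (m).

    Individual run-up maxima are taken as local peaks of the series;
    R2% is the level exceeded by 2% of those maxima (98th percentile).
    """
    eta = np.asarray(eta, dtype=float)
    inner = eta[1:-1]
    peaks = inner[(inner > eta[:-2]) & (inner > eta[2:])]
    if peaks.size == 0:                 # degenerate series: no peaks
        return eta.max(), np.nan
    return eta.max(), np.percentile(peaks, 98)
```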

  6. SU-E-J-64: Evaluation of a Commercial EPID-Based in Vivo Dosimetric System in the Presence of Lung Tissue Heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Gimeno-Olmos, J; Palomo-Llinares, R; Candela-Juan, C; Carmona Meseguer, V; Lliso-Valverde, F [Hospital Universitari i Politecnic La Fe, Valencia, Valencia (Spain); Garcia-Martinez, T [Hospital de la Ribera, Alzira, Valencia (Spain); Richart-Sancho, J [Clinica Benidorm, Benidorm, Alicante (Spain); Ballester, F [University of Valencia, Burjassot (Spain); Perez-Calatayud, J [Hospital Universitari i Politecnic La Fe, Valencia, Valencia (Spain); Clinica Benidorm, Benidorm, Alicante (Spain)

    2014-06-01

    Purpose: To study the performance of Dosimetry Check (DC), an EPID-based dosimetry software package that allows transit dosimetry, in low density medium, by comparing in-phantom calculations and analysing results for 15 lung patients. Methods: The DC software (v.3.8, pencil beam-based algorithm) was tested for plans (Eclipse v.10.0 TPS) delivered on two Varian Clinac iX machines equipped with aS1000 EPIDs. In the CIRS lung phantom, comparisons between DC and Eclipse (Acuros) were performed for several plans: (1) four field box; (2) square field delivered in arc mode; (3) RapidArc lung patient plan medially centred; (4) RapidArc lung patient plan centred in one lung. The reference points analysed were P1 (medial point, plans 1–3) and P2 (located inside one lung, plan 4). For fifteen lung patients treated with RapidArc, the isocentre and 9 additional points inside the PTV, as well as the gamma passing rate (3%/3mm) for the PTV and at the main planes, were studied. Results: In-phantom: P1: per-field differences in plan 1 showed good agreement for the AP-PA fields and a discrepancy of 7% for the lateral fields. Global differences (plans 1–3) were about 4%, showing a compensating effect of the individual differences. P2: the global difference (plan 4) was 15%. This represents the worst-case situation, as it is a point surrounded by lung tissue, where the DC pencil beam algorithm is expected to differ most from Acuros. Lung patients: mean point difference inside the PTV: (5.4±4.2)%. Gamma passing rate inside the PTV: (45±12)%. Conclusion: The performance of DC in heterogeneous lung medium was studied with a special phantom, and the results for 15 patients were analysed. The observed deviations show that even though DC is a highly promising in vivo dosimetry tool, a more accurate algorithm needs to be incorporated, mainly for plans involving low density regions.

  7. Density scaling of phantom materials for a 3D dose verification system.

    Science.gov (United States)

    Tani, Kensuke; Fujita, Yukio; Wakita, Akihisa; Miyasaka, Ryohei; Uehara, Ryuzo; Kodama, Takumi; Suzuki, Yuya; Aikawa, Ako; Mizuno, Norifumi; Kawamori, Jiro; Saitoh, Hidetoshi

    2018-05-21

    In this study, the optimum density scaling factors of phantom materials for a commercially available three-dimensional (3D) dose verification system (Delta4) were investigated in order to improve the accuracy of the calculated dose distributions in the phantom materials. At field sizes of 10 × 10 and 5 × 5 cm² with the same geometry, tissue-phantom ratios (TPRs) in water, polymethyl methacrylate (PMMA), and Plastic Water Diagnostic Therapy (PWDT) were measured, and TPRs for various density scaling factors of water were calculated by Monte Carlo simulation, Adaptive Convolve (AdC, Pinnacle3), Collapsed Cone Convolution (CCC, RayStation), and AcurosXB (AXB, Eclipse). Effective linear attenuation coefficients (μeff) were obtained from the TPRs. The ratios of μeff in phantom and water ((μeff)pl,water) were compared between measurements and calculations. For each phantom material, the density scaling factor proposed in this study (DSF) was set to the value providing a match between the calculated and measured (μeff)pl,water. The optimum density scaling factor was verified through comparison of the dose distributions measured by Delta4 and calculated with three different density scaling factors: the nominal physical density (PD), the nominal relative electron density (ED), and the DSF. Three plans were used for the verifications: a static field of 10 × 10 cm² and two intensity modulated radiation therapy (IMRT) treatment plans. The DSF was determined to be 1.13 for PMMA and 0.98 for PWDT. The DSF for PMMA showed good agreement for AdC and CCC with 6 MV x-rays, and for AdC with 10 MV x-rays. The DSF for PWDT showed good agreement regardless of the dose calculation algorithm and x-ray energy. The DSF can be considered a reference for the density scaling factor of Delta4 phantom materials and may help improve the accuracy of IMRT dose verification using Delta4. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley
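
    One way to picture the μeff extraction described above: beyond the build-up region the TPR falls off roughly exponentially with depth, so μeff can be estimated as minus the slope of ln(TPR) against depth. The sketch below does that and then picks, from a set of candidate density scaling factors, the one whose calculated μeff ratio best matches the measured one; calc_ratio is a hypothetical stand-in for the recalculation step.

```python
import numpy as np

def mu_eff(depths_cm, tpr):
    """Effective linear attenuation coefficient (1/cm) from TPR data.

    Beyond build-up, TPR(d) ~ exp(-mu_eff * d), so mu_eff is minus
    the slope of ln(TPR) against depth.
    """
    slope, _ = np.polyfit(depths_cm, np.log(tpr), 1)
    return -slope

def find_dsf(measured_ratio, candidate_factors, calc_ratio):
    """Pick the density scaling factor whose calculated
    (mu_eff)_pl,water best matches the measured ratio.

    calc_ratio(f) is a hypothetical callback that recalculates the
    TPR curve with scaling factor f and evaluates the mu_eff ratio.
    """
    return min(candidate_factors,
               key=lambda f: abs(calc_ratio(f) - measured_ratio))
```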

  8. Quantum mechanically guided design of Co43Fe20Ta5.5X31.5 (X=B, Si, P, S) metallic glasses

    International Nuclear Information System (INIS)

    Hostert, C; Music, D; Schneider, J M; Bednarcik, J; Keckes, J

    2012-01-01

    A systematic ab initio molecular dynamics study was carried out to identify valence electron concentration and size induced changes in structure, elastic and magnetic properties for Co43Fe20Ta5.5X31.5 (X=B, Si, P, S). Short range order, charge transfer and the bonding nature are analyzed by means of density of states, Bader decomposition and pair distribution function analysis. A clear trend of a decrease in density and bulk modulus as well as weaker cohesion was observed as the valence electron concentration is increased by replacing B with Si and further with P and S. These changes may be understood based on increased interatomic distances, variations in coordination numbers and electronic structure changes; as the valence electron concentration of X is increased, the X bonding becomes more ionic, which disrupts the overall metallic interactions, leading to lower cohesion and stiffness. The highest magnetic moments for the transition metals are identified for X=S, despite the fact that the presence of X generally reduces the magnetic moment of Co. Furthermore, this study reveals an extended diagonal relationship between B and P within these amorphous alloys. Based on quantum mechanical data we identify composition induced changes in short range order, charge transfer and bonding nature and link them to density, elasticity and magnetism. The interplay between transition metal d band filling and s-d hybridization was identified to be a key materials design criterion. (paper)

  9. Alternatives to promote pollination and seed production of the maize hybrid H-311

    Directory of Open Access Journals (Sweden)

    Alejandro Espinosa

    2001-01-01

    Alternatives to promote pollination and seed production of the maize hybrid H-311. In maize hybrid seed production, flowering synchrony of the parents and the arrangement of the stigmas with respect to the pollen-shedding tassels are fundamental. The maize hybrid H-311 is recommended in Mexico for irrigated plantings at elevations of 1200 to 1800 m above sea level. The initial crossing order was (B16XB17) X (B32XB33); however, it was changed to the inverse order, that is, (B32XB33) X (B16XB17), owing to higher productivity and seed of good physical quality. The single cross B16XB17 is dwarf (brachytic); when there is not enough wind at pollination time, crossing becomes difficult and the pollen must be lifted with motorized backpack blowers. The single cross B16XB17 yields 4.5 to 5.5 t/ha of seed against 6.0 to 7.5 t/ha for the single cross B32XB33. The single cross B16XB17 responds with taller plants under treatments with gibberellic acid (Activol; 20, 40 and 60 ppm), two doses of mepiquat chloride (PIX; 1 and 2 l/ha), and three fertilization levels (400-70-30, 150-70-30 and 150-300-30). Yield was positively affected by managing both parents with different fertilizations and by applying phytohormones: with gibberellic acid (20 ppm), yield increased by 250.3% and 211.8% for the single crosses B32XB33 and B16XB17, respectively. The plant height of B16XB17 was slightly affected (106.0%) by gibberellic acid at 40 ppm

  10. Search for a new bottomonium state decaying to Υ(1S)π+π- in pp collisions at √s = 8 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, Serguei; et al.

    2013-11-01

    The results of a search for the bottomonium counterpart, denoted $X_b$, of the exotic charmonium state X(3872) are presented. The analysis is based on a sample of pp collisions at $\sqrt{s} = 8$ TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 20.7 inverse femtobarns. The search looks for the exclusive decay channel $X_b \to \Upsilon(1S) \pi^+ \pi^-$ followed by $\Upsilon(1S) \to \mu^+ \mu^-$. No evidence for an $X_b$ signal is observed. Upper limits are set at the 95% confidence level on the ratio of the inclusive production cross sections times the branching fractions to $\Upsilon(1S) \pi^+ \pi^-$ of the $X_b$ and the $\Upsilon(2S)$. The upper limits on the ratio are in the range 0.9-5.4% for $X_b$ masses between 10 and 11 GeV. These are the first upper limits on the production of a possible $X_b$ at a hadron collider.

  11. Experimental investigation of halogen-bond hard-soft acid-base complementarity.

    Science.gov (United States)

    Riel, Asia Marie S; Jessop, Morly J; Decato, Daniel A; Massena, Casey J; Nascimento, Vinicius R; Berryman, Orion B

    2017-04-01

    The halogen bond (XB) is a topical noncovalent interaction of rapidly increasing importance. The XB employs a `soft' donor atom in comparison to the `hard' proton of the hydrogen bond (HB). This difference has led to the hypothesis that XBs can form more favorable interactions with `soft' bases than HBs. While computational studies have supported this suggestion, solution and solid-state data are lacking. Here, XB soft-soft complementarity is investigated with a bidentate receptor that shows similar associations with neutral carbonyls and heavy chalcogen analogs. The solution speciation and XB soft-soft complementarity is supported by four crystal structures containing neutral and anionic soft Lewis bases.

  12. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both the geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
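
    On a linear test model dx/dt = λx the idea is easy to see: the exact propagator over a step h is exp(λh), the Nth-order algebraic dynamics step keeps the first N+1 Taylor terms, and classical RK4 reproduces exactly the 4th-order truncation. A sketch under that assumption (the paper's 12 test models are not reproduced here):

```python
import math

def taylor_step(x, lam, h, order):
    """One Nth-order Taylor (algebraic-dynamics-style) step for the
    linear test model dx/dt = lam * x: the exact propagator
    exp(lam * h) truncated after the (order)-th term."""
    return x * sum((lam * h) ** k / math.factorial(k)
                   for k in range(order + 1))

def rk4_step(x, lam, h):
    """Classical Runge-Kutta step for the same model; for a linear
    ODE it coincides with the 4th-order Taylor truncation."""
    f = lambda y: lam * y
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# e.g. lam = -1.0, h = 0.1: taylor_step(1.0, -1.0, 0.1, 6) is closer
# to the exact factor math.exp(-0.1) than rk4_step(1.0, -1.0, 0.1).
```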

  14. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  15. Technical Report: Evaluation of peripheral dose for flattening filter free photon beams

    Energy Technology Data Exchange (ETDEWEB)

    Covington, E. L.; Moran, J. M.; Owrangi, A. M.; Prisciandaro, J. I., E-mail: joannp@med.umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States); Ritter, T. A. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 and Department of Radiation Oncology, Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, Michigan 48105 (United States)

    2016-08-15

    Purpose: To develop a comprehensive peripheral dose (PD) dataset for the two unflattened beams of nominal energy 6 and 10 MV for use in clinical care. Methods: Measurements were made in a 40 × 120 × 20 cm³ (width × length × depth) stack of solid water using an ionization chamber at varying depths (dmax, 5, and 10 cm), field sizes (3 × 3 to 30 × 30 cm²), and distances from the field edge (5–40 cm). The effects of the multileaf collimator (MLC) and collimator rotation were also evaluated for a 10 × 10 cm² field. Using the same phantom geometry, the accuracy of the analytic anisotropic algorithm (AAA) and the Acuros dose calculation algorithm was assessed and compared to the measured values. Results: The PDs for both the 6 flattening filter free (FFF) and 10 FFF photon beams were found to decrease with increasing distance from the radiation field edge and with decreasing field size. The measured PD was higher for 6 FFF than for 10 FFF for all field sizes and depths. The impact of collimator rotation was not found to be clinically significant when used in conjunction with MLCs. The AAA and Acuros algorithms both underestimated the PD, with average errors of −13.6% and −7.8%, respectively, for all field sizes and depths at distances of 5 and 10 cm from the field edge, but the average error increased to nearly −69% at greater distances. Conclusions: Given the known inaccuracies of peripheral dose calculations, this comprehensive dataset can be used to estimate the out-of-field dose to regions of interest such as organs at risk, electronic implantable devices, and a fetus. While collimator rotation was not found to significantly decrease PD when used in conjunction with MLCs, results are expected to be machine model and beam energy dependent. It is not recommended to use a treatment planning system to estimate PD due to the underestimation of the out-of-field dose and the inability to calculate dose

  16. Multiple Multidentate Halogen Bonding in Solution, in the Solid State, and in the (Calculated) Gas Phase.

    Science.gov (United States)

    Jungbauer, Stefan H; Schindler, Severin; Herdtweck, Eberhardt; Keller, Sandro; Huber, Stefan M

    2015-09-21

    The binding properties of neutral halogen-bond donors (XB donors) bearing two multidentate Lewis acidic motifs toward halides were investigated. Employing polyfluorinated and polyiodinated terphenyl and quaterphenyl derivatives as anion receptors, we obtained X-ray crystallographic data of the adducts of three structurally related XB donors with tetraalkylammonium chloride, bromide, and iodide. The stability of these XB complexes in solution was determined by isothermal titration calorimetry (ITC), and the results were compared to X-ray analyses as well as to calculated binding patterns in the gas phase. Density functional theory (DFT) calculations on the gas-phase complexes indicated that the experimentally observed distortion of the XB donors during multiple multidentate binding can be reproduced in 1:1 complexes with halides, whereas adducts with two halides show a symmetric binding pattern in the gas phase that is markedly different from the solid state structures. Overall, this study demonstrates the limitations in the transferability of binding data between solid state, solution, and gas phase in the study of complex multidentate XB donors. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming...

  18. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  19. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the more recent evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas put forward by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  20. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  1. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  2. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures

  3. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  4. Axial and Radial Forces of Cross-Bridges Depend on Lattice Spacing

    Science.gov (United States)

    Williams, C. David; Regnier, Michael; Daniel, Thomas L.

    2010-01-01

    Nearly all mechanochemical models of the cross-bridge treat myosin as a simple linear spring arranged parallel to the contractile filaments. These single-spring models cannot account for the radial force that muscle generates (orthogonal to the long axis of the myofilaments) or the effects of changes in filament lattice spacing. We describe a more complex myosin cross-bridge model that uses multiple springs to replicate myosin's force-generating power stroke and account for the effects of lattice spacing and radial force. The four springs which comprise this model (the 4sXB) correspond to the mechanically relevant portions of myosin's structure. As occurs in vivo, the 4sXB's state-transition kinetics and force-production dynamics vary with lattice spacing. Additionally, we describe a simpler two-spring cross-bridge (2sXB) model which produces results similar to those of the 4sXB model. Unlike the 4sXB model, the 2sXB model requires no iterative techniques, making it more computationally efficient. The rate at which both multi-spring cross-bridges bind and generate force decreases as lattice spacing grows. The axial force generated by each cross-bridge as it undergoes a power stroke increases as lattice spacing grows. The radial force that a cross-bridge produces as it undergoes a power stroke varies from expansive to compressive as lattice spacing increases. Importantly, these results mirror those for intact, contracting muscle force production. PMID:21152002

  5. Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithm researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of their algorithmic complexity and efficiency. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm and the results were promising.
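
    The internal steps of SMS and the Denni algorithm are not spelled out in this record, so the sketch below shows only the plain recursive Quicksort that SMS set out to enhance, as a point of reference:

```python
def quicksort(a):
    """Plain recursive Quicksort: partition around a pivot, then
    recursively sort the two partitions. Average O(n log n),
    worst case O(n^2) - the cases SMS and Denni aim to improve."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return quicksort(left) + mid + quicksort(right)

# quicksort([5, 2, 9, 1, 5, 6]) -> [1, 2, 5, 5, 6, 9]
```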

  6. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  7. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Background: The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood, in two widely studied Hydrophobic Polar (HP) lattice models. Results: We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion: We demonstrate that REMC utilizing the pull move
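
    The replica exchange scheme itself is compact: each replica runs Metropolis sampling at its own temperature, and neighbouring replicas periodically swap states with probability min(1, exp((βi − βj)(Ei − Ej))). A generic sketch, with the HP-model energy function and pull move neighbourhood abstracted into user-supplied callbacks (both hypothetical):

```python
import math
import random

def remc(energy, propose, x0, temps, sweeps=1000, seed=0):
    """Replica exchange (parallel tempering) Monte Carlo sketch.

    energy  : state -> float (e.g. negated H-H contact count)
    propose : (state, rng) -> candidate state (e.g. a pull move)
    temps   : one temperature per replica, ascending
    States are treated as immutable values (e.g. tuples).
    """
    rng = random.Random(seed)
    states = [x0] * len(temps)
    energies = [energy(x0)] * len(temps)
    best, best_e = x0, energies[0]
    for _ in range(sweeps):
        for i, t in enumerate(temps):      # Metropolis move per replica
            cand = propose(states[i], rng)
            e = energy(cand)
            if e <= energies[i] or rng.random() < math.exp(-(e - energies[i]) / t):
                states[i], energies[i] = cand, e
                if e < best_e:
                    best, best_e = cand, e
        for i in range(len(temps) - 1):    # neighbour swap attempts
            delta = ((1.0 / temps[i] - 1.0 / temps[i + 1])
                     * (energies[i] - energies[i + 1]))
            if delta >= 0 or rng.random() < math.exp(delta):
                states[i], states[i + 1] = states[i + 1], states[i]
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return best, best_e
```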

  8. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of those methods is cryptography. Cryptography secures a file by replacing it with hidden code that covers the original content, so anyone who does not hold the key cannot decrypt the hidden code and read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that when the TEA algorithm encrypts the file, the ciphertext consists of ASCII (American Standard Code for Information Interchange) characters written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext length.
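
    TEA itself is small enough to show in full. The sketch below implements the standard TEA block cipher (64-bit blocks as two 32-bit words, 128-bit key, 32 rounds with the constant 0x9E3779B9); the LUC half of the hybrid scheme, which is based on Lucas sequences, is omitted, and this is a generic TEA implementation rather than necessarily the exact variant the authors used.

```python
def tea_encrypt_block(v, key):
    """Encrypt one 64-bit block. v = (v0, v1), key = (k0, k1, k2, k3),
    all 32-bit unsigned integers."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + ((((v1 << 4) + k0) & mask) ^ ((v1 + s) & mask)
                    ^ (((v1 >> 5) + k1) & mask))) & mask
        v1 = (v1 + ((((v0 << 4) + k2) & mask) ^ ((v0 + s) & mask)
                    ^ (((v0 >> 5) + k3) & mask))) & mask
    return v0, v1

def tea_decrypt_block(v, key):
    """Inverse of tea_encrypt_block: run the rounds backwards."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * 32) & mask
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) + k2) & mask) ^ ((v0 + s) & mask)
                    ^ (((v0 >> 5) + k3) & mask))) & mask
        v0 = (v0 - ((((v1 << 4) + k0) & mask) ^ ((v1 + s) & mask)
                    ^ (((v1 >> 5) + k1) & mask))) & mask
        s = (s - delta) & mask
    return v0, v1

# round trip check:
# key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
# tea_decrypt_block(tea_encrypt_block((1, 2), key), key) == (1, 2)
```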

  9. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  10. Synthesis, crystal structure investigation and magnetism of the complex metal-rich boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) with Th7Fe3-type structure

    Science.gov (United States)

    Misse, Patrick R. N.; Mbarki, Mohammed; Fokwa, Boniface P. T.

    2012-08-01

    Powder samples and single crystals of the new complex boride series Crx(Rh1-yRuy)7-xB3 (x=0.88-1; y=0-1) have been synthesized by arc-melting the elements under a purified argon atmosphere on a water-cooled copper crucible. The products, which have metallic luster, were structurally characterized by single-crystal and powder X-ray diffraction as well as EDX measurements. Within the whole solid solution range the hexagonal Th7Fe3 structure type (space group P63mc, no. 186, Z=2) was identified. Single-crystal structure refinement results indicate the presence of chromium at two (6c and 2b) of the three available metal Wyckoff sites, with a pronounced preference for the 6c site. An unexpected Rh/Ru site preference was found in the Ru-rich region only, leading to two different magnetic behaviors in the solid solution: the Rh-rich region shows a temperature-independent (Pauli) paramagnetism whereas an additional temperature-dependent paramagnetic component is found in the Ru-rich region.

  11. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  12. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  13. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  14. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  15. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  16. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  17. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN). It has better global optimality characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering research. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  18. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  19. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  20. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  1. SU-F-T-152: Experimental Validation and Calculation Benchmark for a Commercial Monte Carlo Pencil BeamScanning Proton Therapy Treatment Planning System in Heterogeneous Media

    Energy Technology Data Exchange (ETDEWEB)

    Lin, L; Huang, S; Kang, M; Ainsley, C; Simone, C; McDonough, J; Solberg, T [University of Pennsylvania, Philadelphia, PA (United States)

    2016-06-15

    Purpose: Eclipse AcurosPT 13.7, the first commercial Monte Carlo pencil beam scanning (PBS) proton therapy treatment planning system (TPS), was experimentally validated for an IBA dedicated PBS nozzle in the CIRS 002LFC thoracic phantom. Methods: A two-stage procedure involving the use of TOPAS 1.3 simulations was performed. First, Geant4-based TOPAS simulations in this phantom were experimentally validated for single and multi-spot profiles at several depths for 100, 115, 150, 180, 210 and 225 MeV proton beams, using the combination of a Lynx scintillation detector and a MatriXXPT ionization chamber array. Second, benchmark calculations were performed with both AcurosPT and TOPAS in a phantom identical to the CIRS 002LFC, with the exception that the CIRS bone/mediastinum/lung tissues were replaced with similar tissues that are predefined in AcurosPT (a limitation of this system which necessitates the two-stage procedure). Results: Spot sigmas measured in tissue agreed within 0.2 mm with TOPAS simulations for all six energies, while AcurosPT was consistently found to have larger spot sigmas (<0.7 mm) than TOPAS. Using absolute dose calibration by MatriXXPT, the agreement between profile measurements and TOPAS simulations, and between the calculation benchmarks, was over 97% using 2 mm/2% gamma criteria, except near the end of range. Overdosing and underdosing were observed at the low- and high-density sides of tissue interfaces, respectively, and these increased with increasing depth and decreasing energy; near the mediastinum/lung interface, the magnitude can exceed 5 mm/10%. Furthermore, we observed a >5% quenching effect in the conversion of Lynx measurements to dose. Conclusion: We recommend the use of an ionization chamber array in combination with the scintillation detector to measure absolute dose and relative PBS spot characteristics. We also recommend the use of an independent Monte Carlo calculation benchmark for the commissioning of a commercial TPS. Partially

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  3. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  4. Radiation-induced coloration of xylenol blue/film containing hexachloroethane for food irradiation applications

    International Nuclear Information System (INIS)

    Soliman, Y.S.; Beshir, W.B.; Abdel-Fattah, A.A.; Ramy Amer Fahim; El-Anadouli, B.E.

    2016-01-01

    Polyvinyl butyral films mixed with xylenol blue (XB) indicator and hexachloroethane (HCE) were prepared for possible application in food irradiation. Upon γ-irradiation the films undergo a visual color change from yellow (XB, pH 8) to red (XB, pH 2.8) due to H⁺ formation in the presence of HCE. The dosimetric characteristics of films containing different dye and HCE concentrations were investigated spectrophotometrically at λmax = 555 nm. Radiation sensitivity, and accordingly the red color intensity, is enhanced by HCE. The prepared films are applicable in the dose range 0.25-10 kGy, with a dose uncertainty reaching 4.45% at 1σ. (author)

  5. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  6. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with the noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that makes it behave like an iterative Landweber algorithm.
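
    For readers unfamiliar with the iterative side of this comparison, the Landweber iteration the proposed window function emulates is the simple update x_{k+1} = x_k + λ Aᵀ(b − A x_k). A minimal sketch of that iteration (not the note's FBP window itself), with an illustrative toy system:

```python
import numpy as np

def landweber(A, b, lam, n_iter):
    """Landweber iteration: x_{k+1} = x_k + lam * A^T (b - A x_k)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + lam * A.T @ (b - A @ x)
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
b = A @ np.array([1.0, -2.0])                 # consistent right-hand side
lam = 1.9 / np.linalg.norm(A, 2) ** 2         # 0 < lam < 2/||A||^2 for convergence
print(landweber(A, b, lam, 2000))             # approaches [1, -2]
```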

  7. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, something unreadable and meaningless that cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logical XOR operation. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
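
    A minimal sketch of the two stages described above. It operates on bytes, with a seeded random permutation standing in for the paper's keyword-built substitution table; the seed, key values and function names are illustrative, not from the paper:

```python
import random

def mono_tables(key_seed):
    """Keyed monoalphabetic substitution over bytes, plus its inverse."""
    rng = random.Random(key_seed)
    perm = list(range(256))
    rng.shuffle(perm)
    inv = [0] * 256
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

def xor_bytes(data, key):
    """Repeating-key XOR over a byte string."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def super_encrypt(plain, sub_seed, xor_key):
    perm, _ = mono_tables(sub_seed)
    substituted = bytes(perm[b] for b in plain)   # stage 1: substitution
    return xor_bytes(substituted, xor_key)        # stage 2: XOR

def super_decrypt(cipher, sub_seed, xor_key):
    _, inv = mono_tables(sub_seed)
    return bytes(inv[b] for b in xor_bytes(cipher, xor_key))

msg = b"attack at dawn"
c = super_encrypt(msg, sub_seed=42, xor_key=b"secret")
assert super_decrypt(c, 42, b"secret") == msg
```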

  8. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
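
    As background to the line-detection step, a bare-bones Hough transform can be sketched in a few lines; this is the textbook voting scheme in (rho, theta) space, not the authors' full pipeline with object removal and rectangle-based filtering:

```python
import numpy as np

def hough_lines(binary_img, n_theta=180):
    """Minimal Hough accumulator: every foreground pixel votes for all
    (rho, theta) pairs consistent with x*cos(theta) + y*sin(theta) = rho."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary_img)
    for t, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc[:, t], rhos + diag, 1)
    return acc, thetas, diag

img = np.zeros((50, 50), dtype=np.uint8)
np.fill_diagonal(img, 1)                     # the line y = x
acc, thetas, diag = hough_lines(img)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(r - diag, np.degrees(thetas[t]))       # rho = 0 at theta = -45 degrees
```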

  9. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, and provides faster convergence.

  10. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
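
    For concreteness, the standard (unpartitioned) EM loop that the proposal builds on looks as follows for a two-component 1D Gaussian mixture; this illustrates only the E-step/M-step alternation, not the orthogonal partitioning itself, and the data below are synthetic:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture (illustrative)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])
print(em_gmm_1d(x))   # recovers weights ~(0.3, 0.7), means ~(-2, 3)
```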

  11. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers and cube-connected computers. Another setting where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
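
    A classic example of the kind of algorithm such texts cover is odd-even transposition sort for a linear array of processors: within each round the compare-swaps touch disjoint pairs, so they can all execute in parallel. A minimal sequential sketch of the schedule:

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n rounds of independent compare-swaps.
    Each round's comparisons are disjoint, so on a linear array of
    processors they could all run simultaneously."""
    a = list(a)
    n = len(a)
    for rnd in range(n):
        start = rnd % 2                  # alternate even and odd phases
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```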

  12. Gamma-ray attenuation coefficients in some heavy metal oxide borate glasses at 662 keV

    International Nuclear Information System (INIS)

    Khanna, A.; Bhatti, S.S.; Singh, K.J.; Thind, K.S.

    1996-01-01

    The linear attenuation coefficient (μ) and mass attenuation coefficient (μ/ρ) of glasses in three systems, xPbO·(1-x)B2O3, 0.25PbO·xCdO·(0.75-x)B2O3 and xBi2O3·(1-x)B2O3, were measured at 662 keV. Appreciable variations were noted in the attenuation coefficients due to changes in the chemical composition of the glasses. In addition to this, absorption cross-sections per atom were also calculated. A comparison of the shielding properties of these glasses with standard shielding materials like lead, lead glass and concrete has proven that these glasses have a potential application as transparent radiation shielding. (orig.)
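
    The measurement rests on the Beer-Lambert law, I = I₀ exp(−μt), so μ follows directly from a measured transmission. A small worked sketch; the transmission, thickness and density below are made-up inputs for illustration only:

```python
import math

# Beer-Lambert law: I = I0 * exp(-mu * t). Given a measured transmission
# I/I0 through a sample of thickness t, the linear attenuation coefficient
# is mu = -ln(I/I0) / t, and the mass attenuation coefficient is mu / rho.

I_over_I0 = 0.42   # measured transmission at 662 keV (hypothetical)
t_cm = 2.0         # sample thickness in cm (hypothetical)
rho = 5.1          # glass density in g/cm^3 (hypothetical)

mu = -math.log(I_over_I0) / t_cm   # linear attenuation coefficient, cm^-1
print(f"mu = {mu:.3f} cm^-1, mu/rho = {mu / rho:.4f} cm^2/g")
```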

  13. Monte Carlo Investigation on the Effect of Heterogeneities on Strut Adjusted Volume Implant (SAVI) Dosimetry

    Science.gov (United States)

    Koontz, Craig

    Breast cancer is the most prevalent cancer for women with more than 225,000 new cases diagnosed in the United States in 2012 (ACS, 2012). With the high prevalence comes an increased emphasis on researching new techniques to treat this disease. Accelerated partial breast irradiation (APBI) has been used as an alternative to whole breast irradiation (WBI) in order to treat occult disease after lumpectomy. Similar recurrence rates have been found using APBI after lumpectomy as with mastectomy alone, but with the added benefit of improved cosmetic and psychological results. Intracavitary brachytherapy devices have been used to deliver the APBI prescription. However, inability to produce asymmetric dose distributions in order to avoid overdosing skin and chest wall has been an issue with these devices. Multi-lumen devices were introduced to overcome this problem. Of these, the Strut-Adjusted Volume Implant (SAVI) has demonstrated the greatest ability to produce an asymmetric dose distribution, which would have greater ability to avoid skin and chest wall dose, and thus allow more women to receive this type of treatment. However, SAVI treatments come with inherent heterogeneities, including variable backscatter due to the proximity to the tissue-air and tissue-lung interfaces and variable contents within the cavity created by the SAVI. The dose calculation protocol based on TG-43 does not account for heterogeneities and thus will not produce accurate dosimetry; however, Acuros, a model-based dose calculation algorithm manufactured by Varian Medical Systems, claims to accurately account for heterogeneities. Monte Carlo simulation can calculate the dosimetry with high accuracy. In this thesis, a model of the SAVI will be created for Monte Carlo simulation, specifically using the MCNP code, in order to explore the effects of heterogeneities on the dose distribution. These data will be compared to TG-43 and Acuros calculated dosimetry to explore their accuracy.

  14. DSC, Raman and impedance spectroscopy studies on the xB2O3 - (90 - x)TeO2 - 10TiO2 (where x = 0 to 50 mol%) glass system

    Science.gov (United States)

    Sripada, Suresh; Rani, D. Esther Kalpana; Upender, G.; Pavani, P. Gayathri

    2013-03-01

    Titanium boro-tellurite glasses in the xB2O3-(90-x)TeO2-10TiO2 (where x = 0 to 50 mol%) system were prepared using the conventional melt-quenching technique. Glass transition temperatures were measured with differential scanning calorimetry (DSC) and found to be in the range of 300-370 °C. The Raman spectra showed a cleavage of the continuous TeO4 (tbp) network by breaking of the Te-O-Te linkages. The relative transition of TeO4 groups to TeO3 groups is accompanied by a change in the oxygen coordination of boron from 3 to 4 (BO3 to BO4). The impedance plots Z″(ω) versus Z′(ω) for all the glass samples were recorded and found to exhibit a single semicircle. The AC conductivity of all glass samples was studied in the frequency range from 100 Hz to 1 MHz and in the temperature range from room temperature (RT) to 375 °C. The AC conductivity decreased by about one order of magnitude with increasing B2O3 content. The conductivity was found to be on the order of 10^-4.5 to 10^-6 (Ω cm)^-1 at 375 °C and 1 MHz for 10 mol% and 50 mol% B2O3 contents, respectively. The relaxation behavior in these glass samples is discussed based on the complex modulus and impedance data.

  15. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  16. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  17. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association in simultaneous localization and mapping (SLAM) for determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator; an urgent task is to develop a control system that allows for autonomous flight. The SLAM algorithm, which predicts the location, speed and flight parameters of the vehicle together with the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. Data association for SLAM means establishing a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms with positive feedback and the ability to search in parallel, so it is well suited to the data association problem in SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps avoid local optima, and setting limits on the pheromone along a route increases the search space at a reasonable computational cost. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase computation speed, local data association is used instead of global data association. The first stage of the algorithm determines candidate matchings between targets and observed landmarks by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
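
    A minimal sketch of the two modifications described above, random perturbation in the global pheromone update and min/max limits on the pheromone, grafted onto the standard ant-colony transition rule; all parameter values and names are illustrative, not taken from the paper:

```python
import random

def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next node with probability proportional to tau^alpha * eta^beta."""
    w = [tau[current][j] ** alpha * eta[current][j] ** beta for j in unvisited]
    return random.choices(unvisited, weights=w)[0]

def update_pheromone(tau, best_route, best_len, rho=0.1, noise=0.01,
                     tau_min=0.01, tau_max=10.0):
    """Evaporate, perturb randomly, deposit on the best route, clamp to limits."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] = (1 - rho) * tau[i][j] + random.uniform(-noise, noise)
    for a, b in zip(best_route, best_route[1:]):
        tau[a][b] += 1.0 / best_len
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(max(tau[i][j], tau_min), tau_max)

# toy 4-node instance: build one route, then update the pheromone trail
d = [[0, 1, 4, 2], [1, 0, 1, 5], [4, 1, 0, 1], [2, 5, 1, 0]]
eta = [[0 if i == j else 1.0 / d[i][j] for j in range(4)] for i in range(4)]
tau = [[1.0] * 4 for _ in range(4)]
route, unvisited = [0], [1, 2, 3]
while unvisited:
    nxt = choose_next(route[-1], unvisited, tau, eta)
    unvisited.remove(nxt)
    route.append(nxt)
length = sum(d[a][b] for a, b in zip(route, route[1:]))
update_pheromone(tau, route, length)
print(route, length)
```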

  18. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  19. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  20. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (the Buchberger algorithm [B1], [B2]) and tangent cone orderings (the Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
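
    Readers without SINGULAR can reproduce a basic Grobner basis computation in a general-purpose library; the sketch below uses SymPy (which is not SINGULAR and supports only well-orderings such as lex, not the tangent cone orderings discussed above):

```python
# Grobner basis of the ideal <x^2 + y, x*y - 1> in lexicographic order,
# computed with SymPy as a stand-in illustration of the same computation.
from sympy import groebner
from sympy.abc import x, y

G = groebner([x**2 + y, x*y - 1], x, y, order='lex')
print(G)   # the reduced Grobner basis and the ordering used
```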

  1. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The multimodal optimization problem, i.e., an optimization problem with many local optima, is that of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares well with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
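
    A compact sketch of the two-stage idea under stated simplifications: the global stage is a reduced ABC (employed-bee and scout phases only, no onlooker phase), and the local stage hands the best point to SciPy's BFGS. The test function and all parameter choices are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(v):
    return (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2

def abc_search(f, dim, bounds, n_sources=20, limit=20, n_iter=200, seed=0):
    """Simplified ABC: employed-bee and scout phases only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_sources, dim))
    fx = np.apply_along_axis(f, 1, x)
    trials = np.zeros(n_sources, dtype=int)
    for _ in range(n_iter):
        for i in range(n_sources):
            k = rng.integers(n_sources - 1)
            k += k >= i                      # random partner, k != i
            j = rng.integers(dim)            # perturb one random dimension
            v = x[i].copy()
            v[j] += rng.uniform(-1, 1) * (x[i, j] - x[k, j])
            fv = f(v)
            if fv < fx[i]:                   # greedy selection
                x[i], fx[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
            if trials[i] > limit:            # scout: abandon exhausted source
                x[i] = rng.uniform(lo, hi, dim)
                fx[i], trials[i] = f(x[i]), 0
    return x[fx.argmin()]

x0 = abc_search(rosenbrock, dim=2, bounds=(-5, 5))   # global stage
res = minimize(rosenbrock, x0, method='BFGS')        # local refinement
print(res.x)   # close to the global optimum [1, 1]
```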

  2. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems, inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly is attracted to brighter fireflies, and if there is no brighter firefly, it moves randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases; if no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one, finding the best solution with smaller CPU time.
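
    For reference, a sketch of one iteration of the standard (unmodified) movement rule follows; the paper's modification, sampling random directions for the brightest firefly instead of a blind random move, is not included here, and the test function and constants are illustrative:

```python
import numpy as np

def firefly_step(x, f, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One iteration of the standard firefly rule: each firefly moves toward
    every brighter one, attractiveness decaying as beta0 * exp(-gamma * r^2);
    the brightest firefly moves randomly."""
    rng = rng or np.random.default_rng()
    n, dim = x.shape
    brightness = -np.apply_along_axis(f, 1, x)   # minimisation problem
    x_new = x.copy()
    for i in range(n):
        moved = False
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                x_new[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                moved = True
        if not moved:                            # brightest: random move only
            x_new[i] += alpha * (rng.random(dim) - 0.5)
    return x_new

sphere = lambda v: np.sum(v ** 2)
rng = np.random.default_rng(3)
x = rng.uniform(-4, 4, (15, 2))
for _ in range(100):
    x = firefly_step(x, sphere, rng=rng)
print(min(np.apply_along_axis(sphere, 1, x)))    # small value near 0
```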

  3. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  4. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called “Kepler”, inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  5. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  6. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires considerable time and memory space, which makes the algorithm inconvenient for use in smart cards or in applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self-Synchronizing Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and makes it suitable for different usages.

  7. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
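
    The n-fold way that these algorithms reduce to in the lowest order is a rejection-free scheme: choose an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A minimal sketch of that bookkeeping (not the absorbing-Markov-chain extension itself), with made-up rates:

```python
import math
import random

def rejection_free_step(rates, t):
    """One rejection-free (n-fold-way style) kinetic Monte Carlo step:
    pick event i with probability rates[i] / sum(rates), then advance
    time by an exponential increment with mean 1 / sum(rates)."""
    total = sum(rates)
    u = random.random() * total
    acc, event = 0.0, 0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            event = i
            break
    dt = -math.log(random.random()) / total
    return event, t + dt

event, t = rejection_free_step([0.1, 0.5, 0.01, 2.0], t=0.0)
print(event, t)
```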

  8. Dynamic route guidance algorithm based algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of K-shortest-path search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-path search in urban traffic guidance systems. Owing to the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without repetition, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the algorithm.

  9. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA; because of this transition effect, totally new waveforms are produced.

  10. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
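
    A sketch of a clipped-input LMS update of the kind described, with a three-level quantizer q(·) mapping the input to {−1, 0, +1} by threshold clipping; the system-identification setup and all constants are illustrative, not from the paper:

```python
import numpy as np

def three_level(u, threshold):
    """Quantise the input to {-1, 0, +1} by threshold clipping."""
    return np.where(np.abs(u) <= threshold, 0.0, np.sign(u))

def clipped_lms(x, d, n_taps, mu, threshold):
    """Adaptive filter with a clipped-input LMS-style update:
    w += mu * e * q(u), where q is the three-level quantiser."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                    # error against the desired signal
        w += mu * e * three_level(u, threshold)
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.1])              # unknown system to identify
d = np.convolve(x, h)[:len(x)]              # desired signal
print(clipped_lms(x, d, n_taps=3, mu=0.01, threshold=0.5))   # approaches h
```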

  11. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  12. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called QuickFBP algorithm is proposed, where the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning) establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP rather than the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, etc. applications.

  13. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature inspired meta-heuristic algorithms studies the emergent collective intelligence of groups of simple agents. Firefly Algorithm is one of the new such swarm-based metaheuristic algorithm inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we intend to propose a new modified version of Firefly algorithm (MoFA and later its performance is compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  14. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
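
    For the standard setting of M marked items among N, starting from the uniform superposition, the amplitude recursion mentioned above has a well-known closed form; writing sin θ = √(M/N), the success probability after k Grover iterations is:

```latex
\sin\theta = \sqrt{M/N}, \qquad
P_k = \sin^2\!\bigl((2k+1)\,\theta\bigr), \qquad
k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{N/M}.
```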

  15. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  16. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to infinite tree data structures, and higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  17. Generalized module extension Banach algebras: Derivations and ...

    African Journals Online (AJOL)

    Let A and X be Banach algebras and let X be an algebraic Banach A-module. Then the ℓ¹-direct sum A × X equipped with the multiplication (a, x)(b, y) = (ab, ay + xb + xy) (a, b ∈ A; x, y ∈ X) is a Banach algebra, denoted by A ⋈ X, which will be called "a generalized module extension Banach algebra". Module extension ...

  18. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
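
    The core computation such an application visualizes is the power iteration; a minimal sketch with a damping factor of 0.85 and uniform treatment of dangling pages (the four-page example is made up):

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-10):
    """Power-iteration PageRank. links[i] lists the pages that page i
    links to; dangling pages spread their rank uniformly."""
    n = len(links)
    pr = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1 - d) / n)        # teleportation term
        for i, outs in enumerate(links):
            if outs:
                for j in outs:
                    new[j] += d * pr[i] / len(outs)
            else:                            # dangling page
                new += d * pr[i] / n
        if np.abs(new - pr).sum() < tol:     # stop when ranks stabilise
            return new
        pr = new

# a tiny 4-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
print(pagerank([[1, 2], [2], [0], [2]]))
```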

  19. X-band klystrons for Japan Linear Collider

    International Nuclear Information System (INIS)

    Mizuno, H.; Odagiri, J.; Higo, T.; Yonezawa, H.; Yamaguchi, N.

    1992-01-01

    To achieve the acceleration gradient of 100 MeV/m necessary for the future linear collider in X-band, an RF power source which could produce more than 100 MW peak power with the pulse duration of 500 nsec is needed even with the factor 4 RF pulse compression system. As the first step for the development of the 100 MW class klystrons in X-band (11.424 GHz), a 30 MW class klystron named XB-50K was tested several times since 1990. XB-50K was tested up to the peak power of 18 MW with the pulse duration of 100 ns. A new 100 MW class klystron named XB-72K was designed and fabricated. Some test results of this klystron are reported. (Author) 9 refs., 3 figs., 2 tabs

  20. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and further rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning systems have the advantages of stability, small error and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location technology algorithms are summarized, deficiencies in the algorithms are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
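
    A sketch of the LANDMARC idea mentioned above: distances between the tracked tag and fixed reference tags are measured in signal-strength space, and the position estimate is a 1/E²-weighted average of the k nearest reference tags. The toy propagation model and layout below are illustrative only:

```python
import numpy as np

def landmarc(tag_rss, ref_rss, ref_pos, k=4):
    """LANDMARC-style location: compare the tracked tag's signal strengths
    (one per reader) with those of fixed reference tags, then average the
    positions of the k nearest reference tags, weighted by 1/E^2."""
    E = np.linalg.norm(ref_rss - tag_rss, axis=1)   # signal-space distances
    nearest = np.argsort(E)[:k]
    w = 1.0 / (E[nearest] ** 2 + 1e-12)
    w /= w.sum()
    return w @ ref_pos[nearest]

ref_pos = np.array([[x, y] for x in range(4) for y in range(4)], float)
readers = np.array([[0, 0], [3, 0], [0, 3], [3, 3]], float)
# toy RSS model: signal strength falls off with distance to each reader
rss = lambda p: -np.linalg.norm(readers - p, axis=1)
ref_rss = np.array([rss(p) for p in ref_pos])
print(landmarc(rss(np.array([1.2, 2.1])), ref_rss, ref_pos))  # near (1.2, 2.1)
```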

  1. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  2. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013 is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  3. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conducted a user experiment in which 12 participants were invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
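
    A naive (unoptimized) matcher capturing the multi-prefix criterion described above, each query word being a prefix of a distinct word of the term; the paper's actual implementation is optimized for response time, which this sketch is not:

```python
def multi_prefix_match(query, term):
    """True if every word of the query is a prefix of some distinct word
    of the term, e.g. 'opt ner me' matches 'optic nerve meningioma'."""
    words = term.lower().split()
    for q in query.lower().split():
        for i, w in enumerate(words):
            if w.startswith(q):
                del words[i]          # each term word is consumed once
                break
        else:
            return False              # no term word matched this query word
    return True

terms = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
print([t for t in terms if multi_prefix_match("opt ner me", t)])
# ['optic nerve meningioma']
```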

  4. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  5. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  6. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
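
    For flavour, here is one standard fast-geometry primitive from the list above: a planar convex hull via Andrew's monotone chain, O(n log n) after sorting. This is the textbook algorithm, not necessarily the specific variant developed in the thesis.

```python
def cross(o, a, b):
    # Cross product of vectors o->a and o->b; sign gives turn direction.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop once

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```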

  7. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from the finite resolution of sampled systems. Experimental control results are compared for both the previous implementation of EE-FXLMS and the genetic algorithm implementation, using the original secondary path model and a modified secondary path model.
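
    For orientation, the sketch below implements the plain filtered-x LMS loop that the EE-FXLMS work builds on: the reference signal is filtered through a model of the secondary path before being used in the weight update. The toy primary and secondary paths, step size, and filter length are illustrative assumptions; the eigenvalue-equalization step itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([0.5, 0.3])          # toy secondary path S(z) (assumed)
S_hat = S.copy()                  # its model (assumed perfect here)
L, mu = 8, 0.02
w = np.zeros(L)                   # adaptive FIR controller W(z)
x_hist = np.zeros(L + len(S))     # raw reference history
y_hist = np.zeros(len(S))         # controller output history
fx_hist = np.zeros(L)             # filtered-x regressor
for n in range(4000):
    x = np.sin(0.2 * np.pi * n) + 0.05 * rng.standard_normal()
    x_hist = np.roll(x_hist, 1); x_hist[0] = x
    d = 0.8 * x_hist[2]           # toy primary path: scaled, delayed x
    y = w @ x_hist[:L]            # control signal
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e = d - S @ y_hist            # residual error at the sensor
    fx = S_hat @ x_hist[:len(S)]  # reference filtered through S_hat
    fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx
    w += mu * e * fx_hist         # FXLMS weight update
print("residual error magnitude after adaptation:", round(abs(e), 4))
```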

  8. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than relying on intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, the factorization speed is much slower than that of Pollard’s rho algorithm.
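
    The deterministic baseline is compact enough to show in full. Below is a standard Pollard's rho with Floyd cycle detection; the toy modulus is an assumption for demonstration, and real RSA moduli are far beyond this method's practical reach.

```python
from math import gcd

def pollards_rho(n, c=1):
    # Floyd cycle detection on the iteration x -> x^2 + c (mod n).
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    f = lambda v: (v * v + c) % n
    while d == 1:
        x = f(x)
        y = f(f(y))
        d = gcd(abs(x - y), n)
    return d if d != n else pollards_rho(n, c + 1)  # retry on failure

n = 8051  # = 83 * 97, a toy "RSA-like" modulus (assumed for the demo)
p = pollards_rho(n)
print(p, n // p)  # the two prime factors (order may vary)
```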

  9. Experimental studies of VpxB electron linear accelerator

    International Nuclear Information System (INIS)

    Taura, T.; Onihashi, H.; Otsuka, K.; Nishida, Y.; Yugami, N.

    1989-01-01

    In order to demonstrate a new electron linear accelerator, an electron beam is accelerated either in the conventional linear accelerator scheme or in the VpxB scheme in the same machine, and a higher energy gain of about 18% is observed in the VpxB scheme, as expected from the design values. The experimental results are compared with the numerical simulation and show reasonable agreement. (author)

  10. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
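
    The opposition-based learning ingredient is simple to state: for a candidate x in [a, b], also evaluate its opposite a + b − x and keep the fitter points. A minimal sketch, assuming a sphere objective as the fitness; the fireworks-specific operators of OAFWA are not reproduced.

```python
import numpy as np

def obl_init(pop, lo, hi, f):
    """Keep the better half of a population and its opposite points."""
    opposite = lo + hi - pop                 # opposition-based candidates
    both = np.vstack([pop, opposite])
    fitness = np.apply_along_axis(f, 1, both)
    return both[np.argsort(fitness)[:len(pop)]]  # minimisation

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(10, 2))
# Sphere function as an assumed toy objective.
best = obl_init(pop, -5, 5, f=lambda v: np.sum(v ** 2))
print(best[0])  # the candidate closest to the optimum so far
```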

  11. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    The opposite degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. It is mainly based on the concept of opposite degree, combined with design ideas from neural networks, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms: the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.

  12. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, with the prediction read directly from the diagram without complex calculation.

  13. SU-E-T-547: A Method to Correlate Treatment Planning Issue with Clinical Analysis for Prostate Stereotactic Body Radiotherapy (SBRT)

    International Nuclear Information System (INIS)

    Li, K; Jung, E; Newton, J; Cornell, D; Able, A

    2014-01-01

    Purpose: In this study, the effects of algorithm choice and calculation settings, and their relative contributions, were evaluated for prostate Volumetric Modulated Arc Therapy (VMAT) based SBRT for clinical analysis. Methods: A low-risk prostate patient under SBRT was selected for the treatment planning evaluation. The treatment target was divided into a low-dose prescription target volume (PTV) and a high-dose PTV. Normal tissue constraints included the urethra and femoral heads, and the rectum was separated into anterior, lateral, and posterior parts. By varying the constraint limits, calculation settings, and algorithms, the effects on dose coverage and normal tissue dose constraint parameters were compared against the nominal prescription and constraints. For each setting, the percentage differences from the nominal values were summarised with the geometric mean and the harmonic mean. Results: In the arbitrary prostate SBRT case, 14 variables were selected for this evaluation using the nominal prescription and constraints. The six VMAT planning settings were: the anisotropic analytical algorithm with a stereotactic beam, with and without the couch structure, at grid sizes of 1 mm and 2 mm; a non-stereotactic beam; and the Acuros algorithm. The geometric means of the variable sets for these plans were 112.3%, 111.9%, 112.09%, 111.75%, 111.28%, and 112.05%, and the corresponding harmonic means were 2.02%, 2.16%, 3.15%, 4.74%, 5.47%, and 5.55%. Conclusions: In this study, the algorithm difference showed a relatively larger harmonic mean between prostate SBRT VMAT plans. This study provides a methodology for finding sensitive combined variables relevant to clinical analysis, and a similar approach could be applied to the whole treatment procedure, from simulation to treatment, for large-scale clinical data analysis.

  14. SU-E-T-547: A Method to Correlate Treatment Planning Issue with Clinical Analysis for Prostate Stereotactic Body Radiotherapy (SBRT)

    Energy Technology Data Exchange (ETDEWEB)

    Li, K; Jung, E; Newton, J [Associates In Medical Physics, Lanham, MD (United States); John R Marsh Cancer Center, Hagerstown, MD (United States); Cornell, D [John R Marsh Cancer Center, Hagerstown, MD (United States); Able, A [Associates In Medical Physics, Lanham, MD (United States)

    2014-06-01

    Purpose: In this study, the effects of algorithm choice and calculation settings, and their relative contributions, were evaluated for prostate Volumetric Modulated Arc Therapy (VMAT) based SBRT for clinical analysis. Methods: A low-risk prostate patient under SBRT was selected for the treatment planning evaluation. The treatment target was divided into a low-dose prescription target volume (PTV) and a high-dose PTV. Normal tissue constraints included the urethra and femoral heads, and the rectum was separated into anterior, lateral, and posterior parts. By varying the constraint limits, calculation settings, and algorithms, the effects on dose coverage and normal tissue dose constraint parameters were compared against the nominal prescription and constraints. For each setting, the percentage differences from the nominal values were summarised with the geometric mean and the harmonic mean. Results: In the arbitrary prostate SBRT case, 14 variables were selected for this evaluation using the nominal prescription and constraints. The six VMAT planning settings were: the anisotropic analytical algorithm with a stereotactic beam, with and without the couch structure, at grid sizes of 1 mm and 2 mm; a non-stereotactic beam; and the Acuros algorithm. The geometric means of the variable sets for these plans were 112.3%, 111.9%, 112.09%, 111.75%, 111.28%, and 112.05%, and the corresponding harmonic means were 2.02%, 2.16%, 3.15%, 4.74%, 5.47%, and 5.55%. Conclusions: In this study, the algorithm difference showed a relatively larger harmonic mean between prostate SBRT VMAT plans. This study provides a methodology for finding sensitive combined variables relevant to clinical analysis, and a similar approach could be applied to the whole treatment procedure, from simulation to treatment, for large-scale clinical data analysis.
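
    For clarity, the two summary statistics quoted in these records can be computed as below; the percentage values are hypothetical placeholders, not the study's data.

```python
import math

def geometric_mean(xs):
    # exp of the mean log; all inputs must be positive.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

percent_of_nominal = [112.0, 111.5, 113.2, 110.9]  # hypothetical values
print(round(geometric_mean(percent_of_nominal), 2))
print(round(harmonic_mean(percent_of_nominal), 2))
```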

  15. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
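
    Exponential forgetting is the simplest special case covered by such a general scheme. The sketch below is standard recursive least squares with a scalar forgetting factor, assuming a linear regression model with an abrupt parameter change; the selective (non-uniform) forgetting of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 2, 0.97                       # dimension and forgetting factor
theta_hat = np.zeros(d)
P = 1000.0 * np.eye(d)                 # large initial covariance
theta_true = np.array([1.0, -2.0])
for t in range(500):
    if t == 250:                       # abrupt change the forgetting tracks
        theta_true = np.array([3.0, 0.5])
    x = rng.standard_normal(d)
    y = theta_true @ x + 0.1 * rng.standard_normal()
    k = P @ x / (lam + x @ P @ x)          # RLS gain
    theta_hat += k * (y - theta_hat @ x)   # update the estimate
    P = (P - np.outer(k, x @ P)) / lam     # discount old information
print(np.round(theta_hat, 2))  # close to the post-change parameters
```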

  16. Sedative and mechanical hypoalgesic effects of butorphanol in xylazine-premedicated donkeys.

    Science.gov (United States)

    Lizarraga, I; Castillo-Alcala, F

    2015-05-01

    Combinations of α2-adrenoceptor and opioid agonists are commonly used in equids, but little scientific information is available on donkeys. To compare the sedative and hypoalgesic effects of xylazine alone or in combination with different dosages of butorphanol in donkeys. Placebo-controlled, operator-blinded, randomised, crossover, Latin square study. Six donkeys received intravenous normal saline and normal saline (NS-NS); xylazine (0.5 mg/kg bwt) and normal saline (X-NS); xylazine and 10 μg/kg bwt butorphanol (X-B10); xylazine and 20 μg/kg bwt butorphanol (X-B20); xylazine and 30 μg/kg bwt butorphanol (X-B30); and xylazine and 40 μg/kg bwt butorphanol (X-B40). Sedation scores (SS), head height above ground (HHAG) and mechanical nociceptive thresholds (MNT) were assessed before and for 120 min after treatment. Areas under the curve (AUC) values for 0-30, 30-60 and 60-120 min were computed for SS, HHAG and MNT. As appropriate, differences between treatments were analysed using the Friedman test followed by Dunn's test and a repeated measures one-way analysis of variance followed by Tukey's test; significance was set at P < 0.05. … donkeys undergoing certain clinical procedures. © 2014 EVJ Ltd.

  17. Diversity, Biocontrol, and Plant Growth Promoting Abilities of Xylem Residing Bacteria from Solanaceous Crops

    Directory of Open Access Journals (Sweden)

    Gauri A. Achari

    2014-01-01

    Eggplant (Solanum melongena L.) is one of the solanaceous crops of economic and cultural importance and is widely cultivated in the state of Goa, India. Eggplant cultivation is severely affected by bacterial wilt (BW) caused by Ralstonia solanacearum, which colonizes the xylem tissue. In this study, 167 bacteria were isolated from the xylem of healthy eggplant, chilli, and Solanum torvum Sw. by vacuum infiltration and maceration. Amplified rDNA restriction analysis (ARDRA) grouped these xylem residing bacteria (XRB) into 38 haplotypes. Twenty-eight strains inhibited growth of R. solanacearum and produced volatile and diffusible antagonistic compounds and plant growth promoting substances in vitro. Antagonistic strains XB86, XB169, XB177, and XB200 recorded a biocontrol efficacy greater than 85% against BW and exhibited a 12%–22% increase in shoot length in eggplant in the greenhouse screening. 16S rRNA based identification revealed the presence of 23 different bacterial genera. XRB with high biocontrol and plant growth promoting activities were identified as strains of Staphylococcus sp., Bacillus sp., Streptomyces sp., Enterobacter sp., and Agrobacterium sp. This study is the first report on the identity of bacteria from the xylem of solanaceous crops having traits useful in the cultivation of eggplant.

  18. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An attempt is made to observe defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  19. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    A method using the real-coded quantum-inspired genetic algorithm (RQGA) to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm fall easily into locally optimal values during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. RQGA is therefore introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to solutions that satisfy the constraint conditions.
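
    The weights-as-genome encoding at the heart of such hybrids is easy to sketch. The toy below evolves the 16 weights of a small 2-4-1 network with truncation selection and Gaussian mutation only; the quantum-inspired operators and the variable learning rate of the actual RQGA method are not reproduced, and the dataset is an assumed toy task.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # toy XOR-like target

def mse(genome):
    # Decode the flat genome into a 2-4-1 network and score it.
    W1, b1, W2 = genome[:8].reshape(2, 4), genome[8:12], genome[12:16]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2)))
    return np.mean((out - y) ** 2)

pop = rng.normal(0, 1, (40, 16))
for gen in range(200):
    fit = np.array([mse(g) for g in pop])
    parents = pop[np.argsort(fit)[:10]]      # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 16))
    pop = np.vstack([parents, children])     # elitism plus mutation
print("best MSE:", round(min(mse(g) for g in pop), 3))
```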

  20. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster (MCL) algorithm is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weights) and directed.
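
    The algebraic process alternates two operations on a column-stochastic matrix: expansion (matrix squaring) and inflation (elementwise powering followed by column renormalisation). Below is a minimal numpy sketch on a six-node graph made of two triangles joined by one edge; the inflation parameter, iteration count, and threshold are illustrative choices.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
M = A + np.eye(6)                 # add self-loops
M /= M.sum(axis=0)                # column-stochastic transition matrix
for _ in range(30):
    M = M @ M                     # expansion: flow along longer paths
    M = M ** 2                    # inflation (r = 2): favour strong flow
    M /= M.sum(axis=0)
groups = {}
for node in range(6):             # nodes sharing column support cluster
    key = tuple(np.nonzero(M[:, node] > 1e-6)[0])
    groups.setdefault(key, []).append(node)
print(list(groups.values()))      # expected: [[0, 1, 2], [3, 4, 5]]
```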

  1. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  2. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, namely the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks are realized not only in the potential-energy space but also in the volume space. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of the potential energy, so that the effort of determining a multicanonical weight factor can be concentrated on the important energy terms alone. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)
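
    Schematically, the multibaric-multithermal walk replaces the usual isothermal-isobaric exponent with a weight W(E, V), as in the hedged sketch below. In practice W is determined iteratively from trial runs, which is not reproduced here; the numbers are placeholders.

```python
import math, random

def accept(E_old, V_old, E_new, V_new, W):
    """Metropolis criterion with a multibaric-multithermal weight W."""
    delta = W(E_new, V_new) - W(E_old, V_old)
    return delta <= 0 or random.random() < math.exp(-delta)

# With W(E, V) = beta * (E + P * V) this reduces to the ordinary
# isothermal-isobaric ensemble; the flat-histogram W generalises it.
beta, P = 1.0, 0.5
W = lambda E, V: beta * (E + P * V)
print(accept(E_old=1.0, V_old=1.0, E_new=1.2, V_new=0.9, W=W))
```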

  3. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  4. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  5. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results.
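
    The distance-to-average-point measure that drives the switching is one line of numpy. In the sketch below, the normalisation by the search-space diagonal follows the usual formulation, while the switching threshold is an illustrative assumption, not the published DGEA setting.

```python
import numpy as np

def diversity(pop, diag_len):
    """Mean distance to the population centroid, normalised by the
    diagonal length of the search space."""
    centroid = pop.mean(axis=0)
    return np.mean(np.linalg.norm(pop - centroid, axis=1)) / diag_len

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (50, 3))
diag = np.linalg.norm([10.0, 10.0, 10.0])   # diagonal of [-5, 5]^3
d = diversity(pop, diag)
mode = "explore (mutate)" if d < 0.1 else "exploit (recombine)"
print(round(d, 3), mode)
```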

  6. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  7. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. It is evident from the early invented quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation, the Bernstein-Vazirani algorithm, the Simon algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Quantum database search of Grover achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
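
    The quadratic speedup is easy to exhibit with a dense linear-algebra simulation of Grover iterations (oracle phase flip plus inversion about the mean): about (π/4)√N oracle calls drive the target amplitude close to one. The database size and target index below are assumptions for the demo.

```python
import numpy as np

n, target = 10, 613                  # 2^10 = 1024 database entries
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))   # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~25 oracle calls
for _ in range(iterations):
    state[target] *= -1              # oracle: phase-flip the target
    state = 2 * state.mean() - state # diffusion: inversion about mean
print(iterations, float(state[target] ** 2))  # success probability ~ 1
```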

  8. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  9. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
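
    The naming mirrors differential-evolution mutation strategies, so a "DS/rand/1"-style donor vector can be sketched as below; the scale factor and the stochastic details of the published CDS are simplified assumptions.

```python
import numpy as np

def ds_rand_1(pop, F, rng):
    """For each individual, combine three mutually distinct random
    members: donor = x_r1 + F * (x_r2 - x_r3)."""
    n = len(pop)
    donors = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        donors[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return donors

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (20, 4))
donors = ds_rand_1(pop, F=0.8, rng=rng)
print(donors.shape)  # (20, 4)
```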

  10. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and the currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First of all, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.

  11. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as...

  12. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications, and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Applications.

  13. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes for...

  14. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  15. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning unique bit codes to exact-repeat and reverse-repeat fragments of a DNA sequence is also a concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
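
    The baseline intuition, that the four bases fit in two bits each, is shown below; the specific bit codes and the repeat-handling of DNABIT Compress are not reproduced, and the code table here is an arbitrary assumption.

```python
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes, four bases per byte."""
    out, acc, nbits = bytearray(), 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            out.append(acc); acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))   # pad the final byte
    return bytes(out), len(seq)

def unpack(data, n):
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return ''.join(bases[:n])            # drop padding symbols

packed, n = pack("ACGTACGTGG")
assert unpack(packed, n) == "ACGTACGTGG"
print(len(packed), "bytes for", n, "bases")  # 3 bytes for 10 bases
```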

  16. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.

  17. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that encodes the operation of the facility. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure corresponds to the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. Normally they are defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive; the algorithms it can represent essentially cover the class of all arbitrary algorithms. Existing modeling-automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their capability, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring is based on the analysis of gaps and deadlines in the graphs, with no analysis predicting the execution of a schedule or schedules. The library described here is designed to build such predictive models: specifying the source data yields a set of projections, from which one is chosen and taken as a new plan.

  18. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  19. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily get stuck in a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that the proposed algorithm outperforms the other algorithms mentioned in the paper.
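
    The incremental idea behind the global k-means family can be sketched compactly: add one centre at a time, seeding the new centre at a promising data point. In the sketch below, the subsampling of candidate seeds is a shortcut, the data are synthetic, and the MinMax weighting of the proposed variant is omitted.

```python
import numpy as np

def kmeans(X, centers, iters=30):
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(len(centers))])
    labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
    return centers, ((X - centers[labels]) ** 2).sum()

def global_kmeans(X, k):
    centers = X.mean(0, keepdims=True)          # the 1-means solution
    for _ in range(2, k + 1):
        best = None
        for x in X[:: max(1, len(X) // 20)]:    # candidate seeds (subsampled)
            c, err = kmeans(X, np.vstack([centers, x]))
            if best is None or err < best[1]:
                best = (c, err)
        centers = best[0]                       # keep the best k-center run
    return centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in ((0, 0), (3, 3), (0, 3))])
print(np.round(global_kmeans(X, 3), 1))  # centres near the three blobs
```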

  1. Clinical implementation of AXB from AAA for breast: Plan quality and subvolume analysis.

    Science.gov (United States)

    Guebert, Alexandra; Conroy, Leigh; Weppler, Sarah; Alghamdi, Majed; Conway, Jessica; Harper, Lindsay; Phan, Tien; Olivotto, Ivo A; Smith, Wendy L; Quirk, Sarah

    2018-04-25

    Two dose calculation algorithms are available in Varian Eclipse software: the Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB). Many Varian Eclipse-based centers have access to AXB; however, a thorough understanding of how it will affect plan characteristics and, subsequently, clinical practice is necessary prior to implementation. We characterized the difference in breast plan quality between AXB and AAA for dissemination to clinicians during implementation. Locoregional irradiation plans were created with AAA for 30 breast cancer patients, with a prescription dose of 50 Gy to the breast and 45 Gy to the regional nodes, in 25 fractions. The internal mammary chain nodes (IMC CTV) were covered by 80% of the breast dose. AXB, with both dose-to-water and dose-to-medium reporting, was used to recalculate plans while maintaining constant monitor units. Target coverage and organ-at-risk doses were compared between the two algorithms using dose-volume parameters. An analysis to assess location-specific changes was performed by dividing the breast into nine subvolumes in the superior-inferior and left-right directions. There were minimal differences found between the AXB and AAA calculated plans. The median difference between AXB and AAA for breast CTV V95% was … The change from AAA for breast radiotherapy is not expected to result in changes in clinical practice for prescribing or planning breast radiotherapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  2. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  3. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on the theory of Markov algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which automatically structures semantic data obtained from a set of texts, in order to be able to answer any question on the content of these texts.

  4. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
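
    As a taste of the family surveyed, the sequential greedy algorithm below colours a graph with at most Δ + 1 colours, where Δ is the maximum degree; the example graph is an assumption for the demo.

```python
def greedy_colouring(adj):
    """adj: dict mapping each vertex to an iterable of neighbours."""
    colour = {}
    for v in adj:                       # the visiting order affects quality
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_colouring(cycle5))  # an odd cycle needs 3 colours
```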

  5. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  6. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.

  7. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  8. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  9. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
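
    For contrast with the alpha-shape approach, the classical combined filter the abstract refers to (a morphological closing, i.e., dilation followed by erosion with a flat structuring element) can be applied to a profile with scipy; the synthetic profile and element size below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import grey_closing

rng = np.random.default_rng(0)
# A synthetic roughness profile: a waviness term plus noise.
z = np.sin(np.linspace(0, 4 * np.pi, 400)) + 0.02 * rng.standard_normal(400)
upper = grey_closing(z, size=25)   # upper envelope via morphological closing
print(bool(np.all(upper >= z)))    # closing never cuts below the profile
```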

  10. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\\hat{\\mathcal{D}}$

    OpenAIRE

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.

  11. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm optimization

  12. Study of the effective inverse photon efficiency using optical emission spectroscopy combined with cavity ring-down spectroscopy approach

    Science.gov (United States)

    Wu, Xingwei; Li, Cong; Wang, Yong; Wang, Zhiwei; Feng, Chunlei; Ding, Hongbin

    2015-09-01

    The formation of hydrocarbon impurities is inevitable due to wall erosion in long-pulse, high-performance scenarios with carbon-based plasma-facing materials in fusion devices. The standard procedure to determine the chemical erosion yield in situ is by means of the inverse photon efficiency D/XB. In this work, the conversion factor between the CH4 flux and the photon flux of the CH A → X transition (the effective inverse photon efficiency PE^-1) was measured directly using a cascaded arc plasma simulator with argon/methane. This study shows that the measured PE^-1 differs from the calculated D/XB. We compared the photon flux measured by optical emission spectroscopy (OES) with that calculated from electron impact excitation of CH(X), which was diagnosed by cavity ring-down spectroscopy (CRDS). It seems that charge exchange and dissociative recombination processes are the main channels of CH(A) production and removal, which leads to the inconsistency of PE^-1 and D/XB at lower temperature. Meanwhile, the fraction of excited CH(A) produced by dissociative recombination processes was investigated, and we found that it increased with Te, in the range from 4% to 13%, at Te … the PE^-1 definition is suggested instead of D/XB, since electron impact excitation is not the only channel of CH(A) production. These results have an effect on evaluating the yield of chemical erosion in the divertor of a fusion device.

  13. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding the treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time … 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  14. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear...

  15. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► A robustness test of the ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi-optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar quality solutions.

  16. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  17. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  18. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  19. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in star identification using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  20. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  1. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.

  2. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.

  3. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  4. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  5. Chinese handwriting recognition an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  6. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  7. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for a train detection algorithm that copes with increased rail noise, as when railway lines run close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  8. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  9. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum-energy states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
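
    The core move of the replica-exchange method described above is the Metropolis test for swapping configurations between two replicas at neighboring temperatures. The sketch below shows this standard textbook acceptance criterion; it is not code from the review, and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def attempt_swap(E, beta, i, j):
    """Standard replica-exchange acceptance test: swap replicas i and j with
    probability min(1, exp(delta)), delta = (beta_i - beta_j) * (E_i - E_j),
    where E holds current potential energies and beta inverse temperatures."""
    delta = (beta[i] - beta[j]) * (E[i] - E[j])
    return delta >= 0.0 or rng.random() < np.exp(delta)

# Example: a cold replica (beta=1.0, E=-50) and a hot one (beta=0.5, E=-45).
print(attempt_swap(E=[-50.0, -45.0], beta=[1.0, 0.5], i=0, j=1))
```

    Accepted swaps let conformations trapped at low temperature escape through the high-temperature replicas, which is what produces the random walk in potential energy space mentioned above.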

  10. Economic dispatch using chaotic bat algorithm

    International Nuclear Information System (INIS)

    Adarsh, B.R.; Raghunathan, T.; Jayabarathi, T.; Yang, Xin-She

    2016-01-01

    This paper presents the application of a new metaheuristic optimization algorithm, the chaotic bat algorithm, for solving the economic dispatch problem involving a number of equality and inequality constraints such as power balance, prohibited operating zones and ramp rate limits. Transmission losses and multiple fuel options are also considered for some problems. The chaotic bat algorithm, a variant of the basic bat algorithm, is obtained by incorporating chaotic sequences to enhance its performance. Five different example problems comprising 6, 13, 20, 40 and 160 generating units are solved to demonstrate the effectiveness of the algorithm. The algorithm requires little tuning by the user, and the results obtained show that it either outperforms or compares favorably with several existing techniques reported in the literature. - Highlights: • The chaotic bat algorithm, a new metaheuristic optimization algorithm, has been used. • The problem solved – the economic dispatch problem – is nonlinear and discontinuous. • It has a number of equality and inequality constraints. • The algorithm has been demonstrated to be applicable to high dimensional problems.
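
    As a flavor of how a chaotic sequence can be wired into the bat algorithm, here is a minimal sketch in which a logistic map replaces the uniform random draw for the pulse frequency. This illustrates the "chaotic variant" idea only; the paper's exact update rules, loudness/pulse-rate schedules, and the dispatch constraints (power balance, prohibited zones, ramp limits) are not reproduced.

```python
import numpy as np

def logistic_map(c):
    # A common chaotic sequence generator on (0, 1).
    return 4.0 * c * (1.0 - c)

def chaotic_bat(f, lb, ub, n_bats=30, n_iter=200, fmin=0.0, fmax=2.0):
    rng = np.random.default_rng(1)
    x = rng.uniform(lb, ub, (n_bats, len(lb)))      # bat positions
    v = np.zeros_like(x)                            # bat velocities
    fit = np.array([f(xi) for xi in x])
    b = int(np.argmin(fit))
    best, best_fit = x[b].copy(), fit[b]
    c = 0.7                                         # chaotic state
    for _ in range(n_iter):
        for i in range(n_bats):
            c = logistic_map(c)
            freq = fmin + (fmax - fmin) * c         # chaotic frequency draw
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lb, ub)
            fc = f(cand)
            if fc < fit[i]:
                x[i], fit[i] = cand, fc
                if fc < best_fit:
                    best, best_fit = cand.copy(), fc
    return best, best_fit

# Toy run: minimize the 5-dimensional sphere function.
best, val = chaotic_bat(lambda z: float(z @ z),
                        lb=np.full(5, -5.0), ub=np.full(5, 5.0))
print(val)
```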

  11. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  12. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm from multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
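
    The consensus building block the paper relies on can be sketched in a few lines: each node repeatedly nudges its local estimate toward its neighbors' estimates, so all nodes converge to the network-wide average without any fusion center. This is a generic consensus-averaging sketch under the paper's strong-connectivity assumption, not the authors' full clustering algorithm.

```python
import numpy as np

def consensus_average(values, neighbors, n_rounds=200, eps=0.2):
    """Each node i repeatedly updates x_i += eps * sum_j (x_j - x_i) over
    its neighbors j. For eps below 1/(max degree), all local estimates
    converge to the common average of the initial values."""
    x = np.asarray(values, dtype=float)
    for _ in range(n_rounds):
        x = x + eps * np.array([sum(x[j] - x[i] for j in nbrs)
                                for i, nbrs in enumerate(neighbors)])
    return x

# Ring of four nodes: every local estimate converges to the mean 2.5.
print(consensus_average([1.0, 2.0, 3.0, 4.0],
                        neighbors=[[1, 3], [0, 2], [1, 3], [0, 2]]))
```

    A distributed k-means step can then, roughly, apply such consensus iterations to the per-node sums and counts that define each centroid.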

  13. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  14. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  15. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  16. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate for the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  17. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  18. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  19. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  20. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
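
    The MCL process itself is compact enough to sketch: it alternates expansion (a matrix power, spreading flow) with inflation (an entrywise power followed by column renormalization, strengthening flow) until the flow matrix stabilizes. This is a generic MCL sketch, not van Dongen's implementation.

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, n_iter=30):
    """Markov Cluster process on an adjacency matrix (nonnegative weights).
    Rows with nonzero entries in the limit matrix act as cluster attractors."""
    m = np.array(adj, dtype=float)
    np.fill_diagonal(m, 1.0)            # add self-loops (common in practice)
    m /= m.sum(axis=0)                  # make columns stochastic
    for _ in range(n_iter):
        m = np.linalg.matrix_power(m, expansion)   # expansion step
        m = m ** inflation                         # inflation step
        m /= m.sum(axis=0)                         # renormalize columns
    return m

adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
print(np.round(mcl(adj), 2))
```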

  1. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication business and communication requirements of the intelligent grid. Two kinds of typical routing algorithm are analyzed, namely clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages, and applicability of each are discussed.

  2. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
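
    A univariate sketch of the procedure described above: estimate the distribution's parameters by maximum likelihood, simulate a very large sample from the fitted distribution, and run ordinary k-means (Lloyd's iterations) on that sample; the resulting cluster means estimate the k principal points. The normal model and the quantile-based initialization are illustrative choices.

```python
import numpy as np

def parametric_kmeans(data, k, n_sim=100_000, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.mean(data), np.std(data)            # normal MLEs
    sample = rng.normal(mu, sigma, n_sim)              # large simulated sample
    centers = np.quantile(sample, (np.arange(k) + 0.5) / k)  # spread-out start
    for _ in range(n_iter):                            # plain Lloyd iterations
        labels = np.argmin(np.abs(sample[:, None] - centers[None, :]), axis=1)
        centers = np.array([sample[labels == j].mean() for j in range(k)])
    return np.sort(centers)

# For a normal distribution the two principal points sit at mu -/+ sigma*sqrt(2/pi).
data = np.random.default_rng(1).normal(10.0, 2.0, 500)
print(parametric_kmeans(data, k=2))   # close to 10 -/+ 2 * 0.798
```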

  3. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923
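
    The bit-assignment idea can be illustrated with the simplest possible scheme: packing each base into 2 bits (0.25 bytes/base instead of 1 byte/base in plain text). This toy sketch is not the DNABIT Compress codebook, which additionally assigns special bit codes to exact and reverse repeat fragments to get below 2 bits/base.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str):
    """Pack a DNA string into bytes at 2 bits/base; returns (length, blob)."""
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    n_pad = (-len(seq)) % 4              # pad to a whole number of bytes
    bits <<= 2 * n_pad
    return len(seq), bits.to_bytes((len(seq) + n_pad) // 4, "big")

def unpack(n: int, blob: bytes) -> str:
    bits = int.from_bytes(blob, "big") >> (2 * ((-n) % 4))
    out = []
    for _ in range(n):
        out.append(BASE[bits & 0b11])
        bits >>= 2
    return "".join(reversed(out))

n, blob = pack("ACGTTGCA")
assert unpack(n, blob) == "ACGTTGCA"     # 8 bases stored in 2 bytes
```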

  4. Some software algorithms for microprocessor ratemeters

    International Nuclear Information System (INIS)

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two established ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.)

  5. Some software algorithms for microprocessor ratemeters

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two established ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.).
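
    Of the three algorithms, the weighted moving average is easy to sketch in software: estimate the count rate from the last N sampling intervals with weights that favor recent intervals. The linear weighting and parameter values below are illustrative, not the paper's exact design.

```python
from collections import deque

class WMARatemeter:
    """Weighted-moving-average ratemeter sketch: rate estimate from the
    last n_intervals counting intervals of width dt seconds, with linearly
    increasing weight toward the newest interval."""

    def __init__(self, n_intervals=16, dt=0.1):
        self.dt = dt
        self.counts = deque(maxlen=n_intervals)
        self.weights = list(range(1, n_intervals + 1))   # oldest -> newest

    def update(self, counts_in_interval: int) -> float:
        self.counts.append(counts_in_interval)
        w = self.weights[-len(self.counts):]             # align with buffer
        return sum(wi * ci for wi, ci in zip(w, self.counts)) / (sum(w) * self.dt)

rm = WMARatemeter()
for c in [10, 11, 9, 10, 40, 41, 39, 40]:   # source step: ~100 cps to ~400 cps
    print(round(rm.update(c), 1))            # the estimate tracks the step
```

    Unlike a quasi-exponential response, a finite moving window forgets old data completely after n_intervals sampling intervals.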

  6. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth-order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher-order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size-independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  7. Proceedings of the Army Numerical Analysis Conference (11th) Held at Frankford Arsenal, Philadelphia, Pa., on 13-14 February 1974

    Science.gov (United States)

    1974-12-01

    [Garbled scan of the proceedings text; the only recoverable fragments are citations to the World Conference on Earthquake Engineering, Santiago, Chile, January 13-18, 1969; English, G. W., and Adams, P. F., "Experiments on Laterally..."; and Wakabayashi, M., "Frames Under Strong Impulsive, Wind or Seismic Loading", Proceedings of the International...]

  8. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  9. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  10. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  11. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  12. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  13. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
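
    Activity selection, one of the examples named above, is the textbook case where the dominance relation is easy to state: among the remaining compatible activities, the one finishing earliest dominates all others, since choosing it can never exclude more future activities. A minimal sketch:

```python
def select_activities(activities):
    """Classic greedy activity selection: sort by finish time, then keep
    each activity that starts no earlier than the last selected finish."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
# -> [(1, 4), (5, 7)]
```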

  14. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
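
    A minimal sketch of the block-wise Chebyshev fit, using NumPy's Chebyshev routines (the flight algorithm's exact degree selection and coefficient quantization are not reproduced): each fitting interval is mapped to [-1, 1] and only the series coefficients are kept.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree=8):
    """Fit a Chebyshev series to one fitting interval; transmitting the
    degree+1 coefficients instead of the raw samples is the compression."""
    t = np.linspace(-1.0, 1.0, len(block))
    return C.chebfit(t, block, degree)

def decompress_block(coeffs, n_samples):
    t = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(t, coeffs)

# One 256-sample block reduced to 9 coefficients (~28x fewer numbers).
x = np.linspace(0.0, 1.0, 256)
block = np.sin(2 * np.pi * x) + 0.1 * x
recon = decompress_block(compress_block(block), 256)
print(f"max reconstruction error: {np.max(np.abs(block - recon)):.1e}")
```

    The near-uniform error of such fits over the interval is the "equal error property" the abstract appeals to.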

  15. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  16. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
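
    The alpha-trimmed mean backbone mentioned for the first algorithm is straightforward to sketch: in each window, sort the pixels, drop a fraction alpha from both ends, and average the remainder, blending mean-filter smoothing with median-filter robustness. How the paper turns this into a sharpening operator is not reproduced here.

```python
import numpy as np

def alpha_trimmed_mean(img, ksize=3, alpha=0.25):
    """Alpha-trimmed mean filter: alpha=0 gives the mean filter; as
    alpha -> 0.5 the result approaches the median filter."""
    pad = ksize // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    h, w = img.shape
    trim = int(alpha * ksize * ksize)
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            window = np.sort(padded[y:y + ksize, x:x + ksize], axis=None)
            out[y, x] = window[trim:window.size - trim].mean()
    return out

noisy = np.random.default_rng(0).normal(100.0, 15.0, (64, 64))
print(alpha_trimmed_mean(noisy).std() < noisy.std())   # smoothing reduces spread
```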

  17. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents echo cancellers based on the wavelet-LMS algorithm. The performance of the least mean square (LMS) algorithm in the wavelet transform domain is observed and its application to echo cancellation is analyzed. The Widrow-Hoff least mean square algorithm is the most widely used algorithm for the adaptive filters that function as echo cancellers. Present-day communication signals are largely non-stationary, and some errors crop up when the LMS algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise in how well transitions or discontinuities can be located. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the least mean square algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower mean square error (MSE).
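
    The Widrow-Hoff LMS update at the heart of such cancellers fits in a few lines. The sketch below is the plain time-domain version (the paper's wavelet variant would adapt wavelet coefficients instead); signal names and parameter values are illustrative.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, n_taps=64, mu=0.01):
    """Adaptive FIR echo canceller: learn the echo path from the far-end
    signal and subtract the echo estimate from the microphone signal.
    The error output e[k] is the echo-free (near-end) signal."""
    w = np.zeros(n_taps)                     # adaptive weights
    buf = np.zeros(n_taps)                   # recent far-end samples
    e = np.zeros(len(mic))
    for k in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[k]
        e[k] = mic[k] - w @ buf              # cancel the estimated echo
        w += mu * e[k] * buf                 # Widrow-Hoff LMS update
    return e, w

# Synthetic check: the microphone hears the far end through a short FIR path.
rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)
d = np.convolve(x, [0.5, 0.3, -0.2, 0.1])[: len(x)]
e, w = lms_echo_canceller(x, d)
print(f"residual echo power: {np.mean(e[-1000:] ** 2):.1e}")
```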

  18. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

    The fireworks algorithm (FWA) is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the FWA, this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims at further boosting performance and achieving global optimization, mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely disruptive selection, is proposed to reduce the running time of the algorithm compared with the FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.

  19. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.

  20. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce amount of computation, speed up convergence and restrain premature phenomena of quantum evolutionary algorithm. The proposed algorithm adopts the chaotic initialization method to generate initial population which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfied....

  1. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators in adaptive optics systems increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gives a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) ∼ O(n^3) for the direct gradient wavefront control algorithm, while the computational complexity estimate for the iterative wavefront control algorithm is about O(n) ∼ O(n^{3/2}), where n is the number of actuators of the AO system. And the more numerous the sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. (paper)
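
    The contrast between the two approaches can be illustrated with a generic least-squares solver: the direct method multiplies the slope vector by a pre-computed (pseudo-)inverse matrix, while an iterative method refines the voltages by residual-driven updates and never forms the inverse. The gradient iteration below is a stand-in for the paper's specific iterative scheme; the savings it suggests become real when the response matrix is sparse or structured.

```python
import numpy as np

def direct_control(R_pinv, slopes):
    # Direct gradient method: one pre-measured pseudo-inverse,
    # one dense matrix-vector product per correction.
    return R_pinv @ slopes

def iterative_control(R, slopes, n_iter=2000):
    """Solve R v = s in the least-squares sense by gradient iterations:
    v <- v + mu * R^T (s - R v), with a step size small enough to
    guarantee convergence (mu < 2 / sigma_max(R)^2)."""
    v = np.zeros(R.shape[1])
    mu = 1.0 / np.linalg.norm(R, 2) ** 2
    for _ in range(n_iter):
        v += mu * (R.T @ (slopes - R @ v))
    return v

# Toy check: recover actuator voltages from simulated slopes.
rng = np.random.default_rng(2)
R = rng.standard_normal((120, 60))       # slope-response matrix
v_true = rng.standard_normal(60)
print(np.allclose(iterative_control(R, R @ v_true), v_true, atol=1e-6))
```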

  2. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    The intercarrier interference (ICI) problem of cognitive radio (CR) is severe. In this paper, a machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU). Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, a parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirements of CR. Simulation results show that the data transmission rate threshold of the un-LU can be set, the data transmission quality of the un-LU can be ensured, the ICI of a licensed user (LU) is suppressed, and the bit error rate (BER) performance of the LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure, and the communication performance of CR is enhanced.

  3. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  4. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  5. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  6. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on the collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...

  7. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  8. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time, hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  9. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  10. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  11. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: fast-kT, SISCone, Midpoint, and Iterative Cone. We present several studies comparing the performance of these algorithms using QCD dijet and ttbar Monte Carlo samples. We specifically observe that the SISCone algorithm performs equal to or better than the Midpoint algorithm in all presented studies and propose that SISCone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared and collinear safety to all orders of perturbative QCD. We furthermore encourage the use of the fast-kT algorithm, which is found to perform as well as any other algorithm under study, features dramatically reduc...

  12. Quantum-circuit model of Hamiltonian search algorithms

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm
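
    The common rotation picture can be stated in one formula (standard textbook material for Grover's algorithm with a single marked item among N, not taken from the paper itself):

```latex
\sin\theta = \frac{1}{\sqrt{N}}, \qquad
|\psi_k\rangle = \sin\bigl((2k+1)\theta\bigr)\,|\mathrm{target}\rangle
               + \cos\bigl((2k+1)\theta\bigr)\,|\mathrm{rest}\rangle,
\qquad k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{N}.
```

    Each iteration (or, in the Hamiltonian versions, each equal interval of evolution time) advances the angle by the same amount 2θ, which is the constant angular velocity referred to above; the quadratic speedup follows from k_opt ∝ √N.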

  13. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  14. Case report

    African Journals Online (AJOL)

    abp

    21 Sept. 2017 ... XA had a visual acuity (VA) of 8/10 in both eyes, while XB had a VA of 3/10, improved to 7/10 with a pinhole, in favour of a refractive error. XA presented bilateral ptosis dating back to the age of 2 years, with levator palpebrae superioris (RPS) function of 7 mm, and XB bilateral ptosis dating back to the age of ...

  15. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article. Resonance – Journal of Science Education, Volume 2, Issue 8. Author Affiliations: R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  17. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search over this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures, with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.

  18. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For the solution of the WCP we suggest the application of interval arithmetic and also alternative methods. For the solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.

  19. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
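
    As a taste of the polynomial-approximation methods covered in Part I, here is a minimal Python sketch that evaluates exp(x) from a truncated Taylor polynomial using Horner's scheme. A production implementation, as the book explains, would instead use a minimax polynomial together with range reduction; the degree and interval here are illustrative:

    ```python
    import math

    # Degree-8 truncated Taylor polynomial for exp(x); real libraries use a
    # minimax (e.g., Remez-derived) polynomial plus range reduction.
    COEFFS = [1.0 / math.factorial(k) for k in range(9)]  # c0 .. c8

    def horner(coeffs, x):
        """Evaluate sum(c_k * x**k) with Horner's scheme: one multiply-add per term."""
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    x = 0.3
    print(horner(COEFFS, x), math.exp(x))  # the two values agree closely on a small interval
    ```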

  20. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    We study this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.
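
    The following Python sketch illustrates the resource-augmentation idea in a much-simplified greedy form (it is not the authors' algorithm): it maintains at most 2k internal points and, when over budget, discards the point whose removal introduces the least local error:

    ```python
    import math

    def point_segment_dist(p, a, b):
        """Euclidean distance from point p to segment ab."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def simplify_stream(points, k):
        """Greedy streaming simplification keeping at most 2k internal points."""
        kept = []
        for p in points:
            kept.append(p)
            if len(kept) > 2 * k + 2:  # two endpoints plus 2k internal points
                # drop the internal point whose removal adds the least error
                i = min(range(1, len(kept) - 1),
                        key=lambda j: point_segment_dist(kept[j], kept[j - 1], kept[j + 1]))
                kept.pop(i)
        return kept

    pts = [(x, (x % 5) * 0.1) for x in range(50)]
    print(len(simplify_stream(pts, k=4)))  # at most 2*4 + 2 points kept
    ```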

  1. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  2. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each method has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined, so a multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected, and the differing degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.

  3. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^{4/3}.
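
    For readers unfamiliar with the updating schemes being compared, a minimal Metropolis accept/reject sweep for a toy one-dimensional bosonic lattice (the coupling and action below are illustrative placeholders, not a gauge theory) looks like this:

    ```python
    import math
    import random

    def metropolis_sweep(phi, beta=1.0, step=0.5):
        """One Metropolis sweep for a 1-D periodic lattice scalar field with
        nearest-neighbour action S = beta * sum((phi[i+1] - phi[i])**2)."""
        n = len(phi)
        for i in range(n):
            old = phi[i]
            new = old + random.uniform(-step, step)
            left, right = phi[(i - 1) % n], phi[(i + 1) % n]
            d_action = beta * (((new - left) ** 2 + (new - right) ** 2)
                               - ((old - left) ** 2 + (old - right) ** 2))
            # accept downhill moves always, uphill moves with probability exp(-dS)
            if d_action <= 0 or random.random() < math.exp(-d_action):
                phi[i] = new
        return phi

    phi = [0.0] * 32
    for _ in range(100):
        metropolis_sweep(phi)
    print(sum(p * p for p in phi) / len(phi))  # crude <phi^2> estimate
    ```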

  4. Concentration, ozone formation potential and source analysis of volatile organic compounds (VOCs) in a thermal power station centralized area: A study in Shuozhou, China.

    Science.gov (United States)

    Yan, Yulong; Peng, Lin; Li, Rumei; Li, Yinghui; Li, Lijuan; Bai, Huiling

    2017-04-01

    Volatile organic compounds (VOCs) from two sampling sites (HB and XB) in a power station centralized area, in Shuozhou city, China, were sampled by stainless steel canisters and measured by gas chromatography-mass selective detection/flame ionization detection (GC-MSD/FID) in the spring and autumn of 2014. The concentration of VOCs was higher in the autumn (HB, 96.87 μg/m³; XB, 58.94 μg/m³) than in the spring (HB, 41.49 μg/m³; XB, 43.46 μg/m³), as lower wind speed in the autumn could lead to pollutant accumulation, especially at HB, which is a new urban area surrounded by residential areas and a transportation hub. Alkanes were the dominant group at both HB and XB in both sampling periods, but the contribution of aromatic pollutants at HB in the autumn was much higher than that of the other alkanes (11.16-19.55%). Compared to other cities, BTEX pollution in Shuozhou was among the lowest levels in the world. Because of the high levels of aromatic pollutants, the ozone formation potential increased significantly at HB in the autumn. Ratio analyses used to identify the age of the air masses and the sources showed that the atmospheric VOCs at XB were strongly influenced by remote sources of coal combustion, while those at HB in the spring and autumn were affected by remote sources of coal combustion and local sources of vehicle emission, respectively. Source analysis conducted using the Positive Matrix Factorization (PMF) model at Shuozhou showed that coal combustion and vehicle emissions made the two largest contributions (29.98% and 21.25%, respectively) to atmospheric VOCs. With further economic restructuring, the influence of vehicle emissions on air quality should become more significant, indicating that controlling vehicle emissions is key to reducing the air pollution. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.

  6. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.

  7. MultiAspect Graphs: Algebraic Representation and Algorithms

    Directory of Open Access Journals (Sweden)

    Klaus Wehmuth

    2016-12-01

    Full Text Available We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs. A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS, and Depth First Search (DFS. These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper.
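
    As an example of the kind of graph primitive the paper adapts, here is a plain BFS over a directed adjacency structure in Python; since a MAG is isomorphic to a directed graph, adapting it amounts to running it on that underlying directed graph (the toy graph below is illustrative):

    ```python
    from collections import deque

    def bfs(adj, source):
        """Breadth-first search on a directed graph given as an adjacency dict.
        Returns hop distances from source; unreachable nodes are absent."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # A tiny directed graph standing in for a MAG's composite vertices.
    print(bfs({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}, "a"))
    ```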

  8. Firefly Mating Algorithm for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    Amarita Ritthipakdee

    2017-01-01

    Full Text Available This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating pair selection method which is inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate, and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested on 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of the proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima.

  9. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  10. Integrated Association Rules Complete Hiding Algorithms

    Directory of Open Access Journals (Sweden)

    Mohamed Refaat Abdellah

    2017-01-01

    Full Text Available This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the needed database modifications and support complete hiding, while also reducing knowledge distortion and data distortion. The complete weighted hiding algorithms improve hiding failure by 100%, i.e., they hide all sensitive rules; they also have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time, as needed.

  11. New Insights into the RLS Algorithm

    Directory of Open Access Journals (Sweden)

    Gänsler Tomas

    2004-01-01

    Full Text Available The recursive least squares (RLS algorithm is one of the most popular adaptive algorithms that can be found in the literature, due to the fact that it is easily and exactly derived from the normal equations. In this paper, we give another interpretation of the RLS algorithm and show the importance of linear interpolation error energies in the RLS structure. We also give a very efficient way to recursively estimate the condition number of the input signal covariance matrix thanks to fast versions of the RLS algorithm. Finally, we quantify the misalignment of the RLS algorithm with respect to the condition number.
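
    For reference, the classic exponentially weighted RLS recursion that the paper reinterprets can be sketched in a few lines of Python/NumPy; the forgetting factor and initialization below are conventional defaults, not values from the paper:

    ```python
    import numpy as np

    def rls(x_stream, d_stream, order, lam=0.99, delta=100.0):
        """Exponentially weighted RLS: adapts weights w so that w @ x_n tracks
        the desired signal d_n. lam is the forgetting factor."""
        w = np.zeros(order)
        P = delta * np.eye(order)            # inverse correlation matrix estimate
        for x_n, d_n in zip(x_stream, d_stream):
            x = np.asarray(x_n, dtype=float)
            k = P @ x / (lam + x @ P @ x)    # gain vector
            e = d_n - w @ x                  # a priori error
            w = w + k * e
            P = (P - np.outer(k, x @ P)) / lam
        return w

    # Identify a 2-tap FIR system from its input/output data.
    rng = np.random.default_rng(0)
    x = rng.normal(size=300)
    X = np.stack([x, np.concatenate([[0.0], x[:-1]])], axis=1)  # [x_n, x_{n-1}]
    d = X @ np.array([0.7, -0.2])
    print(rls(X, d, order=2))  # converges to ~[0.7, -0.2]
    ```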

  12. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  13. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  14. SU-F-BRD-15: The Impact of Dose Calculation Algorithm and Hounsfield Units Conversion Tables On Plan Dosimetry for Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Kuo, L; Yorke, E; Lim, S; Mechalakos, J; Rimner, A [Memorial Sloan-Kettering Cancer Center, NY, NY (United States)

    2014-06-15

    Purpose: To assess dosimetric differences in IMRT lung stereotactic body radiotherapy (SBRT) plans calculated with Varian AAA and Acuros (AXB) and with vendor-supplied (V) versus in-house (IH) measured Hounsfield units (HU) to mass and HU to electron density conversion tables. Methods: In-house conversion tables were measured using Gammex 472 density-plug phantom. IMRT plans (6 MV, Varian TrueBeam, 6–9 coplanar fields) meeting departmental coverage and normal tissue constraints were retrospectively generated for 10 lung SBRT cases using Eclipse Vn 10.0.28 AAA with in-house tables (AAA/IH). Using these monitor units and MLC sequences, plans were recalculated with AAA and vendor tables (AAA/V) and with AXB with both tables (AXB/IH and AXB/V). Ratios to corresponding AAA/IH values were calculated for PTV D95, D01, D99, mean-dose, total and ipsilateral lung V20 and chestwall V30. Statistical significance of differences was judged by Wilcoxon Signed Rank Test (p<0.05). Results: For HU<−400 the vendor HU-mass density table was notably below the IH table. PTV D95 ratios to AAA/IH, averaged over all patients, are 0.963±0.073 (p=0.508), 0.914±0.126 (p=0.011), and 0.998±0.001 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. Total lung V20 ratios are 1.006±0.046 (p=0.386), 0.975±0.080 (p=0.514) and 0.998±0.002 (p=0.007); ipsilateral lung V20 ratios are 1.008±0.041(p=0.284), 0.977±0.076 (p=0.443), and 0.998±0.018 (p=0.005) for AXB/IH, AXB/V and AAA/V respectively. In 7 cases, ratios to AAA/IH were within ± 5% for all indices studied. For 3 cases characterized by very low lung density and small PTV (19.99±8.09 c.c.), PTV D95 ratio for AXB/V ranged from 67.4% to 85.9%, AXB/IH D95 ratio ranged from 81.6% to 93.4%; there were large differences in other studied indices. Conclusion: For AXB users, careful attention to HU conversion tables is important, as they can significantly impact AXB (but not AAA) lung SBRT plans. Algorithm selection is also important for

  15. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  16. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  17. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  18. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  19. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity check matrices of low-density parity check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  20. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Exploiting the different characters of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
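
    A minimal sketch of the offspring-generation step described above, alternating Gaussian and heavier-tailed Cauchy sampling around a niche (the niche statistics and the simple even/odd alternation rule here are simplifying assumptions, not the paper's exact scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_offspring(niche, n_children):
        """Generate offspring around a niche's mean, alternating Gaussian and
        Cauchy perturbations scaled by the niche's spread."""
        niche = np.asarray(niche, dtype=float)
        mu, sigma = niche.mean(axis=0), niche.std(axis=0) + 1e-12
        children = []
        for i in range(n_children):
            if i % 2 == 0:   # Gaussian: local exploitation
                children.append(rng.normal(mu, sigma))
            else:            # Cauchy: occasional long jumps for exploration
                children.append(mu + sigma * rng.standard_cauchy(mu.shape))
        return np.array(children)

    print(sample_offspring(rng.normal(size=(10, 2)), 4))
    ```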

  1. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  2. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  3. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    OpenAIRE

    Robert Ramon de Carvalho Sousa; Abimael de Jesus Barros Costa; Eliezé Bulhões de Carvalho; Adriano de Carvalho Paranaíba; Daylyne Maerla Gomes Lima Sandoval

    2016-01-01

    This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations of randomly increasing product delivery demands, where the service level cannot be increased. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (on...
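
    For context, the savings computation at the heart of Clarke and Wright's heuristic, which the proposed algorithm competes against, can be sketched as follows (toy distance matrix; the article's own algorithm is not reproduced here):

    ```python
    def clarke_wright_savings(dist, depot=0):
        """Compute Clarke and Wright (1964) savings s(i,j) = d(0,i) + d(0,j) - d(i,j)
        for all customer pairs, sorted descending; merging routes in this order
        is the core of the CW heuristic."""
        n = len(dist)
        savings = []
        for i in range(1, n):
            for j in range(i + 1, n):
                s = dist[depot][i] + dist[depot][j] - dist[i][j]
                savings.append((s, i, j))
        return sorted(savings, reverse=True)

    # Symmetric distance matrix: depot (node 0) plus three customers.
    dist = [[0, 4, 5, 6],
            [4, 0, 2, 7],
            [5, 2, 0, 3],
            [6, 7, 3, 0]]
    print(clarke_wright_savings(dist))
    ```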

  4. Genetic algorithms and fuzzy multiobjective optimization

    CERN Document Server

    Sakawa, Masatoshi

    2002-01-01

    Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books in genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a w...

  5. Principal component analysis networks and algorithms

    CERN Document Server

    Kong, Xiangyu; Duan, Zhansheng

    2017-01-01

    This book not only provides a comprehensive introduction to neural-based PCA methods in control science, but also presents many novel PCA algorithms and their extensions and generalizations, e.g., dual purpose, coupled PCA, GED, neural based SVD algorithms, etc. It also discusses in detail various analysis methods for the convergence, stabilizing, self-stabilizing property of algorithms, and introduces the deterministic discrete-time systems method to analyze the convergence of PCA/MCA algorithms. Readers should be familiar with numerical analysis and the fundamentals of statistics, such as the basics of least squares and stochastic algorithms. Although it focuses on neural networks, the book only presents their learning law, which is simply an iterative algorithm. Therefore, no a priori knowledge of neural networks is required. This book will be of interest and serve as a reference source to researchers and students in applied mathematics, statistics, engineering, and other related fields.
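
    As a small taste of the neural-based PCA algorithms the book covers, here is Oja's learning rule, a single-neuron iteration whose weight vector converges to the first principal component of zero-mean data (the learning rate and epoch count below are illustrative choices):

    ```python
    import numpy as np

    def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
        """Oja's rule: w += lr * y * (x - y * w) with y = w @ x. The weight
        vector converges to the first principal component of zero-mean X."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                y = w @ x
                w += lr * y * (x - y * w)  # Hebbian term with built-in normalization
        return w / np.linalg.norm(w)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
    print(oja_first_pc(X - X.mean(axis=0)))  # ~(+-1, 0), the dominant direction
    ```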

  6. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a firework optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm showed promising results compared to other population-based or iterative meta-heuristic algorithms when tested on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.

  7. A Cavity QED Implementation of Deutsch-Jozsa Algorithm

    OpenAIRE

    Guerra, E. S.

    2004-01-01

    The Deutsch-Jozsa algorithm is a generalization of the Deutsch algorithm, which was the first quantum algorithm proposed. We present schemes to implement the Deutsch algorithm and the Deutsch-Jozsa algorithm via cavity QED.
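
    Although the paper's implementation is via cavity QED, the logic of the Deutsch algorithm itself is easy to verify with a small state-vector simulation (a classical NumPy sketch, not the authors' physical scheme): one oracle query decides whether f: {0,1} -> {0,1} is constant or balanced.

    ```python
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    H2 = np.kron(H, H)

    def oracle(f):
        """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
        return U

    def deutsch(f):
        state = np.zeros(4)
        state[0b01] = 1.0                      # start in |0>|1>
        state = H2 @ state                     # Hadamard on both qubits
        state = oracle(f) @ state              # single oracle query (phase kickback)
        state = np.kron(H, np.eye(2)) @ state  # Hadamard on the first qubit
        p_first_qubit_one = state[2] ** 2 + state[3] ** 2
        return "balanced" if p_first_qubit_one > 0.5 else "constant"

    print(deutsch(lambda x: 0))  # constant
    print(deutsch(lambda x: x))  # balanced
    ```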

  8. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from any unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking cipher texts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple-key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.

  9. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r² N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions: using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  11. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected, with high accuracy, by χ² (chi-square) and RS steganalysis. We started by selecting the information-embedding locations and modifying the information-embedding method; combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can effectively resist χ² and RS steganalysis.
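
    For contrast, plain LSB embedding, the baseline that χ² and RS steganalysis defeat, is only a few lines; the paper's contribution lies in choosing the embedding locations and method differently. The sketch below is the baseline scheme, not the improved algorithm:

    ```python
    import numpy as np

    def lsb_embed(pixels, bits):
        """Plain LSB embedding: overwrite the least significant bit of the
        first len(bits) pixel bytes with the secret bits."""
        flat = pixels.ravel().copy()
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | b
        return flat.reshape(pixels.shape)

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    secret = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = lsb_embed(img, secret)
    print([int(p) & 1 for p in stego.ravel()[:8]])  # recovers the embedded bits
    ```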

  12. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  13. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    ... in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
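
    A maximal-chain enumeration along the lines of the first algorithm can be sketched directly from the cover (Hasse-diagram) relation; the recursive Python version below stands in for the paper's Algol realization:

    ```python
    def maximal_chains(cover):
        """Enumerate all maximal chains of a finite poset given by its cover
        (Hasse-diagram successor) relation as a dict node -> set of covers."""
        nodes = set(cover) | {v for vs in cover.values() for v in vs}
        covered = {v for vs in cover.values() for v in vs}
        minimal = nodes - covered          # maximal chains start at minimal elements
        chains = []

        def extend(chain):
            succs = cover.get(chain[-1], set())
            if not succs:                  # reached a maximal element
                chains.append(list(chain))
                return
            for v in succs:
                extend(chain + [v])

        for m in minimal:
            extend([m])
        return chains

    # Hasse diagram of the divisors of 12 ordered by divisibility.
    print(maximal_chains({1: {2, 3}, 2: {4, 6}, 3: {6}, 4: {12}, 6: {12}}))
    ```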

  14. A generalization of Takane's algorithm for DEDICOM

    NARCIS (Netherlands)

    Kiers, Henk A.L.; ten Berge, Jos M.F.; Takane, Yoshio; de Leeuw, Jan

    An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the

  15. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tightened with the other teaching material), within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  16. Halogen bonding and pharmaceutical cocrystals: the case of a widely used preservative.

    Science.gov (United States)

    Baldrighi, Michele; Cavallo, Gabriella; Chierotti, Michele R; Gobetto, Roberto; Metrangolo, Pierangelo; Pilati, Tullio; Resnati, Giuseppe; Terraneo, Giancarlo

    2013-05-06

    3-Iodo-2-propynyl-N-butylcarbamate (IPBC) is an iodinated antimicrobial product used globally as a preservative, fungicide, and algaecide. IPBC is difficult to obtain in pure form as well as to handle in industrial products because it tends to be sticky and clumpy. Here, we describe the preparation of four pharmaceutical cocrystals involving IPBC. The obtained cocrystals have been characterized by X-ray diffraction, solution and solid-state NMR, IR, and DSC analyses. In all the described cases the halogen bond (XB) is the key interaction responsible for the self-assembly of the pharmaceutical cocrystals, thanks to the involvement of the 1-iodoalkyne moiety of IPBC, which functions as a very reliable XB-donor, with both neutral and anionic XB-acceptors. Most of the obtained cocrystals have improved properties with respect to the source API, in terms of, e.g., thermal stability. The cocrystal involving the GRAS excipient CaCl2 has superior powder flow characteristics compared to pure IPBC, representing a promising solution to the handling issues related to the manufacturing of products containing IPBC.

  17. Silencing of the Rice Gene LRR1 Compromises Rice Xa21 Transcript Accumulation and XA21-Mediated Immunity.

    Science.gov (United States)

    Caddell, Daniel F; Park, Chang-Jin; Thomas, Nicholas C; Canlas, Patrick E; Ronald, Pamela C

    2017-12-01

    The rice immune receptor XA21 confers resistance to Xanthomonas oryzae pv. oryzae (Xoo), the causal agent of bacterial leaf blight. We previously demonstrated that an auxilin-like protein, XA21 BINDING PROTEIN 21 (XB21), positively regulates resistance to Xoo. To further investigate the function of XB21, we performed a yeast two-hybrid screen. We identified 22 unique XB21 interacting proteins, including LEUCINE-RICH REPEAT PROTEIN 1 (LRR1), which we selected for further analysis. Silencing of LRR1 in the XA21 genetic background (XA21-LRR1Ri) compromises resistance to Xoo compared with control XA21 plants. XA21-LRR1Ri plants have reduced Xa21 transcript levels and reduced expression of genes that serve as markers of XA21-mediated activation. Overexpression of LRR1 is insufficient to alter resistance to Xoo in rice lines lacking XA21. Taken together, our results indicate that LRR1 is required for wild-type Xa21 transcript expression and XA21-mediated immunity.

  18. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

    Full Text Available At present, most link prediction algorithms are based on the similarity between two entities, and social network topology information is one of the main sources used to design the similarity function between entities. However, existing link prediction algorithms do not exploit the network topology information sufficiently. To address this shortcoming, we propose two improved algorithms: the CNGF algorithm, based on local information, and the KatzGF algorithm, based on global network information. To cope with the non-stationarity of social networks, we also provide a link prediction algorithm based on nodes' multiple-attribute information. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of the traditional link prediction algorithms.
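
    A minimal local-information scorer in the spirit of such algorithms can be written as follows; note that the exact CNGF weighting from the paper may differ, and the degree-based weighting used here (high-degree shared neighbours count less, cf. Adamic-Adar) is an assumption:

    ```python
    import math

    def common_neighbour_scores(adj, u):
        """Score candidate links (u, v) by a degree-weighted common-neighbour
        sum. adj maps node -> set of neighbours."""
        scores = {}
        for v in adj:
            if v == u or v in adj[u]:
                continue
            shared = adj[u] & adj[v]
            scores[v] = sum(1.0 / math.log(len(adj[w]))
                            for w in shared if len(adj[w]) > 1)
        return scores

    adj = {"a": {"b", "c"}, "b": {"a", "c", "d"},
           "c": {"a", "b", "d"}, "d": {"b", "c"}}
    print(common_neighbour_scores(adj, "a"))  # d shares neighbours b and c with a
    ```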

  19. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  20. Discrete Riccati equation solutions: Distributed algorithms

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computational efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.
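
    As a centralized baseline for the equation being solved, a plain fixed-point iteration of the filtering-type discrete Riccati equation looks like this; the paper's contribution is distributed variants of such solvers, and the matrices below are toy values:

    ```python
    import numpy as np

    def discrete_riccati(A, C, Q, R, iters=200):
        """Iterate P = A P A' - A P C'(C P C' + R)^{-1} C P A' + Q to a fixed
        point (converges under standard detectability/stabilizability conditions)."""
        P = np.eye(A.shape[0])
        for _ in range(iters):
            S = C @ P @ C.T + R
            P = A @ P @ A.T - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T) + Q
        return P

    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    C = np.array([[1.0, 0.0]])
    Q, R = 0.01 * np.eye(2), np.array([[0.1]])
    print(discrete_riccati(A, C, Q, R))
    ```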

  1. A Hybrid Algorithm for Optimizing Multi- Modal Functions

    Institute of Scientific and Technical Information of China (English)

    Li Qinghua; Yang Shida; Ruan Youlin

    2006-01-01

    A new genetic algorithm is presented based on musical performance. Its novelty is that it mimics the musical process of searching for a perfect state of harmony, which greatly increases its robustness and gives it a new interpretation. Combining the advantages of the new genetic algorithm, the simplex algorithm, and tabu search, a hybrid algorithm is proposed. In order to verify the effectiveness of the hybrid algorithm, it is applied to solving some typical numerical function optimization problems which are poorly solved by traditional genetic algorithms. The experimental results show that the hybrid algorithm is fast and reliable.

  2. Fast algorithm of adaptive Fourier series

    Science.gov (United States)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea, and in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA) that was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced, from the original $\\mathcal{O}(M N^2)$ to $\\mathcal{O}(M N\\log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.

  3. Modified Decoding Algorithm of LLR-SPA

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2014-09-01

    Full Text Available In wireless sensor networks, energy consumption occurs mainly in the information transmission stage. Low Density Parity Check (LDPC) codes can make full use of the channel information to save energy. Starting from the widely used decoding algorithm for LDPC codes, this paper proposes a new decoding algorithm based on LLR-SPA (the Sum-Product Algorithm in the log-likelihood domain) to improve decoding accuracy. In the modified algorithm, a piecewise linear function is used to approximate the complicated Jacobi correction term of the LLR-SPA decoding algorithm: tangents to the Jacobi correction term are constructed at chosen tangency points using a first-order Taylor series. The proposed piecewise linear approximation offers an almost perfect match to the Jacobi correction term while avoiding logarithm operations, which makes it more suitable for practical application. The simulation results show that the proposed algorithm improves decoding accuracy considerably without a noticeable increase in computational complexity.
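
    The approximation idea is easy to reproduce: the Jacobi correction term log(1 + e^{-|x|}) is convex in |x|, so the maximum over a few tangents yields a piecewise-linear under-approximation that needs no logarithms at decode time. The knot positions below are illustrative, not the paper's:

    ```python
    import numpy as np

    def jacobi_exact(x):
        """Jacobi correction term used in LLR-SPA: log(1 + exp(-|x|))."""
        return np.log1p(np.exp(-np.abs(x)))

    def jacobi_pwl(x, knots=(0.5, 1.5, 3.0)):
        """Piecewise-linear approximation from first-order Taylor tangents at
        the given tangency points."""
        x = np.abs(np.asarray(x, dtype=float))
        # tangent at t: f(t) + f'(t) (x - t), with f'(t) = -exp(-t) / (1 + exp(-t))
        tangents = [(np.log1p(np.exp(-t)), -np.exp(-t) / (1 + np.exp(-t)), t)
                    for t in knots]
        vals = [f + s * (x - t) for f, s, t in tangents]
        approx = np.maximum.reduce(vals)   # tangents under-estimate a convex curve
        return np.clip(approx, 0.0, None)  # the correction term is nonnegative

    x = np.linspace(0, 6, 7)
    print(np.round(jacobi_exact(x), 4))
    print(np.round(jacobi_pwl(x), 4))
    ```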

  4. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n²) to O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The more sub-apertures and deformable mirror actuators there are, the more significant the advantage of the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
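
    The contrast between the two approaches can be sketched as follows, with plain conjugate gradient standing in for the paper's iterative controller; the matrix sizes and regularization are toy values, not an AO system model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                   # number of actuators (toy size)
    R = rng.normal(size=(n, n)) / np.sqrt(n)  # slope-to-voltage influence matrix
    A = R.T @ R + 0.1 * np.eye(n)             # normal-equations matrix (SPD)
    b = rng.normal(size=n)                    # measured-slope right-hand side

    # Direct approach: precompute the reconstruction matrix once (O(n^3)),
    # then each frame costs a dense matrix-vector product, O(n^2).
    A_inv = np.linalg.inv(A)
    v_direct = A_inv @ b

    # Iterative approach: only matrix-vector products per iteration,
    # cheap when A is sparse or well-conditioned.
    def conjugate_gradient(A, b, iters=100, tol=1e-10):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x

    print(np.allclose(conjugate_gradient(A, b), v_direct, atol=1e-6))
    ```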

  5. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  6. Comprehensive asynchronous symmetric rendezvous algorithm in ...

    Indian Academy of Sciences (India)

    Meenu Chawla

    2017-11-10

    Nov 10, 2017 ... Simulation results affirm that the CASR algorithm performs better in terms of average time-to-rendezvous as compared ... process; neighbour discovery; symmetric rendezvous algorithm. ... rendezvous in finite time under the symmetric model. The CH ... CASR algorithm in Matlab 7.11 and performed several ...

  7. Learning algorithms and automatic processing of languages

    International Nuclear Information System (INIS)

    Fluhr, Christian Yves Andre

    1977-01-01

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer, and outlines the specific role of learning in various manifestations of intelligence. Then, based on the theory of Markov algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are addressed: firstly, a sanction-based algorithm of the 'algorithm-teacher dialogue' type, which aims at learning how to resolve grammatical ambiguities in submitted texts; secondly, an algorithm for a document system which automatically structures semantic data obtained from a set of texts so that questions on the content of these texts can be answered by reference to them.

  8. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  9. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language text processing, Named Entity Linking (NEL) is the task of identifying an entity mention found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine-learning approaches is proposed, according to the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine-learning algorithms alone, an independent solution cannot be built, due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was experimentally tested: a test dataset was independently generated, and on its basis the performance of a model using the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm showed lower speed than DBpedia Spotlight but higher accuracy, which indicates that this direction of work is promising. The main directions of development are proposed in order to increase the accuracy and the throughput of the system.

  10. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
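
    The recursive scheme the procedure demonstrates translates directly; below is a Python rendering of two-way merge sort (not Bron's original ALGOL 60 text):

    ```python
    def merge_sort(a):
        """Recursive two-way merge sort: split, sort the halves, merge them."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]   # append the leftover run

    print(merge_sort([5, 2, 9, 1, 5, 6]))
    ```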

  11. Wavefront-ray grid FDTD algorithm

    OpenAIRE

    ÇİYDEM, MEHMET

    2016-01-01

    A finite difference time domain algorithm on a wavefront-ray grid (WRG-FDTD) is proposed in this study to reduce numerical dispersion of conventional FDTD methods. A FDTD algorithm conforming to a wavefront-ray grid can be useful to take into account anisotropy effects of numerical grids since it features directional energy flow along the rays. An explicit and second-order accurate WRG-FDTD algorithm is provided in generalized curvilinear coordinates for an inhomogeneous isotropic medium. Num...

  12. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    Unsupervised classification algorithm based on clonal selection principle named Unsupervised Clonal Selection Classification (UCSC) is proposed in this paper. The new proposed algorithm is data driven and self-adaptive, it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  13. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  14. Evaluation Of Algorithms Of Anti- HIV Antibody Tests

    Directory of Open Access Journals (Sweden)

    Paranjape R.S

    1997-01-01

    Full Text Available Research question: Can alternate algorithms be used in place of the conventional algorithm for epidemiological studies of HIV infection at lower cost? Objective: To compare the results of HIV seroprevalence as determined by test algorithms combining three kits with the conventional test algorithm. Study design: Cross-sectional. Participants: 282 truck drivers. Statistical analysis: Sensitivity and specificity analysis and predictive values. Results: Three different algorithms that do not include Western Blot (WB) were compared with the conventional algorithm in a truck driver population with 5.6% prevalence of HIV-1 infection. Algorithms with one EIA (Genetic Systems or Biotest) and a rapid test (Immunocomb), or with two EIAs, showed 100% positive predictive value in relation to the conventional algorithm. Using an algorithm with an EIA as screening test and a rapid test as confirmatory test was 50 to 70% less expensive than the conventional algorithm per positive serum sample. These algorithms obviate the interpretation of indeterminate results and also give a differential diagnosis of HIV-2 infection. Alternate algorithms are ideally suited for community-based control programmes in developing countries. Application of these algorithms in populations with low prevalence should also be studied in order to evaluate universal applicability.

  15. Categorizing Variations of Student-Implemented Sorting Algorithms

    Science.gov (United States)

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshman students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  16. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...

  17. Decoding algorithm for vortex communications receiver

    Science.gov (United States)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.

  18. Quantum algorithms for testing Boolean functions

    Directory of Open Access Journals (Sweden)

    Erika Andersson

    2010-06-01

    Full Text Available We discuss quantum algorithms, based on the Bernstein-Vazirani algorithm, for finding which variables a Boolean function depends on. There are 2^n possible linear Boolean functions of n variables; given a linear Boolean function, the Bernstein-Vazirani quantum algorithm can deterministically identify which one of these Boolean functions we are given using just one single function query. The same quantum algorithm can also be used to learn which input variables other types of Boolean functions depend on, with a success probability that depends on the form of the Boolean function that is tested, but does not depend on the total number of input variables. We also outline a procedure to further amplify the success probability, based on another quantum algorithm, the Grover search.
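
    A classical simulation of the Bernstein-Vazirani identification step, offered as a hedged illustration of why one query suffices for a linear Boolean function f(x) = s.x mod 2: after the phase oracle, the Hadamard transform concentrates all amplitude on the index s. The simulation enumerates all 2^n inputs, which a real quantum device of course does not need to do.

      import itertools
      import numpy as np

      def bernstein_vazirani_sim(s_bits):
          n = len(s_bits)
          # Phase oracle applied to the uniform superposition: (-1)^{s.x} per input x.
          phases = np.array([(-1) ** (sum(si * xi for si, xi in zip(s_bits, x)) % 2)
                             for x in itertools.product((0, 1), repeat=n)], float)
          # n-fold Hadamard transform sends the state exactly onto basis index s.
          H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
          U = H
          for _ in range(n - 1):
              U = np.kron(U, H)
          amps = U @ (phases / np.sqrt(2 ** n))
          idx = int(np.argmax(np.abs(amps)))
          return [int(b) for b in format(idx, f"0{n}b")]

      print(bernstein_vazirani_sim([1, 0, 1, 1]))   # recovers the hidden string s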

  19. [An improved algorithm for electrohysterogram envelope extraction].

    Science.gov (United States)

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia; Chen, Zhaoxia

    2017-02-01

    Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. First, zero-crossing detection was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess its performance, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones at eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
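
    An illustrative sketch of RMS envelope extraction with two window widths, in the spirit of the improved algorithm: the short window tracks bursts while the long window smooths them. The window sizes and the elementwise-minimum combination rule are assumptions chosen for illustration, not the authors' exact recipe.

      import numpy as np

      def moving_rms(x, win):
          """RMS of x over a sliding window of length win (same-length output)."""
          power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
          return np.sqrt(power)

      def emg_envelope(x, short_win=64, long_win=512):
          # Taking the minimum of the two envelopes suppresses isolated spikes.
          return np.minimum(moving_rms(x, short_win), moving_rms(x, long_win))

      # A burst plus an impulsive spike: the spike barely moves the envelope.
      sig = np.random.randn(4096) * np.concatenate(
          [np.zeros(1024), np.ones(2048), np.zeros(1024)])
      sig[100] = 50.0
      env = emg_envelope(sig)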

  20. Molecular and Cellular Determinants of Malignant Transformation in Pulmonary Premalignancy

    Science.gov (United States)

    2017-07-01

    Premalignant- and malignant-specific mutations were identified, and the mutational data were analyzed in the pathway context. Milestone achieved: HRPO/ACURO approval completed. Major Task 2, Subtask 1: to construct sequencing libraries and perform exome enrichment (50...

  1. Tolerance in Nonhuman Primates by Delayed Mixed Chimerism

    Science.gov (United States)

    2014-10-01

    offer patients restoration of function and form following severe, disabling and disfiguring injury or tissue loss, in circumstances where the results... Transplantation was uncomplicated. Osteosynthesis was achieved with excellent intraoperative position and fixation. Anastomoses of the radial artery and... to IACUC and ACURO for approval to place an indwelling vascular access port (VAP) in each subsequent recipient animal. These ports are accesses...

  2. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  3. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  4. Binar Sort: A Linear Generalized Sorting Algorithm

    OpenAIRE

    Gilreath, William F.

    2008-01-01

    Sorting is a common and ubiquitous activity for computers. It is not surprising that there exists a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial perm...

  5. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  6. Adaptive discrete-ordinates algorithms and strategies

    International Nuclear Information System (INIS)

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)
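
    A one-dimensional analogue of the adaptive strategy described above, offered as a hedged illustration rather than the authors' method: a midpoint "test direction" is added whenever the value computed there differs from the value interpolated from its neighbours by more than a user-specified tolerance. The test function and tolerance are illustrative.

      import math

      def adapt(f, a, b, tol, depth=0, max_depth=12):
          """Return sample points on [a, b], refining where interpolation fails."""
          mid = 0.5 * (a + b)
          interp = 0.5 * (f(a) + f(b))          # linear interpolation at midpoint
          if depth >= max_depth or abs(f(mid) - interp) <= tol:
              return [a, b]                      # local resolution is sufficient
          left = adapt(f, a, mid, tol, depth + 1, max_depth)
          return left[:-1] + adapt(f, mid, b, tol, depth + 1, max_depth)

      nodes = adapt(lambda x: math.exp(-50 * x * x), -1.0, 1.0, 1e-3)
      print(len(nodes))                          # points cluster near the peak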

  7. Multicore and GPU algorithms for Nussinov RNA folding

    Science.gov (United States)

    2014-01-01

    Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
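
    For reference, a plain single-core Nussinov dynamic program, the baseline algorithm that the paper's cache-efficient, multicore, and GPU versions accelerate. dp[i][j] holds the maximum number of base pairs in s[i..j]; the minimum hairpin-loop length is an illustrative parameter.

      def nussinov(s, min_loop=3):
          pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                   ("C", "G"), ("G", "U"), ("U", "G")}
          n = len(s)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):      # widen the interval gradually
              for i in range(n - span):
                  j = i + span
                  best = max(dp[i + 1][j], dp[i][j - 1])
                  if (s[i], s[j]) in pairs:
                      best = max(best, dp[i + 1][j - 1] + 1)
                  for k in range(i + 1, j):        # bifurcation into two subproblems
                      best = max(best, dp[i][k] + dp[k + 1][j])
                  dp[i][j] = best
          return dp[0][n - 1]

      print(nussinov("GGGAAAUCC"))                 # small example sequence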

  8. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate a trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system to reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.

  9. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  10. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.

  12. Hardware modules of the RSA algorithm

    Directory of Open Access Journals (Sweden)

    Škobić Velibor

    2014-01-01

    Full Text Available This paper describes basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7, family Cyclone IV, Altera. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, the maximum frequency and speed.
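
    A software sketch of the Montgomery reduction that such hardware modules implement in VHDL: REDC computes T * R^(-1) mod n without a division by n, which is what makes the method attractive for RSA datapaths. The parameter choices in the example are illustrative. (pow(n, -1, R) needs Python 3.8+.)

      def montgomery_mulmod(x, y, n, k):
          """Compute x*y mod n via Montgomery reduction with R = 2**k > n, n odd."""
          R = 1 << k
          n_prime = (-pow(n, -1, R)) % R         # n * n' = -1 (mod R)

          def redc(T):
              m = ((T & (R - 1)) * n_prime) & (R - 1)
              t = (T + m * n) >> k               # exact division by R
              return t - n if t >= n else t

          x_bar = (x * R) % n                    # convert to Montgomery form
          y_bar = (y * R) % n
          z_bar = redc(x_bar * y_bar)            # product stays in Montgomery form
          return redc(z_bar)                     # convert back to ordinary form

      assert montgomery_mulmod(7, 11, 101, 8) == (7 * 11) % 101
      print(montgomery_mulmod(123456789, 987654321, (1 << 61) - 1, 64))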

  13. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Robert Ramon de Carvalho Sousa

    2016-06-01

    Full Text Available This article uses the “Six Sigma” methodology for the elaboration of an algorithm for routing problems which is able to obtain more efficient results than those from Clarke and Wright's (CW) algorithm (1964) in situations of random increases in product delivery demands, in the face of the incapability of increasing the service level. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (one-way routes) and in the level of result variation.
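
    For context, a compact sketch of the Clarke-Wright savings heuristic used here as the baseline. Capacity constraints are omitted to keep the sketch short; node 0 is the depot and dist is a symmetric distance matrix. This is a hedged illustration of the 1964 baseline, not the article's proposed algorithm.

      def clarke_wright(dist):
          n = len(dist) - 1                           # customers 1..n
          routes = {i: [i] for i in range(1, n + 1)}  # route id -> node list
          route_of = {i: i for i in range(1, n + 1)}  # node -> route id
          savings = sorted(
              ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
               for i in range(1, n + 1) for j in range(i + 1, n + 1)),
              reverse=True)
          for s, i, j in savings:
              ri, rj = route_of[i], route_of[j]
              if ri == rj or s <= 0:
                  continue
              a, b = routes[ri], routes[rj]
              # merge only if i and j are route endpoints; orient routes to join
              if i in (a[0], a[-1]) and j in (b[0], b[-1]):
                  if a[0] == i:
                      a.reverse()
                  if b[-1] == j:
                      b.reverse()
                  routes[ri] = a + b
                  for node in b:
                      route_of[node] = ri
                  del routes[rj]
          return [[0] + r + [0] for r in routes.values()]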

  14. Toward human-centered algorithm design

    Directory of Open Access Journals (Sweden)

    Eric PS Baumer

    2017-07-01

    Full Text Available As algorithms pervade numerous facets of daily life, they are incorporated into systems for increasingly diverse purposes. These systems’ results are often interpreted differently by the designers who created them than by the lay persons who interact with them. This paper offers a proposal for human-centered algorithm design, which incorporates human and social interpretations into the design process for algorithmically based systems. It articulates three specific strategies for doing so: theoretical, participatory, and speculative. Drawing on the author’s work designing and deploying multiple related systems, the paper provides a detailed example of using a theoretical approach. It also discusses findings pertinent to participatory and speculative design approaches. The paper addresses both strengths and challenges for each strategy in helping to center the process of designing algorithmically based systems around humans.

  15. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of an eLearning-platform modular application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain: algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to each other to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  16. A new chaotic algorithm for image encryption

    International Nuclear Information System (INIS)

    Gao Haojiang; Zhang Yisheng; Liang Shuyun; Li Dequn

    2006-01-01

    Recent researches of image encryption algorithms have been increasingly based on chaotic systems, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. This paper presents a new nonlinear chaotic algorithm (NCA) which uses power function and tangent function instead of linear function. Its structural parameters are obtained by experimental analysis. And an image encryption algorithm in a one-time-one-password system is designed. The experimental results demonstrate that the image encryption algorithm based on NCA shows advantages of large key space and high-level security, while maintaining acceptable efficiency. Compared with some general encryption algorithms such as DES, the encryption algorithm is more secure
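
    A toy sketch of the generic chaos-based idea: a keystream is generated by iterating a chaotic map and XORed with the data. It deliberately uses the one-dimensional logistic map, exactly the kind of weak construction the paper criticizes, rather than the paper's NCA power/tangent map, and it is not secure; all parameters are illustrative.

      def chaotic_keystream(x0, r, length):
          x, out = x0, []
          for _ in range(length):
              x = r * x * (1.0 - x)                  # logistic map iteration
              out.append(int(x * 256) & 0xFF)        # quantize state to a byte
          return bytes(out)

      def xor_cipher(data, key_x0=0.3141592, r=3.9999):
          ks = chaotic_keystream(key_x0, r, len(data))
          return bytes(a ^ b for a, b in zip(data, ks))

      ct = xor_cipher(b"image bytes here")
      assert xor_cipher(ct) == b"image bytes here"   # XOR is its own inverse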

  17. Learning theory of distributed spectral algorithms

    International Nuclear Information System (INIS)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-01-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)
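
    A minimal divide-and-conquer kernel ridge regression in the spirit of the distributed approach studied here: each machine fits a local estimator on its data shard and the final prediction averages the local ones. The kernel width, regularization, and shard count are illustrative choices, not the paper's exact estimator.

      import numpy as np

      def rbf(A, B, gamma=1.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def fit_local(X, y, lam=1e-2):
          K = rbf(X, X)
          alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
          return lambda Xt: rbf(Xt, X) @ alpha       # local kernel predictor

      def fit_distributed(X, y, m=4):
          shards = np.array_split(np.random.permutation(len(X)), m)
          locals_ = [fit_local(X[s], y[s]) for s in shards]
          return lambda Xt: np.mean([f(Xt) for f in locals_], axis=0)

      X = np.random.rand(400, 1)
      y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(400)
      predict = fit_distributed(X, y)
      print(predict(np.array([[0.5]])))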

  18. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz, and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
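
    A hedged sketch of sequence extrapolation in the Richardson/Neville style, a simpler relative of the Bulirsch-Stoer scheme compared here: finite-lattice estimates, viewed as samples of a function of 1/N, are extrapolated polynomially to 1/N -> 0. The synthetic convergence law in the example is an assumption for illustration.

      def neville_extrapolate(xs, ys):
          """Evaluate at x = 0 the polynomial through the points (xs[i], ys[i])."""
          t = list(ys)
          n = len(xs)
          for m in range(1, n):
              for i in range(n - m):
                  t[i] = ((0 - xs[i + m]) * t[i] + xs[i] * t[i + 1]) \
                         / (xs[i] - xs[i + m])
          return t[0]

      # Finite-size estimates converging like 1 + 1/N + 2/N^2 -> exact value 1.
      Ns = [4, 8, 16, 32]
      print(neville_extrapolate([1 / N for N in Ns],
                                [1 + 1 / N + 2 / N ** 2 for N in Ns]))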

  19. Improved core protection calculator system algorithm

    International Nuclear Information System (INIS)

    Yoon, Tae Young; Park, Young Ho; In, Wang Kee; Bae, Jong Sik; Baeg, Seung Yeob

    2009-01-01

    Core Protection Calculator System (CPCS) is a digitized core protection system which provides core protection functions based on two reactor core operation parameters, Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD). It generates a reactor trip signal when the core condition exceeds the DNBR or LPD design limit. It consists of four independent channels which adopt a two-out-of-four trip logic. This paper describes the CPCS algorithm improvement for the newly designed core protection calculator system, RCOPS (Reactor COre Protection System). New features include an improved DNBR algorithm for thermal margin, the addition of pre-trip alarm generation for the auxiliary trip function, VOPT (Variable Over Power Trip) prevention during RPCS (Reactor Power Cutback System) actuation, and an improved CEA (Control Element Assembly) signal checking algorithm. To verify the improved CPCS algorithm, CPCS algorithm verification tests, 'Module Test' and 'Unit Test', would be performed on the RCOPS single-channel facility. It is expected that the improved CPCS algorithm will increase the DNBR margin and enhance plant availability by reducing unnecessary reactor trips

  20. Routing algorithms in networks-on-chip

    CERN Document Server

    Daneshtalab, Masoud

    2014-01-01

    This book provides a single-source reference to routing algorithms for Networks-on-Chip (NoCs), as well as in-depth discussions of advanced solutions applied to current and next generation, many core NoC-based Systems-on-Chip (SoCs). After a basic introduction to the NoC design paradigm and architectures, routing algorithms for NoC architectures are presented and discussed at all abstraction levels, from the algorithmic level to actual implementation.  Coverage emphasizes the role played by the routing algorithm and is organized around key problems affecting current and next generation, many-core SoCs. A selection of routing algorithms is included, specifically designed to address key issues faced by designers in the ultra-deep sub-micron (UDSM) era, including performance improvement, power, energy, and thermal issues, fault tolerance and reliability.   ·         Provides a comprehensive overview of routing algorithms for Networks-on-Chip and NoC-based, manycore systems; ·         Describe...

  1. Distribution agnostic structured sparsity recovery algorithms

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2013-05-01

    We present an algorithm and its variants for sparse signal recovery from a small number of its measurements in a distribution agnostic manner. The proposed algorithm finds a Bayesian estimate of the sparse signal to be recovered and at the same time is indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm also has the capability of refining the estimates of the signal and required parameters in the absence of the exact parameter values. The inherent feature of the algorithm of being agnostic to the distribution of the data grants it the flexibility to adapt itself to several related problems. Specifically, we present two important extensions to this algorithm. One extension handles the problem of recovering sparse signals having block structures while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants have superior performance to most of the state-of-the-art algorithms, and at low computational expense. © 2013 IEEE.
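
    A greedy matching pursuit in the orthogonal matching pursuit (OMP) style, shown as a simpler relative of SABMP: both iteratively grow a support estimate, but SABMP scores supports with Bayesian estimates that need no knowledge of the signal distribution. The dimensions and sparsity level below are illustrative.

      import numpy as np

      def omp(A, y, k):
          """Recover a k-sparse x from y = A @ x + noise."""
          residual, support = y.copy(), []
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
              support.append(j)
              x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ x_s
          x = np.zeros(A.shape[1])
          x[support] = x_s
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100)) / np.sqrt(40)
      x_true = np.zeros(100)
      x_true[[3, 17, 62]] = [1.0, -2.0, 0.5]
      print(omp(A, A @ x_true, 3)[[3, 17, 62]])            # close to true values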

  2. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  3. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on a support vector machine (SVM) and gradient evolution (GE) algorithms. SVM algorithm has been widely used in classification. However, its result is significantly influenced by the parameters. Therefore, this paper aims to propose an improvement of SVM algorithm which can find the best SVMs’ parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVMs’ parameters. The GE algorithm takes a role as a global optimizer in finding the best parameter which will be used by SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than other algorithms tested in this paper.

  4. Stability and chaos of LMSER PCA learning algorithm

    International Nuclear Information System (INIS)

    Lv Jiancheng; Y, Zhang

    2007-01-01

    The LMSER PCA algorithm is a principal component analysis algorithm used to extract principal components on-line from input data. The algorithm exhibits both stability and chaotic dynamic behavior under some conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete-time system. Conditions for local stability are derived. The paper also explores the chaotic behavior of this algorithm, showing that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of chaotic behavior of this algorithm
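
    A numerical sketch of on-line PCA learning using Oja's rule, a close relative of the LMSER family analyzed in the paper (not LMSER itself): with a small learning rate the weight vector converges to the leading principal component, while overly large rates produce the unstable trajectories this kind of analysis studies. The covariance matrix and learning rate are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      C = np.array([[3.0, 1.0], [1.0, 1.0]])          # data covariance
      data = rng.multivariate_normal([0, 0], C, size=5000)

      w, eta = np.array([1.0, 0.0]), 0.01
      for x in data:
          y = w @ x
          w += eta * y * (x - y * w)                  # Oja's learning rule

      top = np.linalg.eigh(C)[1][:, -1]               # true leading eigenvector
      print(w / np.linalg.norm(w), top)               # agree up to sign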

  5. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities, density scaling functions; and/or effective depth correction factors are not required.

  6. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  7. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content---algorithmic randomness---present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can ''decide'' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite

  8. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope sensor and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  9. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAO Traceray trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  10. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared with traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The algorithm proceeds in four steps: fast mapping initialization, fast mapping, and a coordinate transform produce schematic node coordinates that initialize the MDS algorithm; an accurate estimate of the node coordinates is then computed; and Procrustes analysis is used to align the estimated coordinates to the final node positions. The thesis gives specific implementation steps for the algorithm. Finally, the proposed algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on concrete examples. Experimental results show that the proposed localization algorithm, while ensuring positioning accuracy, also greatly improves the speed of operation.
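
    For reference, the classical MDS embedding step that such localization algorithms build on (this is the standard textbook computation, not the paper's fast variant): double-center the squared distance matrix, then embed nodes with the top eigenpairs.

      import numpy as np

      def classical_mds(D, dim=2):
          """D: n x n matrix of pairwise distances; returns n x dim coordinates."""
          n = len(D)
          J = np.eye(n) - np.ones((n, n)) / n
          B = -0.5 * J @ (D ** 2) @ J                  # double centering
          vals, vecs = np.linalg.eigh(B)
          idx = np.argsort(vals)[::-1][:dim]           # largest eigenvalues first
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

      # Nodes on a unit square are recovered up to rotation/translation.
      X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
      D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
      print(classical_mds(D))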

  11. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  12. Quantum learning algorithms for quantum measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bisio, Alessandro, E-mail: alessandro.bisio@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); D' Ariano, Giacomo Mauro, E-mail: dariano@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Perinotti, Paolo, E-mail: paolo.perinotti@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Sedlak, Michal, E-mail: michal.sedlak@unipv.it [QUIT Group, Dipartimento di Fisica ' A. Volta' and INFN, via Bassi 6, 27100 Pavia (Italy); Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2011-09-12

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  13. Quantum learning algorithms for quantum measurements

    International Nuclear Information System (INIS)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo; Sedlak, Michal

    2011-01-01

    We study quantum learning algorithms for quantum measurements. The optimal learning algorithm is derived for arbitrary von Neumann measurements in the case of training with one or two examples. The analysis of the case of three examples reveals that, differently from the learning of unitary gates, the optimal algorithm for learning of quantum measurements cannot be parallelized, and requires quantum memories for the storage of information. -- Highlights: → Optimal learning algorithm for von Neumann measurements. → From 2 copies to 1 copy: the optimal strategy is parallel. → From 3 copies to 1 copy: the optimal strategy must be non-parallel.

  14. A controllable sensor management algorithm capable of learning

    Science.gov (United States)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm. Thus, the algorithm can change its own performance goals in real-time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures as well as the Bayesian network determine the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.

  15. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  16. Automatic Circuit Design and Optimization Using Modified PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Subhash Patel

    2016-04-01

    Full Text Available In this work, we have proposed modified PSO algorithm based optimizer for automatic circuit design. The performance of the modified PSO algorithm is compared with two other evolutionary algorithms namely ABC algorithm and standard PSO algorithm by designing two stage CMOS operational amplifier and bulk driven OTA in 130nm technology. The results show the robustness of the proposed algorithm. With modified PSO algorithm, the average design error for two stage op-amp is only 0.054% in contrast to 3.04% for standard PSO algorithm and 5.45% for ABC algorithm. For bulk driven OTA, average design error is 1.32% with MPSO compared to 4.70% with ABC algorithm and 5.63% with standard PSO algorithm.

  17. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
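
    A bare-bones genetic algorithm on the OneMax toy problem (maximize the number of 1 bits), illustrating the selection / crossover / mutation loop the report describes; schemata are the short bit patterns that selection implicitly favours. Population size, mutation rate, and the tournament size are illustrative choices.

      import random

      def onemax(bits):
          return sum(bits)

      def evolve(n_bits=40, pop_size=50, generations=60, p_mut=0.02):
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              def tournament():
                  return max(random.sample(pop, 3), key=onemax)
              nxt = []
              while len(nxt) < pop_size:
                  a, b = tournament(), tournament()
                  cut = random.randrange(1, n_bits)        # one-point crossover
                  child = a[:cut] + b[cut:]
                  child = [bit ^ (random.random() < p_mut)  # bit-flip mutation
                           for bit in child]
                  nxt.append(child)
              pop = nxt
          return max(pop, key=onemax)

      best = evolve()
      print(onemax(best), "of", len(best))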

  18. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  19. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Kennedy, A.D.

    1989-01-01

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
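
    The leapfrog integrator at the heart of the Hybrid (Monte Carlo) algorithm, sketched below: a half-step in momentum, full steps in position, and a closing momentum half-step, giving a time-reversible, area-preserving trajectory. The quadratic potential in the example is an illustrative choice.

      import numpy as np

      def leapfrog(q, p, grad_U, eps, n_steps):
          q, p = q.copy(), p.copy()
          p -= 0.5 * eps * grad_U(q)                  # initial half kick
          for _ in range(n_steps - 1):
              q += eps * p
              p -= eps * grad_U(q)
          q += eps * p
          p -= 0.5 * eps * grad_U(q)                  # final half kick
          return q, p

      grad_U = lambda q: q                            # U(q) = q^2 / 2
      q, p = np.array([1.0]), np.array([0.0])
      qf, pf = leapfrog(q, p, grad_U, eps=0.1, n_steps=100)
      print(qf, pf, 0.5 * (qf ** 2 + pf ** 2))        # energy stays near 0.5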

  20. A sampling algorithm for segregation analysis

    Directory of Open Access Journals (Sweden)

    Henshall John

    2001-11-01

    Full Text Available Abstract Methods for detecting Quantitative Trait Loci (QTL without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Monte Carlo Markov chain (MCMC method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or found then its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated.

  1. Collective probabilities algorithm for surface hopping calculations

    International Nuclear Information System (INIS)

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

    General equations that transition probabilities of the hopping algorithms in surface hopping calculations must obey to assure the equality between the average quantum and classical populations are derived. These equations are solved for two particular cases. In the first it is assumed that probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations for all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is, then, a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm which takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoiding crossings which shows the accuracy and computational efficiency of the collective probabilities algorithm proposed, the limitations of the FS algorithm and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method

  2. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, there can be mismatched pairs in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between the two points, it not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  3. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
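
    A compact recursive form of Batcher's bitonic sorter, the third algorithm above, offered as a hedged illustration of why it vectorizes so well: every merge stage is a fixed pattern of independent compare-exchange operations, ideal for vector machines like STAR despite the O(N (log N)^2) operation count. The input length must be a power of two.

      def bitonic_merge(a, up):
          if len(a) == 1:
              return a
          half = len(a) // 2
          for i in range(half):                    # independent compare-exchanges
              if (a[i] > a[i + half]) == up:
                  a[i], a[i + half] = a[i + half], a[i]
          return bitonic_merge(a[:half], up) + bitonic_merge(a[half:], up)

      def bitonic_sort(a, up=True):
          if len(a) <= 1:
              return a
          first = bitonic_sort(a[:len(a) // 2], True)    # ascending half
          second = bitonic_sort(a[len(a) // 2:], False)  # descending half
          return bitonic_merge(first + second, up)       # merge bitonic sequence

      print(bitonic_sort([7, 3, 9, 1, 6, 2, 8, 4]))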

  4. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.

    2013-12-01

    This paper is devoted to the consideration of a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table to a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  5. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This paper is devoted to the consideration of a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table to a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  6. A new hybrid evolutionary algorithm based on new fuzzy adaptive PSO and NM algorithms for Distribution Feeder Reconfiguration

    International Nuclear Information System (INIS)

    Niknam, Taher; Azadfarsani, Ehsan; Jabbari, Masoud

    2012-01-01

    Highlights: ► Network reconfiguration is a very important way to save electrical energy. ► This paper proposes a new algorithm to solve the DFR. ► The algorithm combines NFAPSO with NM. ► The proposed algorithm is tested on two distribution test feeders. - Abstract: Network reconfiguration for loss reduction in distribution systems is a very important way to save electrical energy. This paper proposes a new hybrid evolutionary algorithm to solve the Distribution Feeder Reconfiguration (DFR) problem. The algorithm is based on a combination of a New Fuzzy Adaptive Particle Swarm Optimization (NFAPSO) and the Nelder–Mead simplex search method (NM), called NFAPSO–NM. In the proposed algorithm, the new fuzzy adaptive particle swarm optimization includes two parts. The first part is Fuzzy Adaptive Binary Particle Swarm Optimization (FABPSO), which determines the status of tie switches (open or closed), and the second part is Fuzzy Adaptive Discrete Particle Swarm Optimization (FADPSO), which determines the sectionalizing switch number. On the other hand, because the results of the binary PSO (BPSO) and discrete PSO (DPSO) algorithms depend strongly on the values of their parameters, such as the inertia weight and learning factors, a fuzzy system is employed to adaptively adjust the parameters during the search process. Moreover, the Nelder–Mead simplex search method is combined with the NFAPSO algorithm to improve its performance. Finally, the proposed algorithm is tested on two distribution test feeders. The simulation results show that the proposed method is very powerful and guarantees obtaining the global optimum.

  7. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
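
    A sketch of the connected-components step mentioned above: records deemed similar are linked by an edge, and union-find collapses the resulting graph into clusters of records presumed to refer to the same entity. The path-halving detail is a standard choice, not necessarily the authors' exact implementation.

      def find(parent, x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]           # path halving
              x = parent[x]
          return x

      def connected_components(n_records, similar_pairs):
          parent = list(range(n_records))
          for a, b in similar_pairs:
              ra, rb = find(parent, a), find(parent, b)
              if ra != rb:
                  parent[ra] = rb                      # union the two clusters
          clusters = {}
          for r in range(n_records):
              clusters.setdefault(find(parent, r), []).append(r)
          return list(clusters.values())

      print(connected_components(6, [(0, 1), (1, 2), (4, 5)]))
      # -> [[0, 1, 2], [3], [4, 5]]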

  8. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on own previously published results. Methods 2,000 consecutive patients over 50...... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient...... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...

  9. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application in audio signal coding. The BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing more adequate quantizers for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
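
    One-dimensional block truncation coding, the backbone the paper adapts to audio, sketched below: each block keeps a bitmap plus two reconstruction levels chosen to preserve the block mean and variance. The block length is an illustrative parameter; the paper's modified quantizers are not reproduced here.

      import numpy as np

      def btc_block(x):
          m, s = x.mean(), x.std()
          bits = x >= m                                # 1-bit map per sample
          q = int(bits.sum())
          L = len(x)
          if q in (0, L):                              # flat block: single level
              return bits, m, m
          lo = m - s * np.sqrt(q / (L - q))            # moment-preserving levels
          hi = m + s * np.sqrt((L - q) / q)
          return bits, lo, hi

      def btc_decode(bits, lo, hi):
          return np.where(bits, hi, lo)

      x = np.sin(np.linspace(0, 4 * np.pi, 32)) + 0.1 * np.random.randn(32)
      bits, lo, hi = btc_block(x)
      x_hat = btc_decode(bits, lo, hi)                 # mean/variance preserved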

  10. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or better than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or slightly below that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.

  11. Fractal Landscape Algorithms for Environmental Simulations

    Science.gov (United States)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, together with the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can assist science education. The algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
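
    A minimal sketch of the diamond-square step structure described above (Python/NumPy; the corner seeding, roughness factor, and grid size are illustrative choices, not values from the study):

    import numpy as np

    def diamond_square(n, roughness=0.6, seed=0):
        # heightmap of size (2**n + 1) x (2**n + 1)
        rng = np.random.default_rng(seed)
        size = 2**n + 1
        h = np.zeros((size, size))
        h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seed corners
        step, scale = size - 1, 1.0
        while step > 1:
            half = step // 2
            # diamond step: centre of each square = mean of its corners + noise
            for y in range(half, size, step):
                for x in range(half, size, step):
                    avg = (h[y - half, x - half] + h[y - half, x + half] +
                           h[y + half, x - half] + h[y + half, x + half]) / 4
                    h[y, x] = avg + rng.uniform(-scale, scale)
            # square step: edge midpoints = mean of orthogonal neighbours + noise
            for y in range(0, size, half):
                for x in range((y + half) % step, size, step):
                    nbrs = [h[y + dy, x + dx]
                            for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                            if 0 <= y + dy < size and 0 <= x + dx < size]
                    h[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
            step, scale = half, scale * roughness  # halve the step, damp the noise
        return h

    terrain = diamond_square(5)   # a 33 x 33 fractal heightmap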

  12. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  13. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28

  14. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  15. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP), together with a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  16. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  17. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
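
    For orientation, the serial computation being accelerated can be illustrated with the simpler single-flow-direction (D8) rule; the sketch below is a toy Python version and does not attempt the paper's GPU parallelization, DEM preprocessing, or multiple-flow-direction (MFD-md) strategy.

    import numpy as np

    def flow_accumulation(dem):
        # D8: each cell drains to the lowest of its 8 neighbours; accumulation
        # counts the upstream area, processing cells from highest to lowest
        rows, cols = dem.shape
        order = np.dstack(np.unravel_index(
            np.argsort(dem, axis=None)[::-1], dem.shape))[0]
        acc = np.ones_like(dem, dtype=float)      # each cell contributes itself
        for r, c in order:
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
            target = min(nbrs, key=lambda rc: dem[rc])
            if dem[target] < dem[r, c]:           # pits keep their accumulation
                acc[target] += acc[r, c]
        return acc

    dem = np.array([[3., 2., 3.], [4., 1., 4.], [5., 0., 5.]])
    print(flow_accumulation(dem))                 # the outlet cell accumulates all 9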

  18. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    Science.gov (United States)

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. Many available record linkage algorithms suffer from either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. The time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
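
    A small illustration of complete-linkage clustering over pairwise edit distances, one building block the abstract names (Python with SciPy; the records and the distance threshold are invented for the example):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def edit_distance(a, b):
        # plain dynamic-programming Levenshtein distance
        d = np.arange(len(b) + 1)
        for i, ca in enumerate(a, 1):
            prev, d[0] = d[0], i
            for j, cb in enumerate(b, 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
        return int(d[-1])

    records = ["jon smith", "john smith", "john smyth", "jane doe", "jane d."]
    n = len(records)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = edit_distance(records[i], records[j])

    # complete linkage: a cluster's diameter (max pairwise distance) stays under the cut
    Z = linkage(squareform(dist), method="complete")
    labels = fcluster(Z, t=3, criterion="distance")   # two clusters: smiths, janes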

  19. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  20. Algorithms and Data Structures (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Algorithms have existed, in one form or another, for as long as humanity has. During the second half of the 20th century, the field was revolutionised with the introduction of ever faster computers. In these lectures we discuss how algorithms are designed, how to evaluate their speed, and how to identify areas of improvement in existing algorithms. An algorithm consists of more than just a series of instructions; almost as important is the memory structure of the data on which it operates. A part of the lectures will be dedicated to a discussion of the various ways one can store data in memory, and their advantages and disadvantages.

  1. Algorithms and Data Structures (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Algorithms have existed, in one form or another, for as long as humanity has. During the second half of the 20th century, the field was revolutionised with the introduction of ever faster computers. In these lectures we discuss how algorithms are designed, how to evaluate their speed, and how to identify areas of improvement in existing algorithms. An algorithm consists of more than just a series of instructions; almost as important is the memory structure of the data on which it operates. A part of the lectures will be dedicated to a discussion of the various ways one can store data in memory, and their advantages and disadvantages.

  2. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on its convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
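
    The modification is compact enough to state in code. A minimal NumPy sketch of the eigenvalue-modified Newton direction (the test function and the small eigenvalue floor are illustrative additions, not part of the paper):

    import numpy as np

    def modified_newton_direction(grad, hess):
        # eigendecompose the (symmetric) Hessian, replace negative eigenvalues by
        # their absolute values, and solve with the reconstructed matrix
        w, V = np.linalg.eigh(hess)
        w = np.maximum(np.abs(w), 1e-8)          # floor keeps the matrix invertible
        return -np.linalg.solve(V @ np.diag(w) @ V.T, grad)

    # one step on f(x, y) = x**4 - 2*x**2 + y**2, whose Hessian is indefinite
    x = np.array([0.1, 1.0])
    grad = np.array([4 * x[0]**3 - 4 * x[0], 2 * x[1]])
    hess = np.array([[12 * x[0]**2 - 4, 0.0], [0.0, 2.0]])
    step = modified_newton_direction(grad, hess)  # always a descent direction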

  3. Euclidean shortest paths exact or approximate algorithms

    CERN Document Server

    Li, Fajie

    2014-01-01

    This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.

  4. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords : searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996

  5. (MBO) algorithm in multi-reservoir system optimisation

    African Journals Online (AJOL)

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  6. An Implementation of RC4+ Algorithm and Zig-zag Algorithm in a Super Encryption Scheme for Text Security

    Science.gov (United States)

    Budiman, M. A.; Amalia; Chayanie, N. I.

    2018-03-01

    Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography than classical cryptography because it is harder to break. One classical algorithm is the Zig-zag algorithm, which uses the transposition technique: the original message is unreadable unless the person has the key to decrypt the message. To improve security, the Zig-zag cipher is combined with the RC4+ cipher, one of the symmetric key algorithms in the form of a stream cipher. The two algorithms are combined to form a super-encryption. By combining these two algorithms, the message becomes harder for a cryptanalyst to break. The results show that the complexity of the combined algorithm is θ(n²), while the complexities of the Zig-zag cipher and the RC4+ cipher are θ(n²) and θ(n), respectively.
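
    For concreteness, a minimal Python sketch of the zig-zag (rail fence) transposition stage; the three-rail key and the sample plaintext are invented for the example, and the RC4+ stream cipher stage is omitted:

    def zigzag_encrypt(text, rails):
        # write characters diagonally across the rails, then read rows left to right
        rows = [[] for _ in range(rails)]
        row, step = 0, 1
        for ch in text:
            rows[row].append(ch)
            if row == 0:
                step = 1
            elif row == rails - 1:
                step = -1
            row += step
        return "".join("".join(r) for r in rows)

    def zigzag_decrypt(cipher, rails):
        # recompute the rail pattern, slice the ciphertext into rows, replay it
        pattern, row, step = [], 0, 1
        for _ in cipher:
            pattern.append(row)
            if row == 0:
                step = 1
            elif row == rails - 1:
                step = -1
            row += step
        rows, pos = [], 0
        for r in range(rails):
            count = pattern.count(r)
            rows.append(list(cipher[pos:pos + count]))
            pos += count
        return "".join(rows[r].pop(0) for r in pattern)

    c = zigzag_encrypt("ATTACKATDAWN", 3)        # -> 'ACDTAKTANTAW'
    assert zigzag_decrypt(c, 3) == "ATTACKATDAWN"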

  7. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA), considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA) and a particle swarm optimization (PSO), and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  8. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image

  9. Arc-Search Infeasible Interior-Point Algorithm for Linear Programming

    OpenAIRE

    Yang, Yaguang

    2014-01-01

    Mehrotra's algorithm has been the most successful infeasible interior-point algorithm for linear programming since 1990. Most popular interior-point software packages for linear programming are based on Mehrotra's algorithm. This paper proposes an alternative algorithm, arc-search infeasible interior-point algorithm. We will demonstrate, by testing Netlib problems and comparing the test results obtained by arc-search infeasible interior-point algorithm and Mehrotra's algorithm, that the propo...

  10. Exact and Heuristic Algorithms for Runway Scheduling

    Science.gov (United States)

    Malik, Waqar A.; Jung, Yoon C.

    2016-01-01

    This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic based algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in real-time environment due to large computation times for moderate sized problems. We next propose a second algorithm that uses heuristics to restrict the search space for the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. Simulation conducted for the east side of Dallas/Fort Worth International Airport allows comparison of the three proposed algorithms and indicates that the ILS algorithm performs favorably in its ability to find efficient solutions and its computation times.

  11. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...

  12. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in its computational power and operating principle, named the quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, together with some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.

  13. Problem solving with genetic algorithms and Splicer

    Science.gov (United States)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
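
    Splicer itself is a NASA-distributed tool; as a generic illustration of the genetic algorithm concepts the paper introduces, here is a bare-bones generational GA in Python with tournament selection, one-point crossover, and bit-flip mutation (all parameter values are arbitrary):

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                          crossover_rate=0.9, mutation_rate=0.02):
        # random initial population of bit strings
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = [(fitness(ind), ind) for ind in pop]
            def select():                        # size-2 tournament: fitter wins
                a, b = random.sample(scored, 2)
                return (a if a[0] > b[0] else b)[1]
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = select(), select()
                if random.random() < crossover_rate:
                    cut = random.randrange(1, n_bits)        # one-point crossover
                    p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                nxt += [[b ^ (random.random() < mutation_rate) for b in c]  # bit flips
                        for c in (p1, p2)]
            pop = nxt[:pop_size]
        return max(pop, key=fitness)

    best = genetic_algorithm(sum)   # onemax: maximize the number of 1-bits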

  14. Computational algorithm for lifetime exposure to antimicrobials in pigs using register data-The LEA algorithm.

    Science.gov (United States)

    Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan

    2017-10-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure from a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach, data from national registers on antimicrobial purchases, movements of pigs, and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet periods. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime antimicrobial exposure of slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter from January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be

  15. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence by minimizing the distance between descriptors. Experimental results, obtained from image sequences that capture objects of different geometrical types under scaling, reveal the performance of the tracking algorithm.
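
    A minimal NumPy sketch of the descriptor-matching step on which such a tracker rests; the Lowe-style ratio test is a common addition and is not necessarily the paper's exact minimum-distance criterion. Tracking follows by chaining the matches of consecutive frames.

    import numpy as np

    def match_descriptors(desc_a, desc_b, ratio=0.8):
        # desc_a, desc_b: (n, 128) SIFT-style descriptor arrays (desc_b needs >= 2 rows)
        matches = []
        for i, d in enumerate(desc_a):
            dists = np.linalg.norm(desc_b - d, axis=1)
            j, k = np.argsort(dists)[:2]
            if dists[j] < ratio * dists[k]:     # best must clearly beat second best
                matches.append((i, j))
        return matches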

  16. Automatic bounding estimation in modified NLMS algorithm

    International Nuclear Information System (INIS)

    Shahtalebi, K.; Doost-Hoseini, A.M.

    2002-01-01

    The Modified Normalized Least Mean Square algorithm, a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which are unknown in most applications. For a special but popular case of measurement noise, a simple algorithm has been proposed. With some simulation examples, the performance of the proposed algorithm is compared with that of the Modified Normalized Least Mean Square algorithm.
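
    For reference, the unmodified NLMS recursion that the set-membership variant builds on can be sketched in a few lines of NumPy (the FIR channel-identification setup is an invented example):

    import numpy as np

    def nlms(x, d, order=8, mu=0.5, eps=1e-6):
        # w <- w + mu * e * u / (eps + ||u||^2): the normalized step size
        w = np.zeros(order)
        e = np.zeros(len(x))
        for n in range(order - 1, len(x)):
            u = x[n - order + 1:n + 1][::-1]    # most recent samples first
            e[n] = d[n] - w @ u
            w += mu * e[n] * u / (eps + u @ u)
        return w, e

    # identify an unknown FIR channel from noisy observations
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    h = np.array([0.9, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w, _ = nlms(x, d)                           # w converges towards h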

  17. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...

  18. Control of Lung Inflammation by Microbiome and Peptidoglycan Recognition Protein

    Science.gov (United States)

    2017-07-01

    Recognition Protein 1 (Pglyrp1). Microflora was depleted in mice with antibiotics, and pregnant females and their pups were colonized with microfloras...

  19. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...

  20. A Fast Fractional Difference Algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...
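
    Both records describe the same algorithm. Assuming the speed-up comes from replacing the O(T^2) running sum with FFT-based convolution of the fractional-difference weights, a sketch looks like this (NumPy/SciPy; not the authors' reference implementation):

    import numpy as np
    from scipy.signal import fftconvolve

    def fracdiff(x, d):
        # fractional difference (1 - L)^d x via FFT convolution
        T = len(x)
        k = np.arange(1, T)
        b = np.concatenate(([1.0], np.cumprod((k - 1 - d) / k)))  # binomial weights
        return fftconvolve(x, b)[:T]

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10_000)
    y = fracdiff(x, 0.4)   # same result as the naive sum, far fewer operations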

  1. Marshall Rosenbluth and the Metropolis algorithm

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    2005-01-01

    The 1953 publication, 'Equation of State Calculations by Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they remained unknown to the large simulation physics community that grew from this publication, and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
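
    The algorithm itself fits in a dozen lines. A random-walk Metropolis sketch in NumPy, here sampling a standard normal distribution from its unnormalized log-density:

    import numpy as np

    def metropolis(log_p, x0, steps=10_000, width=1.0, seed=0):
        # random-walk proposal x' = x + N(0, width^2); accept with prob min(1, p(x')/p(x))
        rng = np.random.default_rng(seed)
        x, lp = x0, log_p(x0)
        samples = np.empty(steps)
        for i in range(steps):
            x_new = x + width * rng.standard_normal()
            lp_new = log_p(x_new)
            if np.log(rng.random()) < lp_new - lp:   # Metropolis acceptance rule
                x, lp = x_new, lp_new
            samples[i] = x                           # rejected moves repeat x
        return samples

    s = metropolis(lambda x: -0.5 * x**2, x0=0.0)    # samples ~ standard normal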

  2. External-Memory Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Arge, Lars; Zeh, Norbert

    2010-01-01

    The data sets involved in many modern applications are often too massive to fit in main memory of even the most powerful computers and must therefore reside on disk. Thus communication between internal and external memory, and not actual computation time, becomes the bottleneck in the computation....... This is due to the huge difference in access time of fast internal memory and slower external memory such as disks. The goal of theoretical work in the area of external memory algorithms (also called I/O algorithms or out-of-core algorithms) has been to develop algorithms that minimize the Input...... in parallel and the use of parallel disks has received a lot of theoretical attention. See below for recent surveys of theoretical results in the area of I/O-efficient algorithms. TPIE is designed to bridge the gap between the theory and practice of parallel I/O systems. It is intended to demonstrate all...

  3. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  4. Linear Algorithms for Radioelectric Spectrum Forecast

    Directory of Open Access Journals (Sweden)

    Luis F. Pedraza

    2016-12-01

    This paper presents the development and evaluation of two linear algorithms for forecasting reception power on different channels in an assigned spectrum band of the Global System for Mobile Communications (GSM), in order to analyze the spatial opportunity for frequency reuse by secondary users (SUs) in a cognitive radio (CR) network. The algorithms employed are the seasonal autoregressive integrated moving average (SARIMA) and generalized autoregressive conditional heteroskedasticity (GARCH) models, which allow channel occupancy status to be forecast. Results are evaluated using the following criteria: availability and occupancy time of channels, different types of mean absolute error, and observation time. The contributions of this work include a more integral forecast, as the algorithm forecasts not only reception power but also the occupancy and availability time of a channel, to determine its precision percentage during use by primary users (PUs) and SUs within a CR system. The analyses demonstrate better performance for the SARIMA algorithm over the GARCH algorithm in most of the evaluated variables.
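
    As a hedged illustration of the SARIMA side of such a forecast, the following sketch fits a seasonal ARIMA model to a synthetic reception-power series with statsmodels; the model orders, the 24-hour season, and the toy data are assumptions, not the paper's settings:

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # toy stand-in for hourly channel reception power with a daily (24 h) cycle
    rng = np.random.default_rng(0)
    t = np.arange(24 * 14)
    power = -90 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

    # seasonal ARIMA(1,0,1)x(1,0,1,24); the orders here are illustrative
    model = SARIMAX(power, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
    fit = model.fit(disp=False)
    forecast = fit.forecast(steps=24)   # next day's expected reception power
    occupied = forecast > -88           # toy occupancy decision for SU access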

  5. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, are analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  6. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  7. Optimally stopped variational quantum algorithms

    Science.gov (United States)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  8. Recent Advancements in Lightning Jump Algorithm Work

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, to test the lightning jump algorithm configurations in other regions of the country, to increase the number of thunderstorms within our thunderstorm database, and to pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise for prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49%, and a Heidke Skill Score (HSS) of 0.66. The second best performing configuration was the Threshold 4 algorithm, which had a POD of 72%, FAR of 51%, CSI of 41%, and HSS of 0.58. Because a more complex algorithm configuration shows the most promise for operational use, accurate thunderstorm cell tracking must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm, because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth of lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).

  9. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  10. A Novel Hybrid Firefly Algorithm for Global Optimization.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE), and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.

  11. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking role of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of the algorithm on its performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal solution with fewer objective evaluations, improves optimization effectiveness, and achieves effective information fusion.

  12. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.

  13. Development and Production of Array Barrier Detectors at SCD

    Science.gov (United States)

    Klipstein, P. C.; Avnon, E.; Benny, Y.; Berkowicz, E.; Cohen, Y.; Dobromislin, R.; Fraenkel, R.; Gershon, G.; Glozman, A.; Hojman, E.; Ilan, E.; Karni, Y.; Klin, O.; Kodriano, Y.; Krasovitsky, L.; Langof, L.; Lukomsky, I.; Nevo, I.; Nitzani, M.; Pivnik, I.; Rappaport, N.; Rosenberg, O.; Shtrichman, I.; Shkedy, L.; Snapi, N.; Talmor, R.; Tessler, R.; Weiss, E.; Tuito, A.

    2017-09-01

    XBn or XBp barrier detectors exhibit diffusion-limited dark currents comparable with mercury cadmium telluride Rule-07 and high quantum efficiencies. In 2011, SemiConductor Devices (SCD) introduced "HOT Pelican D", a 640 × 512/15-μm pitch InAsSb/AlSbAs XBn mid-wave infrared (MWIR) detector with a 4.2-μm cut-off and an operating temperature of ˜150 K. Its low power (˜3 W), high pixel operability (>99.5%) and long mean time to failure make HOT Pelican D a highly reliable integrated detector-cooler product with a low size, weight and power. More recently, "HOT Hercules" was launched with a 1280 × 1024/15-μm format and similar advantages. A 3-megapixel, 10-μm pitch version ("HOT Blackbird") is currently completing development. For long-wave infrared applications, SCD's 640 × 512/15-μm pitch "Pelican-D LW" XBp type II superlattice (T2SL) detector has a ˜9.3-μm cut-off wavelength. The detector contains InAs/GaSb and InAs/AlSb T2SLs, and is fabricated into focal plane array (FPA) detectors using standard production processes including hybridization to a digital silicon read-out integrated circuit (ROIC), glue underfill and substrate thinning. The ROIC has been designed so that the complete detector closely follows the interfaces of SCD's MWIR Pelican-D detector family. The Pelican-D LW FPA has a quantum efficiency of ˜50%, and operates at 77 K with a pixel operability of >99% and a noise equivalent temperature difference of 13 mK at 30 Hz and F/2.7.

  14. Adiabatic quantum search algorithm for structured problems

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    The study of quantum computation has been motivated by the hope of finding efficient quantum algorithms for solving classically hard problems. In this context, quantum algorithms by local adiabatic evolution have been shown to solve an unstructured search problem with a quadratic speedup over a classical search, just as Grover's algorithm. In this paper, we study how the structure of the search problem may be exploited to further improve the efficiency of these quantum adiabatic algorithms. We show that by nesting a partial search over a reduced set of variables into a global search, it is possible to devise quantum adiabatic algorithms with a complexity that, although still exponential, grows with a reduced order in the problem size

  15. A Direct Search Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Enrique Baeyens

    2016-06-01

    A direct search algorithm is proposed for minimizing an arbitrary real-valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm's performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.

  16. Cross layer scheduling algorithm for LTE Downlink

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars

    2012-01-01

    This paper evaluates a cross-layer scheduling algorithm that aims at minimizing resource utilization. The algorithm makes scheduling decisions based on channel conditions, transmission buffer sizes, and differing QoS demands. The simulation results show that the new algorithm improves the resource...

  17. A tunable algorithm for collective decision-making.

    Science.gov (United States)

    Pratt, Stephen C; Sumpter, David J T

    2006-10-24

    Complex biological systems are increasingly understood in terms of the algorithms that guide the behavior of system components and the information pathways that link them. Much attention has been given to robust algorithms, or those that allow a system to maintain its functions in the face of internal or external perturbations. At the same time, environmental variation imposes a complementary need for algorithm versatility, or the ability to alter system function adaptively as external circumstances change. An important goal of systems biology is thus the identification of biological algorithms that can meet multiple challenges rather than being narrowly specified to particular problems. Here we show that emigrating colonies of the ant Temnothorax curvispinosus tune the parameters of a single decision algorithm to respond adaptively to two distinct problems: rapid abandonment of their old nest in a crisis and deliberative selection of the best available new home when their old nest is still intact. The algorithm uses a stepwise commitment scheme and a quorum rule to integrate information gathered by numerous individual ants visiting several candidate homes. By varying the rates at which they search for and accept these candidates, the ants yield a colony-level response that adaptively emphasizes either speed or accuracy. We propose such general but tunable algorithms as a design feature of complex systems, each algorithm providing elegant solutions to a wide range of problems.
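
    A toy simulation makes the speed-accuracy tuning of the stepwise-commitment-plus-quorum scheme visible; every rate, the quorum level, and the site qualities below are invented parameters, not measurements from the ant experiments:

    import random

    def emigrate(quality, search_rate, accept_rate, quorum=10, n_ants=100):
        # scouts discover sites, accept them in proportion to quality, and the
        # colony commits once any site's supporters reach the quorum
        committed = [0] * len(quality)
        t = 0
        while max(committed) < quorum and t < 10_000:
            t += 1
            for _ in range(n_ants):
                if random.random() < search_rate:        # a scout finds a site
                    site = random.randrange(len(quality))
                    if random.random() < accept_rate * quality[site]:
                        committed[site] += 1
        return committed.index(max(committed)), t

    # deliberative setting: low rates favour the better site (index 1) but take
    # longer; raising the rates mimics the crisis response: faster, less accurate
    site, steps = emigrate([0.2, 0.9], search_rate=0.05, accept_rate=0.5)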

  18. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    Science.gov (United States)

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies, since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms, and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  19. Engineering local optimality in quantum Monte Carlo algorithms

    Science.gov (United States)

    Pollet, Lode; Van Houcke, Kris; Rombouts, Stefan M. A.

    2007-08-01

    Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. We are guided by the rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations to devise a locally optimal formulation of the worm algorithm while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.

  20. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results....... set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly...... express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition...

  1. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in -machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times researchers have concentrated on applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of other established heuristics. The computational results show that the identified algorithms are more efficient and effective than the algorithms previously tested for this problem.

  2. Exotic quarkonium states in CMS experiment

    CERN Document Server

    Chen, Kai-Feng

    2013-01-01

    Using large data samples of di-muon events, CMS can perform detailed measurements and searches for new states in the field of exotic quarkonium. We present our results on the production of prompt and non-prompt $\\rm X(3872)$, detected in the ${\\rm J}/\\psi \\pi^+\\pi^-$ decay channel, which extend to higher $p_{\\rm T}$ values than in any previous measurement. The cross-section ratio with respect to the $\\psi(2S)$ is given differentially in $p_{\\rm T}$, as well as $p_{\\rm T}$ integrated. For the first time at the LHC, the fraction of $\\rm X(3872)$ coming from B hadron decays has been measured. After these studies of the charmonium $\\rm X$, we present a new search for its bottomonium counterpart, denoted as $\\rm X_b$, based on a data sample of pp collisions at 8 TeV collected by CMS in 2012. In analogy to the $\\rm X(3872)$ studies, the analysis uses the ${\\rm X_b} \\to \\Upsilon(1S) \\pi \\pi$ exclusive decay channel, with the $\\Upsilon(1S)$ decaying to $\\mu^+ \\mu^-$ pairs. No evidence for $\\rm X_b$ is observed and up...

  3. Shear bond strength evaluation of chemically-cured and light-cured orthodontic adhesives after enamel deproteinization with 5.25% sodium hypochlorite

    Science.gov (United States)

    Salim, J. C.; Krisnawati; Purbiati, M.

    2017-08-01

    This study aimed to assess the effect of enamel deproteinization with 5.25% sodium hypochlorite (NaOCl) before etching on the shear bond strength (SBS) of Unite (UN; 3M Unitek) and Xihu-BIOM adhesive (XB). Fifty-two maxillary first premolars were divided into four groups: (1) UN and (2) XB according to the manufacturer's recommendation, and (3) UN and (4) XB deproteinized with 5.25% NaOCl. Brackets were bonded, and a mechanical test was performed using a universal testing machine. The mean SBS value for groups A1, A2, B1, and B2 was 13.51 ± 2.552, 14.36 ± 2.902, 16.43 ± 2.615, and 13.05 ± 2.348 MPa, respectively. A statistically significant difference in SBS was observed between the chemically cured groups and between the group B subgroups (p < 0.05). NaOCl enamel deproteinization before acid etching has a significant effect on the SBS of Unite adhesive, but not on that of the Xihu-BIOM adhesive. Furthermore, a significant difference in the SBS of Unite and Xihu-BIOM adhesives within the enamel deproteinization group was observed in this study.

  4. Measurement of Exclusive $π^0$ Electroproduction Structure Functions and their Relationship to Transverse Generalized Parton Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Bedlinskiy, Ivan; Niccolai, Silvia; Stoler, Paul; Adhikari, Krishna; Aghasyan, Mher; Amaryan, Moskov; Anghinolfi, Marco; Avagyan, Harutyun; Baghdasaryan, Hovhannes; Ball, Jacques; Baltzell, Nathan; Battaglieri, Marco; Bennett, Robert; Biselli, Angela; Bookwalter, Craig; Boyarinov, Sergey; Briscoe, William; Brooks, Williams; Burkert, Volker; Carman, Daniel; Celentano, Andrea; Chandavar, Shloka; Charles, Gabriel; Contalbrigo, Marco; Crede, Volker; D'Angelo, Annalisa; Daniel, Aji; Dashyan, Natalya; De Vita, Raffaella; De Sanctis, Enzo; Deur, Alexandre; Djalali, Chaden; Doughty, David; Dupre, Raphael; Egiyan, Hovanes; El Alaoui, Ahmed; Elfassi, Lamiaa; Elouadrhiri, Latifa; Eugenio, Paul; Fedotov, Gleb; Fegan, Stuart; Fleming, Jamie; Forest, Tony; Garcon, Michel; Gevorgyan, Nerses; Giovanetti, Kevin; Girod, Francoi-Xavier; Gohn, Wesley; Gothe, Ralf; Graham, Lewis; Griffioen, Keith; Guegan, Baptiste; Guidal, Michel; Guo, Lei; Hafidi, Kawtar; Hakobyan, Hayk; Hanretty, Charles; Heddle, David; Hicks, Kenneth; Holtrop, Maurik; Ilieva, Yordanka; Ireland, David; Ishkhanov, Boris; Isupov, Evgeny; Jo, Hyon-Suk; Joo, Kyungseon; Keller, Dustin; Khanddaker, Mahbubul; Khertarpal, Puneet; Kim, Andrey; Kim, Wooyoung; Klein, Franz; Koirala, Suman; Kubarovsky, A; Kuhn, Sebastian; Kuleshov, Sergey; Kvaltine, Nicholas; Livingston, Kenneth; Lu, Haiyun; MacGregor, Ian; Mao, Yuqing; Markov, Nikolai; Martinez, D; Mayer, Michael; McKinnon, Bryan; Meyer, Curtis; Mineeva, Taisiya; Mirazita, Marco; Mokeev, Viktor; Moutarde, Herve; Munevar Espitia, Edwin; Munoz Camacho, Carlos; Nadel-Turonski, Pawel; Niculescu, Gabriel; Niculescu, Maria-Ioana; Osipenko, Mikhail; Ostrovidov, Alexander; Pappalardo, Luciano; Permuzyan, Rafayel; Park, Kijun; Park, Sungkyun; Pasyuk, Eugene; Pereira, Sergio; Phelps, Evan; Pisano, Silvia; Pogorelko, Oleg; Pozdnyakov, Sergey; Price, John; Procureur, Sebastien; Prok, Yelena; Protopopescu, Dan; Puckett, Andrew; Raue, Brian; Ricco, Giovanni; Rimal, Dipak; Ripani, Marco; Rosner, Guenther; Rossi, Patrizia; Sabatie, Franck; Saini, Mukesh; Salgado, Carlos; Saylor, Nicholas; Schott, Diane; Schumacher, Reinhard; Seder, Erin; Seraydaryan, Heghine; Sharabian, Youri; Smith, Gregory; Sober, Daniel; Sokhan, Daria; Stepanyan, Samuel; Strauch, Steffen; Taiuti, Mauro; Tang, Wei; Taylor, Charles; Tian, Ye; Tkachenko, Svyatoslav; Ungaro, Maurizio; Vineyard, Michael; Vlasov, Alexander; Voskanyan, Hakob; Voutier, Eric; Walford, Natalie; Watts, Daniel; Weinstein, Lawrence; Weygan, Dennis; Wood, Michael; Zachariou, Nicholas; Zhang, Jixie; Zhao, Zhiwen

    2012-09-01

    Exclusive $\pi^0$ electroproduction at a beam energy of 5.75 GeV has been measured with the Jefferson Lab CLAS spectrometer. Differential cross sections were measured at more than 1800 kinematic values in $Q^2$, $x_B$, $t$, and $\phi_\pi$, in the $Q^2$ range from 1.0 to 4.6 GeV$^2$, $-t$ up to 2 GeV$^2$, and $x_B$ from 0.1 to 0.58. Structure functions $\sigma_T + \epsilon \sigma_L$, $\sigma_{TT}$ and $\sigma_{LT}$ were extracted as functions of $t$ for each of 17 combinations of $Q^2$ and $x_B$. The data were compared directly with two handbag-based calculations including both longitudinal and transversity GPDs. Inclusion of only longitudinal GPDs very strongly underestimates $\sigma_T + \epsilon \sigma_L$ and fails to account for $\sigma_{TT}$ and $\sigma_{LT}$, while inclusion of transversity GPDs brings the calculations into substantially better agreement with the data. There is very strong sensitivity to the relative contributions of nucleon helicity flip and helicity non-flip processes. The results confirm that exclusive $\pi^0$ electroproduction offers direct experimental access to the transversity GPDs.

  5. Measurement of exclusive π(0) electroproduction structure functions and their relationship to transverse generalized parton distributions.

    Science.gov (United States)

    Bedlinskiy, I; Kubarovsky, V; Niccolai, S; Stoler, P; Adhikari, K P; Aghasyan, M; Amaryan, M J; Anghinolfi, M; Avakian, H; Baghdasaryan, H; Ball, J; Baltzell, N A; Battaglieri, M; Bennett, R P; Biselli, A S; Bookwalter, C; Boiarinov, S; Briscoe, W J; Brooks, W K; Burkert, V D; Carman, D S; Celentano, A; Chandavar, S; Charles, G; Contalbrigo, M; Crede, V; D'Angelo, A; Daniel, A; Dashyan, N; De Vita, R; De Sanctis, E; Deur, A; Djalali, C; Doughty, D; Dupre, R; Egiyan, H; El Alaoui, A; El Fassi, L; Elouadrhiri, L; Eugenio, P; Fedotov, G; Fegan, S; Fleming, J A; Forest, T A; Fradi, A; Garçon, M; Gevorgyan, N; Giovanetti, K L; Girod, F X; Gohn, W; Gothe, R W; Graham, L; Griffioen, K A; Guegan, B; Guidal, M; Guo, L; Hafidi, K; Hakobyan, H; Hanretty, C; Heddle, D; Hicks, K; Holtrop, M; Ilieva, Y; Ireland, D G; Ishkhanov, B S; Isupov, E L; Jo, H S; Joo, K; Keller, D; Khandaker, M; Khetarpal, P; Kim, A; Kim, W; Klein, F J; Koirala, S; Kubarovsky, A; Kuhn, S E; Kuleshov, S V; Kvaltine, N D; Livingston, K; Lu, H Y; MacGregor, I J D; Mao, Y; Markov, N; Martinez, D; Mayer, M; McKinnon, B; Meyer, C A; Mineeva, T; Mirazita, M; Mokeev, V; Moutarde, H; Munevar, E; Munoz Camacho, C; Nadel-Turonski, P; Niculescu, G; Niculescu, I; Osipenko, M; Ostrovidov, A I; Pappalardo, L L; Paremuzyan, R; Park, K; Park, S; Pasyuk, E; Anefalos Pereira, S; Phelps, E; Pisano, S; Pogorelko, O; Pozdniakov, S; Price, J W; Procureur, S; Prok, Y; Protopopescu, D; Puckett, A J R; Raue, B A; Ricco, G; Rimal, D; Ripani, M; Rosner, G; Rossi, P; Sabatié, F; Saini, M S; Salgado, C; Saylor, N; Schott, D; Schumacher, R A; Seder, E; Seraydaryan, H; Sharabian, Y G; Smith, G D; Sober, D I; Sokhan, D; Stepanyan, S S; Stepanyan, S; Strauch, S; Taiuti, M; Tang, W; Taylor, C E; Tian, Ye; Tkachenko, S; Ungaro, M; Vineyard, M F; Vlassov, A; Voskanyan, H; Voutier, E; Walford, N K; Watts, D P; Weinstein, L B; Weygand, D P; Wood, M H; Zachariou, N; Zhang, J; Zhao, Z W; Zonta, I

    2012-09-14

    Exclusive π(0) electroproduction at a beam energy of 5.75 GeV has been measured with the Jefferson Lab CLAS spectrometer. Differential cross sections were measured at more than 1800 kinematic values in Q(2), x(B), t, and ϕ(π), in the Q(2) range from 1.0 to 4.6  GeV(2), -t up to 2  GeV(2), and x(B) from 0.1 to 0.58. Structure functions σ(T)+ϵσ(L), σ(TT), and σ(LT) were extracted as functions of t for each of 17 combinations of Q(2) and x(B). The data were compared directly with two handbag-based calculations including both longitudinal and transversity generalized parton distributions (GPDs). Inclusion of only longitudinal GPDs very strongly underestimates σ(T)+ϵσ(L) and fails to account for σ(TT) and σ(LT), while inclusion of transversity GPDs brings the calculations into substantially better agreement with the data. There is very strong sensitivity to the relative contributions of nucleon helicity-flip and helicity nonflip processes. The results confirm that exclusive π(0) electroproduction offers direct experimental access to the transversity GPDs.

  6. Measurement of Exclusive π0 Electroproduction Structure Functions and their Relationship to Transverse Generalized Parton Distributions

    Science.gov (United States)

    Bedlinskiy, I.; Kubarovsky, V.; Niccolai, S.; Stoler, P.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anghinolfi, M.; Avakian, H.; Baghdasaryan, H.; Ball, J.; Baltzell, N. A.; Battaglieri, M.; Bennett, R. P.; Biselli, A. S.; Bookwalter, C.; Boiarinov, S.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Contalbrigo, M.; Crede, V.; D'Angelo, A.; Daniel, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fedotov, G.; Fegan, S.; Fleming, J. A.; Forest, T. A.; Fradi, A.; Garçon, M.; Gevorgyan, N.; Giovanetti, K. L.; Girod, F. X.; Gohn, W.; Gothe, R. W.; Graham, L.; Griffioen, K. A.; Guegan, B.; Guidal, M.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Heddle, D.; Hicks, K.; Holtrop, M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Keller, D.; Khandaker, M.; Khetarpal, P.; Kim, A.; Kim, W.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kuhn, S. E.; Kuleshov, S. V.; Kvaltine, N. D.; Livingston, K.; Lu, H. Y.; MacGregor, I. J. D.; Mao, Y.; Markov, N.; Martinez, D.; Mayer, M.; McKinnon, B.; Meyer, C. A.; Mineeva, T.; Mirazita, M.; Mokeev, V.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Pasyuk, E.; Anefalos Pereira, S.; Phelps, E.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Prok, Y.; Protopopescu, D.; Puckett, A. J. R.; Raue, B. A.; Ricco, G.; Rimal, D.; Ripani, M.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Saylor, N.; Schott, D.; Schumacher, R. A.; Seder, E.; Seraydaryan, H.; Sharabian, Y. G.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S. S.; Stepanyan, S.; Strauch, S.; Taiuti, M.; Tang, W.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Ungaro, M.; Vineyard, M. F.; Vlassov, A.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Weinstein, L. B.; Weygand, D. P.; Wood, M. H.; Zachariou, N.; Zhang, J.; Zhao, Z. W.; Zonta, I.

    2012-09-01

    Exclusive π0 electroproduction at a beam energy of 5.75 GeV has been measured with the Jefferson Lab CLAS spectrometer. Differential cross sections were measured at more than 1800 kinematic values in Q2, xB, t, and ϕπ, in the Q2 range from 1.0 to 4.6 GeV2, -t up to 2 GeV2, and xB from 0.1 to 0.58. Structure functions σT+ɛσL, σTT, and σLT were extracted as functions of t for each of 17 combinations of Q2 and xB. The data were compared directly with two handbag-based calculations including both longitudinal and transversity generalized parton distributions (GPDs). Inclusion of only longitudinal GPDs very strongly underestimates σT+ɛσL and fails to account for σTT and σLT, while inclusion of transversity GPDs brings the calculations into substantially better agreement with the data. There is very strong sensitivity to the relative contributions of nucleon helicity-flip and helicity nonflip processes. The results confirm that exclusive π0 electroproduction offers direct experimental access to the transversity GPDs.

  7. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  8. MO-FG-CAMPUS-TeP2-03: Multi-Criteria Optimization Using Taguchi Method for SRS of Multiple Lesions by Single Isocenter

    Energy Technology Data Exchange (ETDEWEB)

    Alani, S; Honig, N; Schlocker, A; Corn, B [Tel Aviv Medical Center, Tel Aviv (Israel)

    2016-06-15

    Purpose: This study utilizes the Taguchi Method to evaluate the VMAT planning parameters of single isocenter treatment plans for multiple brain metastases. An optimization model based on Taguchi and the utility concept is employed to optimize the planning parameters, including arc arrangement, calculation grid size, calculation model, and beam energy, on multiple performance characteristics, namely conformity index and dose to normal brain. Methods: Treatment plans, each with 4 metastatic brain lesions, were planned using the single isocenter technique. The collimator angles were optimized to avoid open areas. In this analysis four planning parameters (a–d) were considered: (a)-Arc arrangements: set1: Gantry 181cw179, couch0; gantry179ccw0, couch315; and gantry0ccw181, couch45. set2: set1 plus an additional arc: Gantry 0cw179, couch270. (b)-Energy: 6-MV; 6MV-FFF. (c)-Calculation grid size: 1mm; 1.5mm. (d)-Calculation models: AAA; Acuros. Treatment planning was performed in Varian Eclipse (ver.11.0.30). A suitable orthogonal array (L8) was selected to perform the experiments. After conducting the experiments with the combinations of planning parameters, the conformity index (CI) and the normal brain dose S/N ratio for each parameter were calculated. Optimum levels for the multiple response optimizations were determined. Results: We determined that the factors most affecting the conformity index are arc arrangement and beam energy. These tests were also used to evaluate dose to normal brain. In these evaluations, the significant parameters were grid size and calculation model. Using the utility concept we determined the combination of each of the four factors tested in this study that most significantly influence the quality of the resulting treatment plans: (a)-arc arrangement-set2, (b)-6MV, (c)-calc. grid 1mm, (d)-Acuros algorithm. Overall, the dominant significant influences on plan quality are (a)-arc arrangement and (b)-beam energy. Conclusion: Results were analyzed using ANOVA and
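
    The Taguchi signal-to-noise analysis described above is straightforward to reproduce. Below is a minimal sketch, assuming a hypothetical L8 array over the four two-level factors and made-up response values (not the study's data): CI is treated as larger-is-better and normal-brain dose as smaller-is-better.

```python
import numpy as np

# Hypothetical L8 orthogonal array over 4 two-level factors:
# (a) arc set, (b) energy, (c) grid size, (d) calculation model.
L8 = np.array([
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
])

# Hypothetical responses per run: conformity index (larger is better)
# and normal-brain dose in Gy (smaller is better).
ci = np.array([0.80, 0.82, 0.78, 0.85, 0.88, 0.86, 0.84, 0.90])
dose = np.array([4.1, 3.9, 4.4, 3.8, 3.6, 3.7, 4.0, 3.5])

def sn_larger_is_better(y):
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    return -10.0 * np.log10(np.mean(y**2))

# Mean S/N ratio per factor level: the level with the higher S/N is
# preferred, and the factor with the larger spread dominates.
for factor in range(4):
    for level in (0, 1):
        runs = L8[:, factor] == level
        print(factor, level,
              round(sn_larger_is_better(ci[runs]), 2),
              round(sn_smaller_is_better(dose[runs]), 2))
```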

  9. SU-F-BRA-12: End-User Oriented Tools and Procedures for Testing Brachytherapy TPSs Employing MBDCAs

    Energy Technology Data Exchange (ETDEWEB)

    Peppa, V; Pappas, E; Lahanas, V; Pantelis, E; Papagiannis, P [Medical Physics Laboratory, Medical School, University of Athens (Greece)

    2015-06-15

    Purpose: To develop user-oriented tools for commissioning and dosimetry testing of {sup 192}Ir brachytherapy treatment planning systems (TPSs) employing model based dose calculation algorithms (MBDCAs). Methods: A software tool (BrachyGuide) has been developed for the automatic generation of MCNP6 input files from any CT-based plan exported in DICOM RT format from Elekta and Varian TPSs. BrachyGuide also facilitates the evaluation of imported Monte Carlo (MC) and TPS dose distributions in terms of % dose differences and gamma index (CT overlaid colormaps or relative frequency plots) as well as DVHs and related indices. For users not equipped to perform MC, a set of computational models was prepared in DICOM format, accompanied by treatment plans and corresponding MCNP6 generated reference data. BrachyGuide can then be used to compare institutional and reference data as per TG186. The model set includes a water sphere with the MBDCA WG {sup 192}Ir source placed centrally and in two eccentric positions, a water sphere with cubic bone and lung inhomogeneities and a plan with five source dwells, and a patient equivalent model with an Accelerated Partial Breast Irradiation (APBI) plan. Results: The tools developed were used for the dosimetry testing of the Acuros and ACE MBDCAs implemented in BrachyVision v.13 and Oncentra Brachy v.4.5, respectively. Findings were consistent with previous results in the literature. Besides points close to the source dwells, Acuros was found to agree within type A uncertainties with the reference MC results. Differences greater than MC type A uncertainty were observed for ACE at distances >5cm from the source dwells and in bone. Conclusion: The tools developed are efficient for brachytherapy MBDCA planning commissioning and testing. Since they are appropriate for distribution over the web, they will be put at the AAPM WG MBDCA’s disposal. Research co-financed by the ESF and Greek funds. NSRF operational Program: Education and Lifelong
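
    For orientation, the gamma-index comparison such a tool performs can be sketched in a few lines. This is a brute-force 1D version under hypothetical 3%/3 mm criteria; real tools like the one described operate on 3D dose grids with many optimizations.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=3.0):
    """Brute-force 1D gamma index with global dose normalization.

    dose_ref, dose_eval: dose profiles sampled at positions x (mm).
    """
    d_norm = dose_tol * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / d_norm           # dose-difference term
        dx = (x - xi) / dist_tol                 # distance-to-agreement term
        gamma[i] = np.sqrt(dx**2 + dd**2).min()  # minimum over eval points
    return gamma

x = np.linspace(0, 100, 201)
ref = np.exp(-((x - 50) / 20) ** 2)
ev = np.exp(-((x - 51) / 20) ** 2)          # slightly shifted profile
print((gamma_1d(ref, ev, x) <= 1).mean())   # gamma pass rate
```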

  10. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
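
    The hybrid scheme combines a molecular-dynamics trajectory with a stochastic Metropolis step that removes the finite-time-step error. A minimal sketch for a one-dimensional Gaussian target follows; it is illustrative only and far from the lattice-gauge-theory setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(q):          # potential U(q) = q^2 / 2, a unit Gaussian target
    return q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.normal()                     # refresh the momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)   # leapfrog: initial half kick
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)   # final half kick
    # Metropolis accept/reject corrects the discrete-time-step error.
    dH = (q_new**2 + p_new**2 - q**2 - p**2) / 2
    return q_new if rng.random() < np.exp(-dH) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))  # ~0 and ~1 for the unit Gaussian
```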

  11. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means clustering algorithm applies normalization prior to clustering on the available data and calculates initial centroids based on weights. Experimental results prove the betterment of the proposed N-K means clustering algorithm over existing...
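
    A minimal sketch of the normalize-then-cluster idea: plain min-max scaling followed by standard k-means. The paper's weight-based centroid seeding is not reproduced here; random seeding stands in for it.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, n_iter=100):
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        labels = np.argmin(
            ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            new.append(pts.mean(0) if len(pts) else centroids[j])
        centroids = np.array(new)
    return labels, centroids

# Min-max normalization so no feature dominates the distance metric.
X = rng.normal(size=(200, 3)) * np.array([1.0, 100.0, 0.01])
X_norm = (X - X.min(0)) / (X.max(0) - X.min(0))
labels, _ = kmeans(X_norm, k=3)
print(np.bincount(labels))
```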

  12. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
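
    The core idea of assigning short binary codes to DNA bases can be illustrated with plain 2-bit packing. This is a simplification for illustration; the actual DNABIT Compress coding of repetitive segments is more involved.

```python
# 2-bit encoding: 4 bases per byte instead of 1 byte per base.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        # Pad the last byte's remaining slots with zero bits.
        byte <<= 2 * (4 - len(seq[i:i + 4]))
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])

seq = "ACGTACGTAC"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(len(seq), "bases ->", len(packed), "bytes")
```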

  13. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  14. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

    OpenAIRE

    Zuev, Yu. A.

    2003-01-01

    The class of linear decision rules is studied. A new algorithm for weight correction, called an "accelerated perceptron", is proposed. In contrast to Rosenblatt's classical perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing the decision reliability by means of weighted voting. I...
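
    For contrast, here is the classical mistake-driven perceptron the paper departs from, as a minimal sketch on synthetic linearly separable data; the "accelerated" variant described above would adjust the weights on every step, not only on misclassifications.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearly separable toy data: labels produced by a hidden weight vector.
X = rng.normal(size=(200, 2))
w_true = np.array([1.5, -2.0])
y = np.sign(X @ w_true)

w = np.zeros(2)
for epoch in range(100):
    mistakes = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:      # misclassified (or on the boundary)
            w += yi * xi            # classical Rosenblatt correction
            mistakes += 1
    if mistakes == 0:               # converged: all points classified
        break
print(epoch, w)
```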

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk (A, C); (No + 1)th disk is moved from A to C directly ...
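
    The snippet refers to the Towers of Hanoi. The standard recursive algorithm it builds toward can be written compactly; in this sketch, move_disk simply prints the move.

```python
def move_disk(src, dst):
    print(f"move disk from {src} to {dst}")

def hanoi(n, src="A", dst="C", aux="B"):
    """Move n disks from src to dst using aux as the auxiliary rod."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)  # clear the top n-1 disks onto aux
    move_disk(src, dst)          # move the largest disk directly
    hanoi(n - 1, aux, dst, src)  # restack the n-1 disks on top

hanoi(3)   # 2**3 - 1 = 7 moves
```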

  16. M4GB : Efficient Groebner Basis algorithm

    NARCIS (Netherlands)

    R.H. Makarim (Rusydi); M.M.J. Stevens (Marc)

    2017-01-01

    We introduce a new efficient algorithm for computing Groebner bases, named M4GB. Like Faugere's algorithm F4, it is an extension of Buchberger's algorithm that describes how to store already computed (tail-)reduced multiples of basis polynomials to prevent redundant work in the reduction
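
    M4GB itself is a specialized implementation, but the object it computes can be explored with SymPy's generic Buchberger-based routine. The sketch below illustrates Groebner bases, not the M4GB algorithm.

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")
# A small polynomial system; its Groebner basis (lex order) exposes
# the triangular structure useful for solving the system.
F = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]
G = groebner(F, x, y, z, order="lex")
for g in G.exprs:
    print(g)
```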

  17. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) Consider a search space of N elements. One of these elements is "marked" and our goal is to find it. We describe a quantum algorithm to solve this problem
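
    The marked-element search in point 1 is Grover's problem, and the roughly (π/4)·√N iteration count of Grover's algorithm is easy to verify with a dense statevector simulation. A NumPy sketch, not a run on quantum hardware:

```python
import numpy as np

N, marked = 64, 42
state = np.full(N, 1 / np.sqrt(N))       # uniform superposition

n_iter = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(n_iter):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean
print(n_iter, state[marked] ** 2)        # success probability close to 1
```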

  18. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector and has been used to measure the performance of tau identification algorithms by ...

  19. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform: 1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1 versions.
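
    A minimal Needleman-Wunsch scoring recurrence in Python, as a sketch with hypothetical match/mismatch/gap scores rather than the authors' code:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Fill the global-alignment DP table and return the optimal score."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i] with b[j]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```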

  20. Formal verification of algorithms for critical systems

    Science.gov (United States)

    Rushby, John M.; Von Henke, Friedrich

    1993-01-01

    We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.

  1. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  2. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  3. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.
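
    SciPy now ships a GCV-driven smoothing spline, which makes the book's central computation easy to try. The sketch below assumes SciPy >= 1.10, where make_smoothing_spline selects the smoothing parameter by generalized cross-validation when lam is omitted.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

spline = make_smoothing_spline(x, y)   # lam chosen by minimizing GCV
residual = y - spline(x)
print(residual.std())                  # close to the noise level, ~0.3
```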

  4. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that the updated messages cannot be used until the next iteration, thus reducing the convergence speed. In this case, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, is proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and then two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They can improve the LBP algorithm's decoding speed while maintaining a good decoding performance.

  5. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

    To solve the combination explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated using a formalization reasoning algorithm as the initial population of a genetic algorithm. Then fuzzy knowledge of assembly is integrated into the planning process of the genetic algorithm and ant algorithm to obtain an accurate solution. At last, an example is given to verify the feasibility of the composite algorithm.

  6. Successive combination jet algorithm for hadron collisions

    International Nuclear Information System (INIS)

    Ellis, S.D.; Soper, D.E.

    1993-01-01

    Jet finding algorithms, as they are used in e+e- and hadron collisions, are reviewed and compared. It is suggested that a successive combination style algorithm, similar to that used in e+e- physics, might be useful also in hadron collisions, where cone style algorithms have been used previously
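
    A successive-combination (kt-style) clustering loop can be sketched directly from its definition. This is a deliberately simplified toy version with hypothetical input particles and a naive pt-weighted recombination; production code such as FastJet is far more efficient and careful.

```python
import math

# Each pseudojet: (pt, rapidity y, azimuth phi); hypothetical toy input.
particles = [(50.0, 0.1, 0.2), (45.0, 0.15, 0.25), (30.0, -1.0, 2.9),
             (25.0, -1.1, 3.0), (10.0, 2.0, 5.0)]
R = 0.7

def dij(a, b):
    """Pair distance min(kt_i^2, kt_j^2) * dR^2 / R^2."""
    dy, dphi = a[1] - b[1], abs(a[2] - b[2])
    dphi = min(dphi, 2 * math.pi - dphi)
    return min(a[0], b[0]) ** 2 * (dy**2 + dphi**2) / R**2

jets, pseudo = [], list(particles)
while pseudo:
    # Beam distance d_iB = kt_i^2; its minimum is the softest pseudojet.
    i_b = min(range(len(pseudo)), key=lambda i: pseudo[i][0] ** 2)
    pairs = [(dij(pseudo[i], pseudo[j]), i, j)
             for i in range(len(pseudo)) for j in range(i + 1, len(pseudo))]
    if pairs and min(pairs)[0] < pseudo[i_b][0] ** 2:
        _, i, j = min(pairs)         # merge the closest pair
        a, b = pseudo[j], pseudo[i]
        pseudo.pop(j); pseudo.pop(i)  # pop larger index first
        pt = a[0] + b[0]              # simple pt-weighted recombination
        y = (a[0] * a[1] + b[0] * b[1]) / pt
        phi = (a[0] * a[2] + b[0] * b[2]) / pt
        pseudo.append((pt, y, phi))
    else:
        jets.append(pseudo.pop(i_b))  # promote to a final jet
print(jets)
```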

  7. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    Science.gov (United States)

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  8. The Top Ten Algorithms in Data Mining

    CERN Document Server

    Wu, Xindong

    2009-01-01

    From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementation, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researc

  9. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information. It can obtain a better effect than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering processing. The quantum filtering algorithm can get a better filtering result than the classical algorithm through the comparison. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  10. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency to fall into the local optimum of standard particle swarm optimization when dealing with nonlinear constraint optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a penalty factor in the penalty function method; it generates the initial feasible population randomly, which accelerates particle swarm convergence speed, and introduces genetic algorithm crossover and mutation strategies to avoid the particle swarm falling into the local optimum. Through optimization calculations on typical test functions, the results show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied in nuclear power plant optimization, and the optimization results are significant. (authors)

  11. Ant Colony Clustering Algorithm and Improved Markov Random Fusion Algorithm in Image Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Guohua Zou

    2016-12-01

    New medical imaging technologies, such as Computed Tomography and Magnetic Resonance Imaging (MRI), have been widely used in all aspects of medical diagnosis. The purpose of these imaging techniques is to obtain various qualitative and quantitative data of the patient comprehensively and accurately, and to provide correct digital information for diagnosis, treatment planning and evaluation after surgery. MR has a good imaging diagnostic advantage for brain diseases. However, as the requirements for brain image definition and quantitative analysis are always increasing, better segmentation of MR brain images is necessary. The FCM (Fuzzy C-means) algorithm is widely applied in image segmentation, but it has some shortcomings, such as long computation time and poor anti-noise capability. In this paper, firstly, the Ant Colony algorithm is used to determine the cluster centers and the number of clusters for the FCM algorithm so as to improve its running speed. Then an improved Markov random field model is used to improve the algorithm, so that its anti-noise ability can be improved. Experimental results show that the algorithm put forward in this paper has obvious advantages in image segmentation speed and segmentation effect.
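
    The FCM update that the paper accelerates is itself compact. A minimal fuzzy C-means sketch on random 2D data, without the ant-colony seeding or the Markov-field refinement described above:

```python
import numpy as np

rng = np.random.default_rng(4)

def fcm(X, c=2, m=2.0, n_iter=100):
    # U: membership matrix; each row sums to 1 across the c clusters.
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                  # avoid division by zero
        inv = d ** (-2 / (m - 1))                 # standard FCM update
        U = inv / inv.sum(1, keepdims=True)
    return U, centers

X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
U, centers = fcm(X, c=2)
print(centers)   # close to the two generating means, (0,0) and (5,5)
```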

  12. Joint control algorithm in access network

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To deal with long probing delay and inaccurate probing results in the endpoint admission control method, a joint local and end-to-end admission control algorithm is proposed, which introduces local probing of the access network besides end-to-end probing. Through local probing, the algorithm accurately estimates the resource status of the access network. Simulation shows that this algorithm can improve admission control performance and reduce users' average waiting time when the access network is heavily loaded.

  13. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do

  14. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Cloud computing provides a framework for seamless access to resources through a network. Access to resources is quantified through SLAs between service providers and users. Service providers try to best exploit their resources and reduce idle times of the resources. Growing energy concerns further complicate the task of service providers. Users' requests are served by allocating user tasks to resources in Cloud and Grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters like utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT has established the robustness of the hybrid planning algorithm through rigorous simulations.
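
    The HEFT family of algorithms prioritizes tasks by their upward rank. A minimal sketch of that computation on a toy task DAG with hypothetical costs; RHEFT's Interior Scheduling matching step is not reproduced.

```python
from functools import lru_cache

# Toy DAG: task -> successors, mean computation cost, mean edge cost.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w = {"A": 10, "B": 8, "C": 12, "D": 6}                 # avg compute cost
comm = {("A", "B"): 3, ("A", "C"): 5, ("B", "D"): 2, ("C", "D"): 4}

@lru_cache(maxsize=None)
def rank_u(task):
    """Upward rank: critical-path length from the task to the exit node."""
    if not succ[task]:
        return w[task]
    return w[task] + max(comm[(task, s)] + rank_u(s) for s in succ[task])

# Tasks are scheduled in decreasing upward-rank order.
order = sorted(succ, key=rank_u, reverse=True)
print([(t, rank_u(t)) for t in order])   # A=37, C=22, B=16, D=6
```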

  15. Iterative group splitting algorithm for opportunistic scheduling systems

    KAUST Repository

    Nam, Haewoon

    2014-05-01

    An efficient feedback algorithm for opportunistic scheduling systems based on iterative group splitting is proposed in this paper. Similar to the opportunistic splitting algorithm, the proposed algorithm adjusts (or lowers) the feedback threshold during a guard period if no user sends a feedback. However, when a feedback collision occurs at any point in time, the proposed algorithm no longer updates the threshold but narrows down the user search space by dividing the users into multiple groups iteratively, whereas the opportunistic splitting algorithm keeps adjusting the threshold until a single user is found. Since the threshold is only updated when no user sends a feedback, it is shown that the proposed algorithm significantly alleviates the signaling overhead for the threshold distribution to the users by the scheduler. More importantly, the proposed algorithm requires fewer mini-slots than the opportunistic splitting algorithm to make a user selection with a given level of scheduling outage probability, or provides a higher ergodic capacity given a certain number of mini-slots. © 2013 IEEE.

  16. Duality quantum algorithm efficiently simulates open quantum systems

    Science.gov (United States)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
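
    The Kraus-operator evolution at the heart of the algorithm is easy to state classically. A NumPy sketch of an amplitude-damping channel acting on a density matrix, as a classical illustration of the map ρ → Σ K_i ρ K_i† rather than the duality-quantum implementation:

```python
import numpy as np

gamma = 0.3                                   # damping probability
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

rho = np.array([[0.0, 0.0], [0.0, 1.0]])      # excited state |1><1|

# Completely positive trace-preserving map: rho -> sum_i K_i rho K_i^dag
rho_out = sum(K @ rho @ K.conj().T for K in (K0, K1))
print(rho_out)                                # population moves to |0><0|
print(np.trace(rho_out))                      # trace preserved: 1.0
```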

  17. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of these algorithms are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
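
    At second order, the cross-stencil wave-equation update being accelerated is a three-time-level finite difference. A plain NumPy reference version of that stencil in 1D follows; it has none of the GPU tiling that DiamondTorre reorders, only the update itself.

```python
import numpy as np

nx, nt = 200, 400
c, dx, dt = 1.0, 1.0, 0.5          # Courant number c*dt/dx = 0.5 (stable)
r2 = (c * dt / dx) ** 2

u = np.exp(-0.05 * (np.arange(nx) - nx // 2) ** 2)  # initial pulse
u_prev = u.copy()                  # zero initial velocity

for _ in range(nt):
    u_next = np.empty_like(u)
    # Second-order cross stencil in space and time.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0   # fixed (Dirichlet) boundaries
    u_prev, u = u, u_next
print(float(np.abs(u).max()))
```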

  18. A Newton-type neural network learning algorithm

    International Nuclear Information System (INIS)

    Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.

    1993-01-01

    First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab

  19. Location-Aware Mobile Learning of Spatial Algorithms

    Science.gov (United States)

    Karavirta, Ville

    2013-01-01

    Learning an algorithm--a systematic sequence of operations for solving a problem with given input--is often difficult for students due to the abstract nature of the algorithms and the data they process. To help students understand the behavior of algorithms, a subfield in computing education research has focused on algorithm…

  20. Calculating Graph Algorithms for Dominance and Shortest Path

    DEFF Research Database (Denmark)

    Sergey, Ilya; Midtgaard, Jan; Clarke, Dave

    2012-01-01

    We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation...
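
    The least-fixed-point view of single-source shortest paths translates directly into a Bellman-Ford-style iteration: start from an over-approximation and apply the edge-relaxation operator until nothing changes. A sketch of that fixed-point computation (assuming no negative cycles), not the authors' calculational derivation:

```python
import math

# Weighted directed graph as an edge list (u, v, weight).
edges = [("s", "a", 2), ("s", "b", 7), ("a", "b", 3), ("b", "c", 1),
         ("a", "c", 8)]
nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}

dist = {n: math.inf for n in nodes}   # over-approximation: all infinite
dist["s"] = 0

# Iterate the relaxation operator to its least fixed point.
changed = True
while changed:
    changed = False
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
            changed = True
print(dist)   # {'s': 0, 'a': 2, 'b': 5, 'c': 6}
```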