WorldWideScience

Sample records for matrix pencil method

  1. A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Tariq Jamil Saifullah Khanzada

    2011-10-01

    Full Text Available This article presents the estimation results for algorithms implemented to estimate the delays and distances for an indoor positioning system. Data sets for the transmitted and received signals are captured in typical outdoor and indoor areas. Different state-of-the-art and super resolution algorithms are applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel implementation of the matrix pencil algorithm is devised. The algorithms perform variably across different scenarios of transmitter and receiver positions. Two scenarios are examined: in the single-antenna scenario, super resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) and the Matrix Pencil algorithm give optimal performance compared to conventional techniques. In the two-antenna scenario, Root-MUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all algorithms is worse than in the single-antenna scenario. In all cases, our devised Matrix Pencil algorithm achieved the best estimation results.
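
    Several of the records in this listing (for example records 1, 4, 5 and 7) rely on the same core estimator, so a generic illustration may help orient the reader. The following is a minimal sketch of the Hua-Sarkar matrix pencil method for extracting complex poles from uniformly sampled data; it is not the implementation used in any of the cited papers, and the model order, pencil parameter and toy signal below are illustrative choices only.

```python
import numpy as np

def matrix_pencil(y, M, L=None):
    """Minimal matrix pencil estimator for the poles z_k of
    y[n] ~ sum_k a_k * z_k**n (uniformly sampled, possibly noisy).

    y : 1-D complex array of samples
    M : assumed model order (number of exponentials)
    L : pencil parameter, typically between N//3 and N//2
    """
    N = len(y)
    L = L if L is not None else N // 3
    # Hankel data matrix, shape (N-L) x (L+1): Y[i, j] = y[i + j]
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    # Rank-M truncated SVD suppresses most of the noise
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    Yf = (U[:, :M] * s[:M]) @ Vh[:M, :]
    Y1, Y2 = Yf[:, :-1], Yf[:, 1:]
    # The non-trivial eigenvalues of the pencil (Y2, Y1) are the poles
    w = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return w[np.argsort(-np.abs(w))][:M]

# Tiny self-check: two noisy complex tones at normalized frequencies 0.11 and 0.23
rng = np.random.default_rng(0)
n = np.arange(200)
y = np.exp(2j * np.pi * 0.11 * n) + 0.8 * np.exp(2j * np.pi * 0.23 * n)
y = y + 0.01 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
poles = matrix_pencil(y, M=2)
print(np.sort(np.angle(poles)) / (2 * np.pi))   # ~ [0.11, 0.23]
```

    In positioning applications such as record 1, the recovered pole angles map to propagation delays once the sampling rate and signal model are fixed.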

  2. Polynomial two-parameter eigenvalue problems and matrix pencil methods for stability of delay-differential equations

    NARCIS (Netherlands)

    Jarlebring, E.; Hochstenbach, M.E.

    2009-01-01

    Several recent methods used to analyze asymptotic stability of delay-differential equations (DDEs) involve determining the eigenvalues of a matrix, a matrix pencil or a matrix polynomial constructed by Kronecker products. Despite some similarities between the different types of these so-called

  3. Drawing a different picture with pencil lead as matrix-assisted laser desorption/ionization matrix for fullerene derivatives.

    Science.gov (United States)

    Nye, Leanne C; Hungerbühler, Hartmut; Drewello, Thomas

    2018-02-01

    Inspired by reports on the use of pencil lead as a matrix-assisted laser desorption/ionization (MALDI) matrix, paving the way towards matrix-free MALDI, the present investigation evaluates its use with organic fullerene derivatives. Currently, this class of compounds is best analysed using the electron-transfer matrix trans-2-[3-(4-tert-butylphenyl)-2-methyl-2-propenylidene]malononitrile (DCTB), which was employed as the standard here. The suitability of pencil lead was additionally compared to direct (i.e. no matrix) laser desorption/ionization mass spectrometry. DCTB was identified as by far the gentler method, producing spectra with abundant molecular ion signals and much reduced fragmentation. Analytically, pencil lead was found to be ineffective as a matrix; however, it appears to be an extremely easy and inexpensive means of producing sodium and potassium adducts.

  4. Matrix pencil method-based reference current generation for shunt active power filters

    DEFF Research Database (Denmark)

    Terriche, Yacine; Golestan, Saeed; Guerrero, Josep M.

    2018-01-01

    Most existing methods for reference current calculation (RCC) in shunt active power filters (SAPFs) are using the discrete Fourier transform (DFT) in the frequency domain or the instantaneous p–q theory and the synchronous reference frame in the time domain. The DFT, however, suffers from the picket-fence effect and spectral leakage; on the other hand, the DFT takes at least one cycle of the nominal frequency. The time-domain methods show a weakness under voltage distortion, which requires prior filtering techniques. The aim of this study is to present a fast yet effective method for generating the RCC for SAPFs. The proposed method, which is based on the matrix pencil method, has a fast dynamic response and works well under distorted and unbalanced voltage. Moreover, the proposed method can estimate the voltage phase accurately; this property enables the algorithm to compensate for both power factor and current unbalance. The effectiveness of the proposed method is verified using simulation...

  5. Detection and identification of concealed weapons using matrix pencil

    Science.gov (United States)

    Adve, Raviraj S.; Thayaparan, Thayananthan

    2011-06-01

    The detection and identification of concealed weapons is an extremely hard problem due to the weak signature of the target buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtain the resonant frequencies in a measurement. The technique, based on the Matrix Pencil method, a scheme for model-based parameter estimation, also provides amplitude information, hence giving a level of confidence in the results. Of specific interest is the fact that Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.

  6. Correction of failure in antenna array using matrix pencil technique

    International Nuclear Information System (INIS)

    Khan, SU; Rahim, MKA

    2017-01-01

    In this paper a non-iterative technique is developed for the correction of faulty antenna arrays based on the matrix pencil technique (MPT). The failure of a sensor in an antenna array can damage the radiation power pattern in terms of sidelobe levels and nulls. In the developed technique, the radiation pattern of the array is sampled to form a discrete power-pattern information set. This information set is arranged in the form of a Hankel matrix (HM), on which a singular value decomposition (SVD) is executed. By removing the non-principal singular values, an optimum lower-rank estimate of the HM is obtained; this lower-rank matrix corresponds to the corrected pattern. The proposed technique is then employed to recover the weight excitations and position allocations from the estimated matrix. Numerical simulations confirm the efficiency of the proposed technique, which is compared with available techniques in terms of sidelobe levels and nulls. (paper)
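
    The Hankel/SVD step described in this record is essentially a Cadzow-style low-rank cleanup of the sampled pattern. The sketch below shows only that step, under stated assumptions (anti-diagonal averaging is used to map the low-rank matrix back to a sequence, which the abstract does not spell out); the recovery of element excitations and positions from the estimated matrix is not reproduced here.

```python
import numpy as np

def lowrank_hankel_cleanup(p, M, L=None):
    """Keep the M principal singular values of a Hankel matrix built from
    the sampled pattern p, then map the low-rank matrix back to a sequence
    by averaging its anti-diagonals (a common, assumed back-mapping step)."""
    N = len(p)
    L = L if L is not None else N // 2
    H = np.array([p[i:i + L] for i in range(N - L + 1)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Hf = (U[:, :M] * s[:M]) @ Vh[:M, :]
    out = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(Hf.shape[0]):          # anti-diagonal averaging
        for j in range(Hf.shape[1]):
            out[i + j] += Hf[i, j]
            cnt[i + j] += 1
    return out / cnt
```

    Estimating the corrected excitations and element positions from the cleaned pattern, as the paper does, would be a further step on top of this low-rank estimate.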

  7. Application of Matrix Pencil Algorithm to Mobile Robot Localization Using Hybrid DOA/TOA Estimation

    Directory of Open Access Journals (Sweden)

    Lan Anh Trinh

    2012-12-01

    Full Text Available Localization plays an important role in robotics for the tasks of monitoring, tracking and controlling a robot. Much effort has been made to address robot localization problems in recent years. However, despite many proposed solutions and thorough consideration, the robot localization problem remains a challenge in terms of developing a low-cost, fast-processing method for multiple source signals. In this paper, we propose a solution for robot localization that addresses these concerns. In order to locate a robot, both its coordinates and its orientation are necessary. We develop a localization method using the Matrix Pencil (MP) algorithm for hybrid estimation of direction of arrival (DOA) and time of arrival (TOA). The TOA of the signal is estimated to compute the distance between the mobile robot and a base station (BS). Based on the distance and the estimated DOA, we can estimate the mobile robot's position. The characteristics of the algorithm are examined through simulated experiments, and the results demonstrate the advantages of our method over previous works in dealing with the above challenges. The method is built on a low-cost infrastructure of radio-frequency devices, and the DOA/TOA estimation is performed with just a singular value decomposition for fast processing. Finally, the MP algorithm combined with Kalman-filter tracking allows our proposed method to locate the positions of multiple source signals.

  8. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. The mean energy and the energy and spatial spreads are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions for a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose; mean dose differences from the measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
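
    The superposition-plus-fit idea in this record can be illustrated with a short, self-contained sketch. Everything below is synthetic: the "ideal" depth doses are crude Bragg-like bumps placed with the Bragg-Kleeman range rule rather than Monte Carlo data, and the optimizer is a generic Nelder-Mead search rather than the conjugate-gradient scheme of the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for the Monte Carlo data (not the paper's beam model):
# each row mimics the central-axis depth dose of an ideal mono-energetic
# pencil beam as a single Bragg-like bump whose range follows R ~ 0.022*E^1.77 mm.
E = np.linspace(70.0, 100.0, 61)            # MeV
z = np.linspace(0.0, 90.0, 451)             # depth, mm
R = 0.022 * E ** 1.77                       # mm
dose_ideal = np.exp(-0.5 * ((z[None, :] - R[:, None]) / 3.0) ** 2)

def model_dose(mu, sigma):
    """Gaussian-weighted superposition of the ideal pencil-beam doses."""
    sigma = abs(sigma)
    w = np.exp(-0.5 * ((E - mu) / sigma) ** 2)
    return (w / (w.sum() + 1e-300)) @ dose_ideal

# Synthetic "measurement" with a known ground truth: mu = 86 MeV, sigma = 1.2 MeV
d_meas = model_dose(86.0, 1.2)

def objective(p):
    return np.sum((model_dose(*p) - d_meas) ** 2)

fit = minimize(objective, x0=[83.0, 2.0], method="Nelder-Mead")
print(fit.x)   # should come back close to [86.0, 1.2]
```

    The real commissioning problem adds lateral profiles at several depths and fits a spatial spread as well, but the structure of the objective is the same.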

  9. Pencil-on-Paper Capacitors for Hand-Drawn RC Circuits and Capacitive Sensing

    Directory of Open Access Journals (Sweden)

    Jonathan E. Thompson

    2017-01-01

    Full Text Available Electronic capacitors were constructed via hand-printing on paper using pencil graphite. Graphite traces were used to draw conductive connections and capacitor plates on opposing sides of a sheet of standard notebook paper. The paper served as the dielectric separating the plates. Capacitance of the devices was generally < 1000 pF and scaled with surface area of the plate electrodes. By combining a pencil-drawn capacitor with an additional resistive pencil trace, an RC low-pass filter was demonstrated. Further utility of the pencil-on-paper devices was demonstrated through description of a capacitive force transducer and reversible chemical sensing. The latter was achieved for water vapor when the hygroscopic cellulose matrix of the paper capacitor’s dielectric adsorbed water. The construction and demonstration of pencil-on-paper capacitive elements broadens the scope of paper-based electronic circuits while allowing new opportunities in the rapidly expanding field of paper-based sensors.
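
    As a quick illustration of the hand-drawn RC low-pass filter mentioned above, the cut-off frequency follows the usual first-order relation. The component values below are hypothetical, chosen only to fall in the ranges one might expect for a pencil-trace resistor and for the sub-1000 pF pencil-drawn capacitors quoted in the abstract.

```python
import math

# First-order RC low-pass cut-off: f_c = 1 / (2 * pi * R * C).
# Hypothetical values: a ~1 Mohm pencil-trace resistor and a 500 pF
# pencil-drawn paper capacitor.
R = 1.0e6       # ohm
C = 500e-12     # farad
f_c = 1.0 / (2.0 * math.pi * R * C)
print(round(f_c, 1))   # ~318.3 Hz
```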

  10. Examination of the equivalence of self-report survey-based paper-and-pencil and internet data collection methods.

    Science.gov (United States)

    Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J

    2013-03-01

    Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. PsycINFO Database Record (c) 2013 APA, all

  11. The optimization of pencil beam widths for use in an electron pencil beam algorithm

    International Nuclear Information System (INIS)

    McParland, Brian J.; Cunningham, John R.; Woo, Milton K.

    1988-01-01

    Pencil beam algorithms for the calculation of electron beam dose distributions have come into widespread use. These algorithms, however, have generally exhibited difficulties in reproducing dose distributions for small field dimensions or, more specifically, for those conditions in which lateral scatter equilibrium does not exist. The work described here has determined that this difficulty can arise from the manner in which the width of the pencil beam is calculated. A unique approach for determining the pencil beam widths required to accurately reproduce small field dose distributions in a homogeneous phantom is described and compared with measurements and the results of other calculations. This method has also been extended to calculate electron beam dose distributions in heterogeneous media and the results of this work are presented. Suggestions for further improvements are discussed.

  12. In-cell refabrication of experimental pencils from pencils pre-irradiated in a power reactor

    International Nuclear Information System (INIS)

    Vignesoult, N.; Atabek, R.; Ducas, S.

    1980-05-01

    For the fuel-cladding study, small irradiated pencils were fabricated in a hot cell from long elements taken from power reactors. This reconstitution in a hot cell makes it possible to: avoid long and costly fabrication of pencils and pre-irradiation in experimental reactors; perform re-irradiations on very long fuel elements from power reactors; and fabricate several small pencils with homogeneous characteristics from one pre-irradiated pencil. This paper describes (a) the various in-cell fabrication stages of the small pre-irradiated pencils, stressing the precautions taken to avoid any contamination of, or modification to, the characteristics of the pencil, in order to carry out a perfectly representative re-irradiation, (b) the equipment used and the quality control performed, and (c) the results achieved and the qualification programme of this operation [fr

  13. Electronic versus paper-pencil methods for assessing chemotherapy-induced peripheral neuropathy.

    Science.gov (United States)

    Knoerl, Robert; Gray, Evan; Stricker, Carrie; Mitchell, Sandra A; Kippe, Kelsey; Smith, Gloria; Dudley, William N; Lavoie Smith, Ellen M

    2017-11-01

    The aim of this study is to examine the psychometric properties of three electronically administered patient-reported outcome (PRO) measures of chemotherapy-induced peripheral neuropathy (CIPN) and to compare them with the validated paper/pencil European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Chemotherapy-Induced Peripheral Neuropathy Scale (QLQ-CIPN20): (1) the two neuropathy items from the National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE), (2) the QLQ-CIPN20, and (3) the 0-10 Neuropathy Screening Question (NSQ). We employed a descriptive, cross-sectional design and recruited 25 women with breast cancer who were receiving neurotoxic chemotherapy at an academic hospital. Participants completed the paper/pencil QLQ-CIPN20 and electronic versions of the QLQ-CIPN20, PRO-CTCAE, and NSQ. Internal consistency reliability, intraclass correlation, and concurrent and discriminant validity analyses were conducted. The alpha coefficients for the electronic QLQ-CIPN20 sensory and motor subscales were 0.76 and 0.75. Comparison of the electronic and paper/pencil QLQ-CIPN20 subscales supported mode equivalence (intraclass correlation range >0.91). Participants who reported the presence of numbness/tingling via the single-item NSQ reported higher mean QLQ-CIPN20 sensory subscale scores. The PRO-CTCAE neuropathy severity and interference items correlated well with the QLQ-CIPN20 electronic and paper/pencil sensory (r = 0.76; r = 0.70) and motor (r = 0.55; r = 0.62) subscales, and with the NSQ (r = 0.72; r = 0.44). These data support the validity of the electronically administered PRO-CTCAE neuropathy items, NSQ, and QLQ-CIPN20 for neuropathy screening in clinical practice. The electronic and paper/pencil versions of the QLQ-CIPN20 can be used interchangeably based on evidence of mode equivalence.

  14. Pencil beam proton radiography using a multilayer ionization chamber

    NARCIS (Netherlands)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-01-01

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS), was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a

  15. Technical Note: A direct ray-tracing method to compute integral depth dose in pencil beam proton radiography with a multilayer ionization chamber

    NARCIS (Netherlands)

    Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois

    2016-01-01

    Purpose: To introduce a fast ray-tracing algorithm in pencil beam proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Methods: Pencil beam PR was obtained by delivering spots uniformly positioned in a square (45×45 mm² field of view) of 9×9 spots capable

  16. Cubic Pencils and Painlevé Hamiltonians

    OpenAIRE

    Kajiwara, Kenji; Masuda, Tetsu; Noumi, Masatoshi; Ohta, Yasuhiro; Yamada, Yasuhiko

    2004-01-01

    We present a simple heuristic method to derive the Painlevé differential equations from the corresponding geometry of rational surfaces. We also give a direct relationship between the cubic pencils and Seiberg-Witten curves.

  17. Chitosan/graphene oxide biocomposite film from pencil rod

    Science.gov (United States)

    Gea, S.; Sari, J. N.; Bulan, R.; Piliang, A.; Amaturrahim, S. A.; Hutapea, Y. A.

    2018-03-01

    Graphene oxide (GO) has been successfully synthesized by the Hummers method from the graphite powder of a pencil rod. The excellent solubility of GO in water makes it feasible as a new filler for reinforcing hydrophilic biopolymers. In this research, a biocomposite film was fabricated from chitosan (CS) and graphene oxide. The graphene oxide was characterized using Fourier transform infrared spectroscopy (FT-IR) and X-ray diffraction (XRD). The XRD results showed a graphene structure whose reflection appeared at 2θ = 9.0715°, corresponding to an interlayer spacing of about 9.74063 Å. Films with several CS/GO ratios were prepared by the casting method and characterized by mechanical and morphological analysis. Tensile tests showed that the CS/GO (85:15)% film has the optimum Young's modulus of 2.9 GPa compared with the other CS/GO film compositions. Morphological analysis of the CS/GO (85:15)% film by scanning electron microscopy (SEM) showed a fine dispersion of GO in the CS matrix, with the two components mixed homogeneously.
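
    The quoted interlayer spacing can be checked against the diffraction angle with Bragg's law, d = λ / (2 sin θ). The wavelength below assumes Cu Kα radiation, which the abstract does not state, so it is an assumption made only for this consistency check.

```python
import math

# Bragg's law check of the reported XRD result, assuming Cu K-alpha radiation
# (lambda = 1.5406 angstrom); the anode material is not stated in the abstract.
wavelength = 1.5406                 # angstrom
two_theta = 9.0715                  # degrees
d = wavelength / (2.0 * math.sin(math.radians(two_theta / 2.0)))
print(round(d, 2))                  # ~9.74 angstrom, matching the quoted spacing
```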

  18. Comparison between calibration methods for pencil-type ionization chambers in terms of the kerma-length product P_KL

    International Nuclear Information System (INIS)

    Macedo, E.M.; Pereira, L.C.S.; Ferreira, M.J.; Navarro, V.C.C.; Garcia, I.F.M.; Pires, E.J.; Navarro, M.V.T.

    2016-01-01

    Calibration of radiation meters is indispensable for Quality Assurance Programs in radiodiagnostic procedures, mainly computed tomography. Thus, this study aims to evaluate two calibration methods for pencil ionization chambers in terms of the kerma-length product (P_KL): a direct substitution method and an indirect one, through separate kerma and length measurements. The results showed good equivalence, with a minimum concordance of 98.5% between calibration factors. Regarding uncertainties, both methods showed similar results (substitution 2.2% and indirect 2.3%), indicating that the latter is preferable, owing to the reduced cost of implementing that calibration procedure. (author)

  19. International Conference on Matrix Analysis and its Applications 2015

    CERN Document Server

    2017-01-01

    This volume presents recent advances in the field of matrix analysis based on contributions at the MAT-TRIAD 2015 conference. Topics covered include interval linear algebra and computational complexity, Birkhoff polynomial basis, tensors, graphs, linear pencils, K-theory and statistic inference, showing the ubiquity of matrices in different mathematical areas. With a particular focus on matrix and operator theory, statistical models and computation, the International Conference on Matrix Analysis and its Applications 2015, held in Coimbra, Portugal, was the sixth in a series of conferences. Applied and Computational Matrix Analysis will appeal to graduate students and researchers in theoretical and applied mathematics, physics and engineering who are seeking an overview of recent problems and methods in matrix analysis.

  20. Pencil It in: Exploring the Feasibility of Hand-Drawn Pencil Electrochemical Sensors and Their Direct Comparison to Screen-Printed Electrodes

    Directory of Open Access Journals (Sweden)

    Elena Bernalte

    2016-08-01

    Full Text Available We explore the fabrication, physicochemical characterisation (SEM, Raman, EDX and XPS) and electrochemical application of hand-drawn pencil electrodes (PDEs) upon an ultra-flexible polyester substrate, investigating the number of draws used for their fabrication, the pencil grade utilised (HB to 9B) and the electrochemical properties of an array of batches (i.e. pencil boxes). Electrochemical characterisation of the PDEs, using different batches of HB-grade pencils, is undertaken using several inner- and outer-sphere redox probes and is critically compared to screen-printed electrodes (SPEs). Proof-of-concept is demonstrated for the electrochemical sensing of dopamine and acetaminophen using PDEs, which are found to exhibit competitive limits of detection (3σ) upon comparison to SPEs. Nonetheless, it is important to note that a clear lack of reproducibility was demonstrated when utilising PDEs fabricated with HB pencils from different batches. We also explore the suitability and feasibility of a pencil-drawn reference electrode compared to screen-printed alternatives, to see if one can draw the entire sensing platform. This article reports a critical assessment of these PDEs against their screen-printed competitors, questioning the overall feasibility of PDEs' implementation as a sensing platform.

  1. Pencil drawn strain gauges and chemiresistors on paper.

    Science.gov (United States)

    Lin, Cheng-Wei; Zhao, Zhibo; Kim, Jaemyung; Huang, Jiaxing

    2014-01-22

    Pencil traces drawn on print papers are shown to function as strain gauges and chemiresistors. Regular graphite/clay pencils can leave traces composed of percolated networks of fine graphite powders, which exhibit reversible resistance changes upon compressive or tensile deflections. Flexible toy pencils can leave traces that are essentially thin films of graphite/polymer composites, which show reversible changes in resistance upon exposure to volatile organic compounds due to absorption/desorption induced swelling/recovery of the polymer binders. Pencil-on-paper devices are low-cost, extremely simple and rapid to fabricate. They are light, flexible, portable, disposable, and do not generate potentially negative environmental impact during processing and device fabrication. One can envision many other types of pencil drawn paper electronic devices that can take on a great variety of form factors. Hand drawn devices could be useful in resource-limited or emergency situations. They could also lead to new applications integrating art and electronics.

  2. 76 FR 11267 - Cased Pencils From China

    Science.gov (United States)

    2011-03-01

    ... China AGENCY: United States International Trade Commission. ACTION: Scheduling of an expedited five-year review concerning the antidumping duty order on cased pencils from China. SUMMARY: The Commission hereby... cased pencils from China would be likely to lead to continuation or recurrence of material injury within...

  3. A pencil beam algorithm for helium ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar [Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); PEG MedAustron, 2700 Wiener Neustadt (Austria); Department of Nuclear Medicine, Medical University of Vienna, 1090 Vienna (Austria); Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria)

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence-weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering was accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented via a LUT approach. Validation simulations were performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A and a total number of 10^7 ions per beam. For comparison, special evaluation software was developed to calculate the gamma indices for the dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a γ-index criterion of 2%/2 mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the γ-index criterion was exceeded in some areas, giving a maximum γ-index of 1.75, and 4.9% of the voxels showed γ-index values larger than one. The calculation precision of the

  4. Introduction to the spectral theory of polynomial operator pencils

    CERN Document Server

    Markus, A S

    1988-01-01

    This monograph contains an exposition of the foundations of the spectral theory of polynomial operator pencils acting in a Hilbert space. Spectral problems for polynomial pencils have attracted steady interest over the last 35 years, mainly because they arise naturally in such diverse areas of mathematical physics as differential equations and boundary value problems, controllable systems, the theory of oscillations and waves, elasticity theory, and hydromechanics. In this book, the author devotes most of his attention to the fundamental results of Keldysh on multiple completeness of the eigenvectors and associated vectors of a pencil, and on the asymptotic behavior of its eigenvalues and generalizations of these results. The author also presents various theorems on spectral factorization of pencils which grew out of known results of M. G. Krein and Heinz Langer. A large portion of the book involves the theory of selfadjoint pencils, an area having numerous applications. Intended for mathematicians, resea...

  5. GPU-based fast pencil beam algorithm for proton therapy

    International Nuclear Information System (INIS)

    Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya

    2011-01-01

    Performance of a treatment planning system is an essential factor in making sophisticated plans. The dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculations is the pencil beam algorithm which produces relatively accurate results, but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit)-based pencil beam algorithm. We have implemented this algorithm and calculated dose distributions in the case of a water phantom. The results were compared to those obtained by a traditional method with respect to the computational time and discrepancy between the two methods. The new algorithm shows 5-20 times faster performance using the NVIDIA GeForce GTX 480 card in comparison with the Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.

  6. Irradiation of four pencils fuel element clusters in the periphery of OSIRIS. Qualification of the calculation method by dosimetry

    International Nuclear Information System (INIS)

    Alberman, A.; Morin, C.

    1983-09-01

    The qualification programs for PWR fuels require many irradiations in research reactors. In the periphery of the OSIRIS reactor (70 MW), two devices (the IRENE and ISABELLE loops) recreating the environment of the fuel rods in power reactors have been put into service. In each device, a fuel element cluster including four pencils was irradiated. The problem posed for dosimetry was to calculate the enrichments of the pencils needed to obtain the required power level and to compensate for the neutron flux gradient (front/back) so as to obtain the same power on each of the four pencils. The required accuracy is about 5%. Fuel dosimetry performed on loop mockups in the ISIS reactor allowed the validity of the calculations to be tested and the probes to be calibrated against the nuclear power [fr

  7. Asymptomatic Intracorneal Graphite Deposits following Graphite Pencil Injury

    OpenAIRE

    Philip, Swetha Sara; John, Deepa; John, Sheeja Susan

    2012-01-01

    Reports of graphite pencil lead injuries to the eye are rare. Although graphite is considered to remain inert in the eye, it has been known to cause severe inflammation and damage to ocular structures. We report a case of a 12-year-old girl with intracorneal graphite foreign bodies following a graphite pencil injury.

  8. Optimization of source pencil deployment based on plant growth simulation algorithm

    International Nuclear Information System (INIS)

    Yang Lei; Liu Yibao; Liu Yujuan

    2009-01-01

    A plant growth simulation algorithm was proposed for optimizing source pencil deployment in a 60Co irradiator. A method to evaluate the calculation results was presented, with the objective function defined as the relative standard deviation of the exposure rate at the reference points; the transformation of the two kinds of control variables, i.e., the position coordinates x_j and y_j of the source pencils in the source plaque, into proper integer variables was also analyzed and solved. The results show that the plant growth simulation algorithm, which possesses both random and directional search mechanisms, has good global search ability and is convenient to use. The results are only slightly affected by the initial conditions and improve the uniformity of the irradiation field, providing a dependable basis for optimizing the source-pencil arrangement in an irradiation facility. (authors)
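
    For concreteness, the figure of merit named in this record (the relative standard deviation of the exposure rate over the reference points) is simple to write down. The sketch below shows only this objective, not the plant-growth search itself, and the example exposure rates are made up.

```python
import numpy as np

def uniformity_objective(exposure_rates):
    """Relative standard deviation (coefficient of variation) of the exposure
    rate at the reference points; a smaller value means a more uniform
    irradiation field for a given source-pencil layout."""
    r = np.asarray(exposure_rates, dtype=float)
    return r.std() / r.mean()

# Hypothetical exposure rates (arbitrary units) at eight reference points
print(uniformity_objective([102.0, 99.5, 101.2, 98.7, 100.4, 99.9, 101.8, 98.9]))
```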

  9. Characterization of Reagent Pencils for Deposition of Reagents onto Paper-Based Microfluidic Devices

    Directory of Open Access Journals (Sweden)

    Cheyenne H. Liu

    2017-08-01

    Full Text Available Reagent pencils allow for solvent-free deposition of reagents onto paper-based microfluidic devices. The pencils are portable, easy to use, extend the shelf-life of reagents, and offer a platform for customizing diagnostic devices at the point of care. In this work, reagent pencils were characterized by measuring the wear resistance of pencil cores made from polyethylene glycols (PEGs with different molecular weights and incorporating various concentrations of three different reagents using a standard pin abrasion test, as well as by measuring the efficiency of reagent delivery from the pencils to the test zones of paper-based microfluidic devices using absorption spectroscopy and digital image colorimetry. The molecular weight of the PEG, concentration of the reagent, and the molecular weight of the reagent were all found to have an inverse correlation with the wear of the pencil cores, but the amount of reagent delivered to the test zone of a device correlated most strongly with the concentration of the reagent in the pencil core. Up to 49% of the total reagent deposited on a device with a pencil was released into the test zone, compared to 58% for reagents deposited from a solution. The results suggest that reagent pencils can be prepared for a variety of reagents using PEGs with molecular weights in the range of 2000 to 6000 g/mol.

  10. Evaluation of a special pencil ionization chamber by the Monte Carlo method

    International Nuclear Information System (INIS)

    Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P.

    2015-01-01

    A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)

  11. Pencil and paper

    DEFF Research Database (Denmark)

    Wong, Bang; Kjærgaard, Rikke Schmidt

    2012-01-01

    Creating pictures is integral to scientific thinking. In the visualization process, putting pencil to paper is an essential act of inward reflection and outward expression. It is a constructive activity that makes our thinking specific and explicit. Compared to other constructive approaches such as writing or verbal explanations, visual representation places distinct demands on our reasoning skills by forcing us to contextualize our understanding spatially.

  12. Spacing grids for a fuel pencil bundle in a nuclear reactor assembly

    International Nuclear Information System (INIS)

    Feutrel, Claude.

    1977-01-01

    This invention relates to the lattices forming the spacers of a bundle of clad fuel pencils in a nuclear reactor assembly, particularly in a water-cooled or fast reactor. The purpose of such lattices is to maintain these pencils parallel to each other and in a given lattice arrangement, while also providing the pencils with flexible support in successive zones distributed along their length, in order to prevent them from vibrating under the effect of the liquid coolant flowing in contact with them [fr

  13. Performance of a pencil ionization chamber in various radiation beams

    International Nuclear Information System (INIS)

    Maia, A.F.; Caldas, L.V.E.

    2003-01-01

    Pencil ionization chambers were recommended for use exclusively in computed tomography (CT) dosimetry and, from the start, they were developed only with this application in view. In this work, we studied the behavior of a pencil ionization chamber in various radiation beams with the objective of extending its application. Stability tests were performed, and calibration coefficients were obtained for several standard radiation qualities at therapy and diagnostic levels. The results show that the pencil ionization chamber can be used in several radiation beams other than those used in CT

  14. Anorthite glass: a potential host matrix for 90Sr pencil

    International Nuclear Information System (INIS)

    Sengupta, Pranesh; Dey, G.K.; Fanara, Sara; Chakraborty, Sumit; Mishra, R.K.; Kaushik, C.P.

    2011-01-01

    With rising global concerns over health hazards, environmental pollution and the possible malicious use of radioactive materials, there is increasing consciousness among the public and governmental agencies of the need for better control, accounting and security. Investigations carried out by the International Atomic Energy Agency and other monitoring bodies reveal that, among the various radioactive materials, the most easily dispersible ones are high-activity sealed sources (generally called radioactive pencils) used for various peaceful applications. Ideally, these sealed sources should be safely secured within specialized facilities, but in practice this is not always done. Hence, an extra precautionary measure is needed to ensure that the matrices currently used for hosting the radionuclides within sealed sources are durable enough under harsh service conditions and in situations arising from possible mishaps (accidents, misplaced or stolen sources, etc.). Among the variety of useful radionuclides, 90Sr is one which is regularly used to (i) combat bone cancer, (ii) destroy unwanted tissue on the surface of the eye/skin, and (iii) light up or provide energy to remotely accessible areas. However, due to its (i) toxicity, (ii) mobility, (iii) easy incorporation within the human body, (iv) considerable half-life (∼29 years), (v) emission of beta (β−) particles along with high-energy gamma (γ) rays, and (vi) retention of significant toxicity within sources even after their service life, the release of 90Sr poses a serious threat to the biosphere. Hence, there is a need to ensure that existing 90Sr host matrices are capable of withstanding all sorts of adversity that may arise during service and under storage/disposal

  15. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error

  16. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.

  17. Response matrix method for large LMFBR analysis

    International Nuclear Information System (INIS)

    King, M.J.

    1977-06-01

    The feasibility of using response matrix techniques for computational models of large LMFBRs is examined. Since finite-difference methods based on diffusion theory have generally found a place in fast-reactor codes, a brief review of their general matrix foundation is given first in order to contrast it to the general strategy of response matrix methods. Then, in order to present the general method of response matrix technique, two illustrative examples are given. Matrix algorithms arising in the application to large LMFBRs are discussed, and the potential of the response matrix method is explored for a variety of computational problems. Principal properties of the matrices involved are derived with a view to application of numerical methods of solution. The Jacobi iterative method as applied to the current-balance eigenvalue problem is discussed

  18. Comparison of cutting and pencil-point spinal needle in spinal anesthesia regarding postdural puncture headache

    Science.gov (United States)

    Xu, Hong; Liu, Yang; Song, WenYe; Kan, ShunLi; Liu, FeiFei; Zhang, Di; Ning, GuangZhi; Feng, ShiQing

    2017-01-01

    Abstract Background: Postdural puncture headache (PDPH), mainly resulting from the loss of cerebrospinal fluid (CSF), is a well-known iatrogenic complication of spinal anesthesia and diagnostic lumbar puncture. Spinal needles have been modified to minimize complications; modifiable risk factors for PDPH mainly include needle size and needle shape. However, whether the incidence of PDPH differs significantly between cutting-point and pencil-point needles has been controversial, so we performed a meta-analysis to assess the incidence of PDPH with cutting and pencil-point spinal needles. Methods: We included as eligible studies all randomized trials assessing clinical outcomes in patients given elective spinal anesthesia or diagnostic lumbar puncture with either a cutting or a pencil-point spinal needle. All selected studies and their risk of bias were assessed by 2 investigators. Clinical outcomes including success rates, frequency of PDPH, reported severe PDPH, and the use of epidural blood patch (EBP) were recorded as primary results. Results were evaluated using the risk ratio (RR) with 95% confidence interval (CI) for dichotomous variables. RevMan software (version 5.3) was used to analyze all appropriate data. Results: Twenty-five randomized controlled trials (RCTs) were included in our study. The analysis revealed that pencil-point spinal needles resulted in a lower rate of PDPH (RR 2.50; 95% CI [1.96, 3.19]; P < 0.00001) and of severe PDPH (RR 3.27; 95% CI [2.15, 4.96]; P < 0.00001). Furthermore, EBP was used less often in the pencil-point spinal needle group (RR 3.69; 95% CI [1.96, 6.95]; P < 0.0001). Conclusions: Current evidence suggests that pencil-point spinal needles are significantly superior to cutting spinal needles regarding the frequency of PDPH, PDPH severity, and the use of EBP. In view of this, we recommend the use of pencil-point spinal needles in spinal anesthesia and lumbar puncture. PMID:28383416
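
    The effect measure used throughout this record (a risk ratio with a 95% CI for dichotomous outcomes) can be reproduced with the standard log-normal approximation. The sketch below is illustrative only: the event counts are hypothetical, and it shows a single two-arm comparison rather than the pooled RevMan meta-analysis reported above.

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of arm A vs arm B with an approximate 95% CI
    (log-normal approximation for a single two-arm comparison)."""
    p_a, p_b = events_a / total_a, events_b / total_b
    rr = p_a / p_b
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# Hypothetical counts: 50/500 PDPH with cutting needles vs 20/500 with pencil-point
print(risk_ratio_ci(50, 500, 20, 500))   # RR = 2.5 with its 95% CI
```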

  19. Method of forming a ceramic matrix composite and a ceramic matrix component

    Science.gov (United States)

    de Diego, Peter; Zhang, James

    2017-05-30

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity, filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  20. New trace formulae for a quadratic pencil of the Schroedinger operator

    International Nuclear Information System (INIS)

    Yang Chuanfu

    2010-01-01

    This work deals with the eigenvalue problem for a quadratic pencil of the Schroedinger operator on a finite closed interval with two-point boundary conditions. We obtain new regularized trace formulas for this class of differential pencils.

  1. Establishment of a new calibration method of pencil ionization chamber for dosimetry in computed tomography

    International Nuclear Information System (INIS)

    Dias, Daniel Menezes

    2010-01-01

    Pencil ionization chambers are used for beam dosimetry in computed tomography (CT) equipment. In this study, a new calibration methodology was established in order to bring the Calibration Laboratory of the Instituto de Pesquisas Energeticas e Nucleares (LCI) in line with international metrological standards concerning specific procedures for the calibration of these chambers used in CT. Firstly, the setup for the new RQT radiation qualities was mounted, in agreement with standard IEC 61267 of the International Electrotechnical Commission (IEC). After the establishment of these radiation qualities, a specific calibration methodology for pencil ionization chambers was set up according to Technical Report Series No. 457 of the International Atomic Energy Agency (IAEA), which describes the particularities of the procedure to be followed by the Secondary Standard Dosimetry Laboratories (SSDLs) concerning collimation and positioning relative to the radiation beam. Initially, PPV (kV) measurements and the determination of the copper additional filtrations were carried out by measuring the half-value layers (HVL) recommended by the IEC 61267 standard; the RQT 8, RQT 9 and RQT 10 radiation quality references were then established. For the additional filters, aluminum and copper of high purity (around 99.9%) were used. The thickness of the copper filters for the RQT qualities, equivalent to the set 'RQR (Al) + additional filtration (Cu)', was found directly by an alternative methodology for determining additional filtrations, which is a good option when the RQR qualities cannot be set up. With the establishment of this new methodology for pencil ionization chamber calibration, the LCI is ready to calibrate these instruments according to the most recent international standards, thus improving calibration traceability as well as the metrological services offered by IPEN to all of Brazil. (author)

  2. Field electron emission from pencil-drawn cold cathodes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jiangtao; Yang, Bingjun; Liu, Xiahui; Yang, Juan; Yan, Xingbin, E-mail: xbyan@licp.cas.cn [Laboratory of Clean Energy Chemistry and Materials, State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Sciences, Lanzhou 730000 (China)

    2016-05-09

    Field electron emitters with flat, curved, and linear profiles are fabricated on flexible copy paper by a direct pencil-drawing method. This one-step method is free of many restrictive requirements such as high temperature, high vacuum, organic solvents, and multistep processing. The cold cathodes display good field emission performance and achieve a high emission current density of 78 mA/cm² at an electric field of 3.73 V/μm. The approach proposed here offers a rapid, low-cost, and eco-friendly route to fabricating flexible field emitter devices, among other applications.

  3. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In this study, we extend the application of the interface matrix (IM) method for reflector modeling to the Analytic Function Expansion Nodal (AFEN) method. This includes modifications of the surface-averaged net current continuity and net leakage balance conditions for the IM method in accordance with the AFEN formulation. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical results of the AFEN-IM method show a maximum error of 1.24% and a root-mean-square error of 0.42% in the assembly power distribution, and a 0.006% Δk error in the neutron multiplication factor. These results prove that the interface matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  4. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    In this study, we extend the application of the interface matrix (IM) method for reflector modeling to the Analytic Function Expansion Nodal (AFEN) method. This includes modifications of the surface-averaged net current continuity and net leakage balance conditions for the IM method in accordance with the AFEN formulation. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical results of the AFEN-IM method show a maximum error of 1.24% and a root-mean-square error of 0.42% in the assembly power distribution, and a 0.006% Δk error in the neutron multiplication factor. These results prove that the interface matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  5. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    Full Text Available In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising from the restoration of blurred and noisy images. Applying the well-known Tikhonov regularization procedure leads to a Sylvester matrix equation that depends on the Tikhonov regularization parameter. We apply matrix versions of the well-known Krylov subspace methods, namely the least squares (LSQR) and conjugate gradient (CG) methods, to obtain approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
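
    One standard way such a Sylvester-type equation arises is worth spelling out. The derivation below assumes a separable blur model G = AFBᵀ + noise, which the abstract itself does not state explicitly; minimizing the Tikhonov functional and setting the gradient to zero gives

    \min_F \; \|A F B^{\top} - G\|_F^2 + \mu^2 \|F\|_F^2
    \quad\Longrightarrow\quad
    (A^{\top} A)\, F\, (B^{\top} B) + \mu^2 F = A^{\top} G B,

    a generalized Sylvester (Stein-type) matrix equation in the unknown image F, whose large size is what makes matrix Krylov subspace solvers attractive.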

  6. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  7. The J-Matrix Method: Developments and Applications

    CERN Document Server

    Alhaidari, Abdulaziz D; Heller, Eric J; Abdelmonem, Mohamed S

    2008-01-01

    This volume aims to provide the fundamental knowledge to appreciate the advantages of the J-matrix method and to encourage its use and further development. The J-matrix method is an algebraic method of quantum scattering with substantial success in atomic and nuclear physics. The accuracy and convergence properties of the method compare favourably with other successful scattering calculation methods. Despite its thirty-year-long history, new applications are being found for the J-matrix method. This book gives a brief account of the recent developments and some selected applications of the method in atomic and nuclear physics. New findings are reported in which experimental results are compared to theoretical calculations. Modifications, improvements and extensions of the method are discussed using the language of the J-matrix. The volume starts with a Foreword by the two co-founders of the method, E.J. Heller and H.A. Yamani, and it contains contributions from 24 prominent international researchers.

  8. Linear operator pencils on Lie algebras and Laurent biorthogonal polynomials

    International Nuclear Information System (INIS)

    Gruenbaum, F A; Vinet, Luc; Zhedanov, Alexei

    2004-01-01

    We study operator pencils on generators of the Lie algebra sl(2) and the oscillator algebra. These pencils are linear in a spectral parameter λ. The corresponding generalized eigenvalue problem gives rise to some sets of orthogonal polynomials and Laurent biorthogonal polynomials (LBP) expressed in terms of the Gauss 2F1 and degenerate 1F1 hypergeometric functions. For special choices of the parameters of the pencils, we identify the resulting polynomials with the Hendriksen-van Rossum LBP, which are widely believed to be the biorthogonal analogues of the classical orthogonal polynomials. This places these examples under the umbrella of the generalized bispectral problem which is considered here. Other (non-bispectral) cases give rise to some 'nonclassical' orthogonal polynomials, including Tricomi-Carlitz and random-walk polynomials. An application to solutions of the relativistic Toda chain is considered.
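
    Numerically, a pencil that is linear in the spectral parameter λ leads to a generalized eigenvalue problem L1 v = λ L2 v. The following minimal sketch solves such a problem with scipy; the small matrices are made up for illustration and are not the sl(2) or oscillator-algebra operators considered in the paper.

```python
import numpy as np
from scipy.linalg import eig

# Generalized eigenvalue problem for a linear pencil: find lambda, v with L1 v = lambda L2 v,
# i.e. det(L1 - lambda * L2) = 0.  The matrices are illustrative only.
L1 = np.array([[2.0, 1.0, 0.0],
               [1.0, 3.0, 1.0],
               [0.0, 1.0, 4.0]])
L2 = np.diag([1.0, 2.0, 3.0])

eigvals, eigvecs = eig(L1, L2)
print(np.sort(eigvals.real))
```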

  9. Dosimetric assessment of the PRESAGE dosimeter for a proton pencil beam

    International Nuclear Information System (INIS)

    Wuu, C-S; Qian, X; Xu, Y; Adamovics, J; Cascio, E; Lu, H-M

    2013-01-01

    The objective of this study is to assess the feasibility of using PRESAGE dosimeters for proton pencil beam dosimetry. Two different formulations of phantom materials were tested for their suitability in characterizing a single proton pencil beam. The dosimetric response of PRESAGE was found to be linear up to 4 Gy. A first-generation optical CT scanner, OCTOPUS(TM), was used to read out the dose distributions for the proton pencil beams, since it provides the most accurate readout. Percentage depth dose curves and beam profiles for two proton energies, 110 MeV and 93 MeV, were used to evaluate the dosimetric performance of the two PRESAGE phantom formulations. The findings from this study show that the dosimetric properties of the phantom materials are consistent with the basic physics of proton beams.

  10. The finite element response matrix method

    International Nuclear Information System (INIS)

    Nakata, H.; Martin, W.R.

    1983-02-01

    A new technique is developed with an alternative formulation of the response matrix method implemented with the finite element scheme. Two types of response matrices are generated from the Galerkin solution to the weak form of the diffusion equation subject to an arbitrary current and source. The piecewise polynomials are defined on two levels, the first for the local (assembly) calculations and the second for the global (core) response matrix calculations. This finite element response matrix technique was tested on two 2-dimensional test problems, the 2D-IAEA benchmark problem and the Biblis benchmark problem, with satisfactory results. Although the current code is not extensively optimized, the computational time is of the same order as that of the well-established coarse-mesh codes. Furthermore, the application of the finite element technique in an alternative formulation of the response matrix method permits the method to easily incorporate additional capabilities such as the treatment of spatially dependent cross-sections, arbitrary geometrical configurations, and highly heterogeneous assemblies. (Author) [pt

  11. Disposable pencil graphite electrode modified with peptide nanotubes for Vitamin B12 analysis

    International Nuclear Information System (INIS)

    Pala, Betül Bozdoğan; Vural, Tayfun; Kuralay, Filiz; Çırak, Tamer; Bolat, Gülçin; Abacı, Serdar; Denkbaş, Emir Baki

    2014-01-01

    In this study, peptide nanostructures from diphenylalanine were synthesized in various solvents with various polarities and characterized with Scanning Electron Microscopy (SEM) and Powder X-ray Diffraction (PXRD) techniques. Formation of peptide nanofibrils, nanovesicles, nanoribbons, and nanotubes was observed in the different solvent media. In order to investigate the effects of peptide nanotubes (PNT) on the electrochemical behavior of disposable pencil graphite electrodes (PGE), electrode surfaces were modified with the fabricated peptide nanotubes. The electrochemical activity of the pencil graphite electrode was increased by the deposition of PNTs on the surface. The effects of the solvent type, the peptide nanotube concentration, and the passive adsorption time of peptide nanotubes on the pencil graphite electrode were studied. For further electrochemical studies, electrodes were modified for 30 min by immobilizing PNTs, which were prepared in water at 6 mg/mL concentration. Vitamin B12 analyses were performed by the Square Wave (SW) voltammetry method using the modified PGEs. The obtained data showed linearity over the range of 0.2 μM to 9.50 μM Vitamin B12 concentration with high sensitivity. The results showed that PNT-modified PGEs were highly simple, fast, cost effective, and feasible for the electro-analytical determination of Vitamin B12 in real samples.

  12. Pencil graphite leads as simple amperometric sensors for microchip electrophoresis.

    Science.gov (United States)

    Natiele Tiago da Silva, Eiva; Marques Petroni, Jacqueline; Gabriel Lucca, Bruno; Souza Ferreira, Valdir

    2017-11-01

    In this work we demonstrate, for the first time, the use of inexpensive commercial pencil graphite leads as simple amperometric sensors for microchip electrophoresis. A PDMS support containing one channel was fabricated through soft lithography, and sanded pencil graphite leads were inserted into this channel to be used as working electrodes. The electrochemical and morphological characterization of the sensor was carried out. The graphite electrode was coupled to PDMS microchips in an end-channel configuration, and electrophoretic experiments were performed using nitrite and ascorbate as probe analytes. The analytes were successfully separated and detected in well-defined peaks with satisfactory resolution using the proposed microfluidic platform. The repeatability of the pencil graphite electrode was satisfactory (RSD values of 1.6% for nitrite and 12.3% for ascorbate, regarding the peak currents) and its lifetime was estimated to be ca. 700 electrophoretic runs at a cost of ca. $0.05 per electrode. The limits of detection achieved with this system were 2.8 μM for nitrite and 5.7 μM for ascorbate. As a proof of principle, the pencil graphite electrode was employed for the analysis of real well water samples, and nitrite was successfully quantified at levels below its maximum contaminant level established in Brazil and the US. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A pencil beam dose calculation model for CyberKnife system

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bin; Li, Yongbao; Liu, Bo; Zhou, Fugen [Image Processing Center, Beihang University, Beijing 100191 (China); Xu, Shouping [Department of Radiation Oncology, PLA General Hospital, Beijing 100853 (China); Wu, Qiuwen, E-mail: Qiuwen.Wu@Duke.edu [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2016-10-15

    Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, capable of producing arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of the measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region, and these were further evaluated against the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR in the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source to axis distance (SAD). In noncircular field validation

  14. Paper-based potentiometric pH sensor using carbon electrode drawn by pencil

    Science.gov (United States)

    Kawahara, Ryotaro; Sahatiya, Parikshit; Badhulika, Sushmee; Uno, Shigeyasu

    2018-04-01

    A flexible and disposable paper-based pH sensor fabricated with a pencil-drawn working electrode and a Ag/AgCl paste reference electrode is demonstrated for the first time to show pH response by the potentiometric principle. The sensor substrate is made of chromatography paper with a wax-printed hydrophobic area, and various types of carbon pencils are tested as working electrodes. The pH sensitivities of the electrodes drawn by carbon pencils with different hardnesses range from 16.5 to 26.9 mV/pH. The proposed sensor is expected to be more robust against shape change in electrodes on a flexible substrate than other types of chemiresistive/amperometric pH sensors.

  15. Psychometric comparison of paper-and-pencil and online personality assessments in a selection setting

    Directory of Open Access Journals (Sweden)

    Tina Joubert

    2009-07-01

    Full Text Available The goal of the study was to determine whether the Occupational Personality Questionnaire (OPQ32i) yielded comparable results when two different modes of administration, namely paper-and-pencil and Internet-based administration, were used in real-life, high-stakes selection settings. Two studies were conducted in which scores obtained online in unproctored settings were compared with scores obtained in proctored paper-and-pencil settings. The psychometric properties of the paper-and-pencil and Internet-based applications were strikingly similar. Structural equation modelling with EQS indicated substantial support for the hypothesis that the covariance matrices of the paper-and-pencil and online applications in both studies were identical. It was concluded that relationships between the OPQ32i scales were not affected by mode of administration or supervision.

  16. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the accurate effective dose in patients. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated from a pencil beam kernel based treatment planning system by comparing it to Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool to calculate the dose.
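
    As a rough illustration of the kernel-based convolution idea referred to above, the toy sketch below computes the relative dose in a single plane as the convolution of the incident fluence with a lateral Gaussian kernel. The kernel width and the 4 × 4 cm open field are made-up values, not commissioned beam data, and this is not the Helax-TMS algorithm itself.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 2-D superposition at one depth: relative dose = fluence convolved with a lateral kernel.
grid = np.arange(-5.0, 5.0, 0.1)                                  # cm
X, Y = np.meshgrid(grid, grid)
fluence = ((np.abs(X) < 2.0) & (np.abs(Y) < 2.0)).astype(float)   # open 4 x 4 cm field

sigma = 0.4                                                        # cm, kernel spread (made up)
kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
kernel /= kernel.sum()

dose = fftconvolve(fluence, kernel, mode="same")
print(dose.max(), dose[50, 50])   # grid[50] = 0.0, i.e. the central-axis value
```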

  17. Thermoluminescence properties of irradiated commercial color pencils for accidental retrospective dosimetry

    International Nuclear Information System (INIS)

    Meriç, Niyazi; Şahiner, Eren; Bariş, Aytaç; Polymeris, George S.

    2015-01-01

    Color pencils are widely used, mostly in kindergartens and in schools, and can be found in all houses with families having young children. Their widespread use in modern times as well as their chemical composition, consisting mostly of Si and Al, constitute two strong motivations towards exploiting their use as accidental retrospective thermoluminescent dosimeters. The present manuscript reports on the study of colored pencils manufactured by a commercial brand in China which is very common throughout Turkey. The preliminary results discussed in the present work illustrate encouraging characteristics, such as the presence of a trapping level giving rise to natural TL in a temperature range that is sufficiently high. Specific thermoluminescence features of this peak, such as glow peak shape and analysis, anomalous fading, thermal quenching, reproducibility, linearity and recovery ability at low doses, were studied. The results suggest that the color pencils could be effectively used in the framework of retrospective thermoluminescent dosimetry, with extreme caution, based on multiple aliquot protocols. - Highlights: • Thermoluminescence of the inner part of commercial colored pencils was studied. • The presence of a trapping level giving rise to natural TL at 260 °C was found. • Deco analysis, anomalous fading, thermal quenching, reproducibility, linearity and recovery ability of this peak were studied

  18. Alignment modification for pencil eye shields

    International Nuclear Information System (INIS)

    Evans, M.D.; Pla, M.; Podgorsak, E.B.

    1989-01-01

    Accurate alignment of pencil beam eye shields to protect the lens of the eye may be made easier by means of a simple modification of existing apparatus. This involves drilling a small hole through the center of the shield to isolate the rayline directed to the lens and fabricating a suitable plug for this hole

  19. Two-dimensional pencil beam scaling: an improved proton dose algorithm for heterogeneous media

    International Nuclear Information System (INIS)

    Szymanowski, Hanitra; Oelfke, Uwe

    2002-01-01

    New dose delivery techniques with proton beams, such as beam spot scanning or raster scanning, require fast and accurate dose algorithms which can be applied for treatment plan optimization in clinically acceptable timescales. The clinically required accuracy is particularly difficult to achieve for the irradiation of complex, heterogeneous regions of the patient's anatomy. Currently applied fast pencil beam dose calculations based on the standard inhomogeneity correction of pathlength scaling often cannot provide the accuracy required for clinically acceptable dose distributions. This could be achieved with sophisticated Monte Carlo simulations which are still unacceptably time consuming for use as dose engines in optimization calculations. We therefore present a new algorithm for proton dose calculations which aims to resolve the inherent problem between calculation speed and required clinical accuracy. First, a detailed derivation of the new concept, which is based on an additional scaling of the lateral proton fluence is provided. Then, the newly devised two-dimensional (2D) scaling method is tested for various geometries of different phantom materials. These include standard biological tissues such as bone, muscle and fat as well as air. A detailed comparison of the new 2D pencil beam scaling with the current standard pencil beam approach and Monte Carlo simulations, performed with GEANT, is presented. It was found that the new concept proposed allows calculation of absorbed dose with an accuracy almost equal to that achievable with Monte Carlo simulations while requiring only modestly increased calculation times in comparison to the standard pencil beam approach. It is believed that this new proton dose algorithm has the potential to significantly improve the treatment planning outcome for many clinical cases encountered in highly conformal proton therapy. (author)

  20. Efficient sparse matrix-matrix multiplication for computing periodic responses by shooting method on Intel Xeon Phi

    Science.gov (United States)

    Stoykov, S.; Atanassov, E.; Margenov, S.

    2016-10-01

    Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin's plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed by Intel MKL, and it is shown that by considering the properties of the sparse matrix better algorithms can be developed.
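
    A baseline version of the kernel operation discussed above, multiplication of a sparse matrix by a dense matrix, can be written with scipy in a few lines; the random sparse matrix below merely stands in for the finite element plate matrix, and no Xeon Phi-specific vectorization is attempted.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Sparse-by-dense product C = S @ D, the expensive kernel in the shooting method above
# (S here is a generic random sparse matrix, not the finite element plate matrix).
rng = np.random.default_rng(0)
n, m = 2000, 64
S = sparse_random(n, n, density=1e-3, format="csr", random_state=0)
D = rng.standard_normal((n, m))

C = S @ D   # SpMM: every dense column is multiplied by the same sparse matrix
print(C.shape)
```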

  1. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  2. Technical Note: A direct ray-tracing method to compute integral depth dose in pencil beam proton radiography with a multilayer ionization chamber.

    Science.gov (United States)

    Farace, Paolo; Righetto, Roberto; Deffet, Sylvain; Meijers, Arturs; Vander Stappen, Francois

    2016-12-01

    To introduce a fast ray-tracing algorithm in pencil beam proton radiography (PR) with a multilayer ionization chamber (MLIC) for in vivo range error mapping. Pencil beam PR was obtained by delivering a square pattern (45 × 45 mm² field of view) of 9 × 9 spots capable of crossing the phantoms (210 MeV). The exit beam was collected by a MLIC to sample the integral depth dose (IDD_MLIC). PRs of an electron-density phantom and of a head phantom were acquired by moving the couch to obtain multiple 45 × 45 mm² frames. To map the corresponding range errors, the two-dimensional set of IDD_MLIC was compared with (i) the integral depth dose computed by the treatment planning system (TPS) by both the analytic (IDD_TPS) and Monte Carlo (IDD_MC) algorithms in a volume of water simulating the MLIC at the CT, and (ii) the integral depth dose directly computed by a simple ray-tracing algorithm (IDD_direct) through the same CT data. The exact spatial position of the spot pattern was numerically adjusted by testing different in-plane positions and selecting the one that minimized the range differences between IDD_direct and IDD_MLIC. Range error mapping was feasible with both the TPS and the ray-tracing methods, but very sensitive to even small misalignments. In homogeneous regions, the range errors computed by the direct ray-tracing algorithm matched the results obtained by both the analytic and the Monte Carlo algorithms. In both phantoms, lateral heterogeneities were better modeled by the ray-tracing and the Monte Carlo algorithms than by the analytic TPS computation. Accordingly, when the pencil beam crossed lateral heterogeneities, the range errors mapped by the direct algorithm matched the Monte Carlo maps better than those obtained by the analytic algorithm. Finally, the simplicity of the ray-tracing algorithm allowed implementing a prototype procedure for automated spatial alignment. The ray-tracing algorithm can reliably replace the TPS method in MLIC PR for in

  3. Disposable pencil graphite electrode modified with peptide nanotubes for Vitamin B12 analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pala, Betül Bozdoğan [Nanotechnology and Nanomedicine Division, Institute of Science, Hacettepe University, 06800 Ankara (Turkey); Vural, Tayfun [Department of Chemistry, Faculty of Science, Hacettepe University, 06800 Beytepe, Ankara (Turkey); Kuralay, Filiz [Department of Chemistry, Faculty of Science and Arts, Ordu University, 52200 Ordu (Turkey); Çırak, Tamer [Nanotechnology and Nanomedicine Division, Institute of Science, Hacettepe University, 06800 Ankara (Turkey); Bolat, Gülçin; Abacı, Serdar [Department of Chemistry, Faculty of Science, Hacettepe University, 06800 Beytepe, Ankara (Turkey); Denkbaş, Emir Baki, E-mail: denkbas@hacettepe.edu.tr [Department of Chemistry, Faculty of Science, Hacettepe University, 06800 Beytepe, Ankara (Turkey)

    2014-06-01

    In this study, peptide nanostructures from diphenylalanine were synthesized in various solvents with various polarities and characterized with Scanning Electron Microscopy (SEM) and Powder X-ray Diffraction (PXRD) techniques. Formation of peptide nanofibrils, nanovesicles, nanoribbons, and nanotubes was observed in the different solvent media. In order to investigate the effects of peptide nanotubes (PNT) on the electrochemical behavior of disposable pencil graphite electrodes (PGE), electrode surfaces were modified with the fabricated peptide nanotubes. The electrochemical activity of the pencil graphite electrode was increased by the deposition of PNTs on the surface. The effects of the solvent type, the peptide nanotube concentration, and the passive adsorption time of peptide nanotubes on the pencil graphite electrode were studied. For further electrochemical studies, electrodes were modified for 30 min by immobilizing PNTs, which were prepared in water at 6 mg/mL concentration. Vitamin B12 analyses were performed by the Square Wave (SW) voltammetry method using the modified PGEs. The obtained data showed linearity over the range of 0.2 μM to 9.50 μM Vitamin B12 concentration with high sensitivity. The results showed that PNT-modified PGEs were highly simple, fast, cost effective, and feasible for the electro-analytical determination of Vitamin B12 in real samples.

  4. A wave propagation matrix method in semiclassical theory

    International Nuclear Information System (INIS)

    Lee, S.Y.; Takigawa, N.

    1977-05-01

    A wave propagation matrix method is used to derive the semiclassical formulae of the multi-turning-point problem. A phase shift matrix and a barrier transformation matrix are introduced to describe the processes of a particle travelling through a potential well and crossing a potential barrier, respectively. The wave propagation matrix is given by products of phase shift matrices and barrier transformation matrices. The method is then applied to study scattering by surface transparent potentials and the Bloch wave in solids.
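
    The idea of composing elementary propagation matrices can be illustrated with the simplest textbook case, one-dimensional transmission through a rectangular barrier; the interface matrices below play the role of the phase-shift and barrier-transformation matrices mentioned above, and the potential, energy and units (ħ = m = 1) are arbitrary illustrative choices.

```python
import numpy as np

def interface(k1, k2, a):
    """2x2 matrix relating the plane-wave amplitudes just left of x = a (wavevector k1)
    to those just right of it (wavevector k2), from continuity of psi and psi'."""
    p = (k1 + k2) / (2 * k1)
    m = (k1 - k2) / (2 * k1)
    return np.array([[p * np.exp(1j * (k2 - k1) * a), m * np.exp(-1j * (k2 + k1) * a)],
                     [m * np.exp(1j * (k2 + k1) * a), p * np.exp(-1j * (k2 - k1) * a)]])

# Rectangular barrier of height V0 on [0, L]; hbar = mass = 1 (illustrative units).
E, V0, L = 1.0, 2.0, 1.0
k_out = np.sqrt(2 * E + 0j)          # wavevector outside the barrier
k_in = np.sqrt(2 * (E - V0) + 0j)    # imaginary inside when E < V0 (tunnelling)

# The full propagation matrix is the ordered product of the interface matrices.
M = interface(k_out, k_in, 0.0) @ interface(k_in, k_out, L)
T = abs(1.0 / M[0, 0]) ** 2          # transmission probability
print(T)
```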

  5. Measuring Vitamin C Content of Commercial Orange Juice Using a Pencil Lead Electrode

    Science.gov (United States)

    King, David; Friend, Jeffrey; Kariuki, James

    2010-01-01

    A pencil lead successfully served as an electrode for the determination of ascorbic acid in commercial orange juice. Cyclic voltammetry was used as an electrochemical probe to measure the current produced from the oxidation of ascorbic acid with a variety of electrodes. The data demonstrate that the less expensive pencil lead electrode gives…

  6. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
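
    Since the record describes MSM as a generalization of the classical Gauss-Seidel and SOR iterations, the following background sketch shows the classical Gauss-Seidel splitting A = M + N with M the lower triangle; it is not the paper's MSM, and the diagonally dominant test system is made up.

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Classical Gauss-Seidel: split A = M + N with M = np.tril(A) (D + L) and N the
    strictly upper part, then iterate x_{k+1} = M^{-1} (b - N x_k)."""
    M = np.tril(A)
    N = A - M
    x = np.zeros_like(b)
    for _ in range(iters):
        x = np.linalg.solve(M, b - N @ x)
    return x

# Diagonally dominant test system, for which Gauss-Seidel is guaranteed to converge.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
print(np.linalg.solve(A, b))
```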

  7. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  8. Effects of physics change in Monte Carlo code on electron pencil beam dose distributions

    International Nuclear Information System (INIS)

    Toutaoui, Abdelkader; Khelassi-Toutaoui, Nadia; Brahimi, Zakia; Chami, Ahmed Chafik

    2012-01-01

    Pencil beam algorithms used in computerized electron beam dose planning are usually described using the small-angle multiple scattering theory. Alternatively, the pencil beams can be generated by Monte Carlo simulation of electron transport. In a previous work, the 4th version of the Electron Gamma Shower (EGS) Monte Carlo code was used to obtain dose distributions from monoenergetic electron pencil beams, with incident energies between 1 MeV and 50 MeV, interacting at the surface of a large cylindrical homogeneous water phantom. In 2000, a new version of this Monte Carlo code was made available by the National Research Council of Canada (NRC), which includes various improvements in its electron-transport algorithms. In the present work, we were interested to see if the new physics in this version produces pencil beam dose distributions very different from those calculated with the older one. The purpose of this study is to quantify as well as to understand these differences. We have compared a series of pencil beam dose distributions scored in cylindrical geometry, for electron energies between 1 MeV and 50 MeV, calculated with the two versions of the Electron Gamma Shower Monte Carlo code. The calculated and compared data include isodose distributions, radial dose distributions and fractions of energy deposition. Our results for radial dose distributions show agreement within 10% between doses calculated by the two codes for voxels close to the pencil beam central axis, while the differences are up to 30% at larger distances. For fractions of energy deposition, the results of EGS4 are in good agreement (within 2%) with those calculated by EGSnrc at shallow depths for all energies, whereas a slightly worse agreement (15%) is observed at greater depths. These differences may be mainly attributed to the different multiple scattering models for electron transport adopted in these two codes and the inclusion of the spin effect, which produces an increase of the effective range of

  9. ABCD Matrix Method a Case Study

    CERN Document Server

    Seidov, Zakir F; Yahalom, Asher

    2004-01-01

    In the Israeli Electrostatic Accelerator FEL, the distance between the accelerator's end and the wiggler's entrance is about 2.1 m, and a 1.4 MeV electron beam is transported through this space using four similar quadrupoles (FODO channel). The transfer matrix method (ABCD matrix method) was used for simulating the beam transport; a set of programs was written in several programming languages (MATHEMATICA, MATLAB, MATCAD, MAPLE) and reasonable agreement is demonstrated between experimental results and simulations. Comparison of the ABCD matrix method with direct "numerical experiments" using the EGUN, ELOP, and GPT programs, with and without taking into account space-charge effects, showed the agreement to be good enough as well. Also the inverse problem of finding the emittance of the electron beam at the S1 screen position (before the FODO channel), by using the spot image at the S2 screen position (after the FODO channel) as a function of quad currents, is considered. Spot and beam at both screens are described as tilted eel...
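
    A minimal sketch of the ABCD (transfer-matrix) bookkeeping in the thin-lens approximation is given below; the drift length and focal length are made-up numbers rather than the parameters of the EA-FEL FODO channel.

```python
import numpy as np

def drift(L):
    """Drift space of length L (one transverse plane, (x, x') coordinates)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f (negative f = defocusing)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# One FODO-like cell: focusing quad, drift, defocusing quad, drift (illustrative numbers).
# Matrices compose right to left: the first element the beam meets is the rightmost factor.
f, L = 0.5, 0.7
M_cell = drift(L) @ thin_quad(-f) @ drift(L) @ thin_quad(f)

ray = np.array([1e-3, 0.0])   # 1 mm offset, zero slope
print(M_cell @ ray)
```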

  10. Pencil lead microelectrode and the application on cell dielectrophoresis

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Bo-Chuan; Cheng, Tzong-Jih; Shih, Syuan-He [Department of Bio-Industrial Mechatronics Engineering, National Taiwan University, 136 Chou-Shan Rd., Taipei City 106, Taiwan (China); Chen, Richie L.C., E-mail: rlcchen@ntu.edu.tw [Department of Bio-Industrial Mechatronics Engineering, National Taiwan University, 136 Chou-Shan Rd., Taipei City 106, Taiwan (China)

    2011-11-30

    A microelectrode was fabricated by electrochemical etching of a pencil lead (0.5 mm in diameter) in 1.0 M NaOH aqueous solution. The pencil lead was dipped into the solution and then an ac voltage (3.0 Vrms for 10 min) was imposed against a stainless steel plate under mild stirring (450 rpm). The electrochemically sharpened pencil tip was about 10 μm in diameter (12 ± 3 μm, n = 5), and the lateral part was insulated within a polypropylene micro-pipette tip (2-200 μL volume range). The cyclic voltammograms recorded in 2.0 mM ferricyanide/ferrocyanide buffer solution (pH 7.0) show a low capacitive current and the typical sigmoidal signal of micro-sized electrodes. The microelectrode was used to perform dielectrophoresis of polystyrene latex microbeads (nominal diameter of 3 μm) and human red blood cells. A conducting glass (indium tin oxide coated glass, 40 mm × 40 mm × 1 mm) served as the counter electrode (0.5 mm beneath the microelectrode) to generate the asymmetrical electric field and also as the window for microscopic observation. With the sinusoidal bias voltage (30 Vrms) ranging from 20 Hz to 2 MHz, positive and negative dielectrophoretic phenomena were identified.

  11. The finite element response Matrix method

    International Nuclear Information System (INIS)

    Nakata, H.; Martin, W.R.

    1983-01-01

    A new method for global reactor core calculations is described. This method is based on a unique formulation of the response matrix method, implemented with a higher-order finite element method. The unique aspects of this approach are twofold. First, there are two levels to the overall calculational scheme: the local or assembly level and the global or core level. Second, the response matrix scheme, which is formulated at both levels, consists of two separate response matrices rather than one response matrix as is generally the case. These separate response matrices are seen to be quite beneficial for the criticality eigenvalue calculation, because they are independent of k_eff. The response matrices are generated from a Galerkin finite element solution to the weak form of the diffusion equation, subject to an arbitrary incoming current and an arbitrary distributed source. Calculational results are reported for two test problems, the two-dimensional International Atomic Energy Agency benchmark problem and a two-dimensional pressurized water reactor test problem (Biblis reactor), and they compare well with standard coarse-mesh methods with respect to accuracy and efficiency. Moreover, the accuracy (and capability) is comparable to fine-mesh methods for a fraction of the computational cost. Extension of the method to treat heterogeneous assemblies and spatial depletion effects is discussed

  12. POLLA-NESC, Resonance Parameter R-Matrix to S-Matrix Conversion by Reich-Moore Method

    International Nuclear Information System (INIS)

    Saussure, G. de; Perez, R.B.

    1975-01-01

    1 - Description of problem or function: The program transforms a set of r-matrix nuclear resonance parameters into a set of equivalent s-matrix (or Kapur-Peierls) resonance parameters. 2 - Method of solution: The program utilizes the multilevel formalism of Reich and Moore and avoids diagonalization of the level matrix. The parameters are obtained by a direct partial fraction expansion of the Reich-Moore expression of the collision matrix. This approach appears simpler and faster when the number of fission channels is known and small. The method is particularly useful when a large number of levels must be considered because it does not require diagonalization of a large level matrix. 3 - Restrictions on the complexity of the problem: By DIMENSION statements, the program is limited to maxima of 100 levels and 5 channels

  13. A Case Study in Proton Pencil-Beam Scanning Delivery

    International Nuclear Information System (INIS)

    Kooy, Hanne M.; Clasie, Benjamin M.; Lu, H.-M.; Madden, Thomas M.; Bentefour, Hassan; Depauw, Nicolas M.S.; Adams, Judy A.; Trofimov, Alexei V.; Demaret, Denis; Delaney, Thomas F.; Flanz, Jacob B.

    2010-01-01

    Purpose: We completed an implementation of pencil-beam scanning (PBS), a technology whereby a focused beam of protons, of variable intensity and energy, is scanned over a plane perpendicular to the beam axis and in depth. The aim of radiotherapy is to improve the target to healthy tissue dose differential. We illustrate how PBS achieves this aim in a patient with a bulky tumor. Methods and Materials: Our first deployment of PBS uses 'broad' pencil-beams ranging from 20 to 35 mm (full-width-half-maximum) over the range interval from 32 to 7 g/cm². Such beam-brushes offer a unique opportunity for treating bulky tumors. We present a case study of a large (4,295 cc clinical target volume) retroperitoneal sarcoma treated to 50.4 Gy relative biological effectiveness (RBE) (presurgery) using a course of photons and protons to the clinical target volume and a course of protons to the gross target volume. Results: We describe our system and present the dosimetry for all courses and provide an interdosimetric comparison. Discussion: The use of PBS for bulky targets reduces the complexity of treatment planning and delivery compared with collimated proton fields. In addition, PBS obviates, especially for cases as presented here, the significant cost incurred in the construction of field-specific hardware. PBS offers improved dose distributions, reduced treatment time, and reduced cost of treatment.

  14. A study of lateral fall-off (penumbra) optimisation for pencil beam scanning (PBS) proton therapy

    Science.gov (United States)

    Winterhalter, C.; Lomax, A.; Oxley, D.; Weber, D. C.; Safai, S.

    2018-01-01

    The lateral fall-off is crucial for sparing organs at risk in proton therapy. It is therefore of high importance to minimize the penumbra for pencil beam scanning (PBS). Three optimisation approaches are investigated: edge-collimated uniformly weighted spots (collimation), pencil beam optimisation of uncollimated pencil beams (edge-enhancement) and the optimisation of edge-collimated pencil beams (collimated edge-enhancement). To deliver energies below 70 MeV, these strategies are evaluated in combination with the following pre-absorber methods: field-specific, fixed-thickness pre-absorption (fixed), range-specific, fixed-thickness pre-absorption (automatic) and range-specific, variable-thickness pre-absorption (variable). All techniques are evaluated using Monte Carlo simulated square fields in a water tank. For a typical air gap of 10 cm, without a pre-absorber, collimation reduces the penumbra only for water equivalent ranges between 4-11 cm, by up to 2.2 mm. The sharpest lateral fall-off is achieved through collimated edge-enhancement, which lowers the penumbra down to 2.8 mm. When using a pre-absorber, the sharpest fall-offs are obtained when combining collimated edge-enhancement with a variable pre-absorber. For edge-enhancement and large air gaps, it is crucial to minimize the amount of material in the beam. For small air gaps, however, the superior phase space of more energetic beams can be exploited when more material is used. In conclusion, collimated edge-enhancement combined with the variable pre-absorber is the recommended setting to minimize the lateral penumbra for PBS. Without a collimator, it would be favourable to use a variable pre-absorber for large air gaps and an automatic pre-absorber for small air gaps.

  15. 76 FR 38697 - Cased Pencils From China

    Science.gov (United States)

    2011-07-01

    ... China Determination On the basis of the record \\1\\ developed in the subject five-year review, the United... China would be likely to lead to continuation or recurrence of material injury to an industry in the... Publication 4239 (June 2011), entitled Cased Pencils from China: Investigation No. 731-TA-669 (Third Review...

  16. How to Simply Demonstrate Diamagnetic Levitation with Pencil Lead

    Science.gov (United States)

    Koudelkova, Vera

    2016-01-01

    A new, simple arrangement to demonstrate diamagnetic levitation is presented. It uses a pencil lead levitating in a track built from neodymium magnets. This arrangement can also be used as a classroom experiment.

  17. A Pencil Beam for the Linac4 commissioning

    CERN Document Server

    Lallement, JB

    2010-01-01

    In order to characterize the different accelerating structures and transport lines of Linac4 and to proceed with its commissioning, we need to produce a low-current, low-emittance beam. This note describes the generation of two pencil beams and their dynamics through the Linac.

  18. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of a random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.

  19. Analysis of Nonlinear Dynamics by Square Matrix Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Li Hua [Brookhaven National Lab. (BNL), Upton, NY (United States). Energy and Photon Sciences Directorate. National Synchrotron Light Source II

    2016-07-25

    The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high order calculations to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  20. Response rate of bricklayers and supervisors on an internet or a paper-and-pencil questionnaire

    NARCIS (Netherlands)

    Boschman, Julitta S.; van der Molen, Henk F.; Frings-Dresen, Monique H. W.; Sluiter, Judith K.

    2012-01-01

    It is unclear whether or not internet surveys yield response rates comparable to paper-and-pencil surveys for specific occupational groups, such as construction workers. The objective of this study was to examine the differences in response rates between a paper-and-pencil questionnaire and an

  1. Transformation Matrix for Time Discretization Based on Tustin’s Method

    Directory of Open Access Journals (Sweden)

    Yiming Jiang

    2014-01-01

    Full Text Available This paper studies rules in the transformation of transfer functions through time discretization. A method of using a transformation matrix to realize the bilinear transform (also known as Tustin's method) is presented. This method can be described as a conversion between the coefficients of transfer functions, which is expressed as a transformation by a certain matrix. For a polynomial of degree n, the corresponding transformation matrix of order n exists and is unique. Furthermore, the transformation matrix can be decomposed into an upper triangular matrix multiplied by another lower triangular matrix, and both have an obvious regular structure. The proposed method can achieve the rapid bilinear transform used in the automatic design of digital filters. The result of numerical simulation verifies the correctness of the theoretical results. Moreover, it can also be extended to other similar problems. The example at the end illustrates this point.
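
    The degree-1 case of the coefficient-transformation idea can be written out explicitly and cross-checked against scipy's bilinear routine; the analog prototype filter and sample rate below are illustrative, and the 2 × 2 matrix is only the first-order special case of the general transformation matrix discussed in the paper.

```python
import numpy as np
from scipy.signal import bilinear

# Analog first-order low-pass H(s) = wc / (s + wc); fs is an illustrative sample rate.
wc, fs = 2 * np.pi * 10.0, 200.0

# Degree-1 instance of the coefficient transformation: under s -> 2*fs*(z - 1)/(z + 1),
# a polynomial c1*s + c0 maps (after clearing the factor (z + 1)) to the z-polynomial
# with coefficients T1 @ [c1, c0], highest power first.
T1 = np.array([[2 * fs, 1.0],
               [-2 * fs, 1.0]])
num_z = T1 @ np.array([0.0, wc])   # numerator wc
den_z = T1 @ np.array([1.0, wc])   # denominator s + wc

# Cross-check: scipy's bilinear() should give the same filter, normalized so that
# the leading denominator coefficient is 1.
b_ref, a_ref = bilinear([wc], [1.0, wc], fs=fs)
print(num_z / den_z[0], den_z / den_z[0])
print(b_ref, a_ref)
```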

  2. An iterative method to invert the LTSn matrix

    Energy Technology Data Exchange (ETDEWEB)

    Cardona, A.V.; Vilhena, M.T. de [UFRGS, Porto Alegre (Brazil)

    1996-12-31

    Recently Vilhena and Barichello proposed the LTSn method to solve, analytically, the discrete ordinates problem (Sn problem) in transport theory. The main feature of this method consists in the application of the Laplace transform to the set of Sn equations and the solution of the resulting algebraic system for the transport flux. Barichello solved the linear system containing the parameter s by applying the definition of matrix inversion, exploiting the structure of the LTSn matrix. In this work, a new scheme is proposed to invert the LTSn matrix, decomposing it into blocks and recursively inverting these blocks.
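
    As generic background for the block-decomposition idea, the sketch below inverts a 2 × 2 block matrix via the Schur complement; the test matrix is a random well-conditioned one and does not have the LTSn structure exploited in the paper.

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Invert M = [[A, B], [C, D]] blockwise via the Schur complement S = D - C A^{-1} B."""
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bottom_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right], [bottom_left, Sinv]])

# Quick check on a random, well-conditioned matrix (illustrative, not the LTSn matrix).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)) + 6 * np.eye(6)
A, B, C, D = M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:]
print(np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M)))
```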

  3. Validation of a Paper and Pencil Test Battery for the Diagnosis of Minimal Hepatic Encephalopathy in Korea.

    Science.gov (United States)

    Jeong, Jae Yoon; Jun, Dae Won; Bai, Daiseg; Kim, Ji Yean; Sohn, Joo Hyun; Ahn, Sang Bong; Kim, Sang Gyune; Kim, Tae Yeob; Kim, Hyoung Su; Jeong, Soung Won; Cho, Yong Kyun; Song, Do Seon; Kim, Hee Yeon; Jung, Young Kul; Yoon, Eileen L

    2017-09-01

    The aim of this study was to validate a new paper and pencil test battery to diagnose minimal hepatic encephalopathy (MHE) in Korea. A new paper and pencil test battery was composed of number connection test-A (NCT-A), number connection test-B (NCT-B), digit span test (DST), and symbol digit modality test (SDMT). The norm of the new test was based on 315 healthy individuals between the ages of 20 and 70 years old. Another 63 healthy subjects (n = 31) and cirrhosis patients (n = 32) were included as a validation cohort. All participants completed the new paper and pencil test, a critical flicker frequency (CFF) test and computerized cognitive function test (visual continuous performance test [CPT]). The scores on the NCT-A and NCT-B increased but those of DST and SDMT decreased according to age. Twelve of the cirrhotic patients (37.5%) were diagnosed with MHE based on the new paper and pencil test battery. The total score of the paper and pencil test battery showed good positive correlation with the CFF (r = 0.551, P cognitive function test. Also, this score was lower in patients with MHE compared to those without MHE (P cognitive test decreased significantly in patients with MHE compared to those without MHE. Test-retest reliability was comparable. In conclusion, the new paper and pencil test battery including NCT-A, NCT-B, DST, and SDMT showed good correlation with neuropsychological tests. This new paper and pencil test battery could help to discriminate patients with impaired cognitive function in cirrhosis (registered at Clinical Research Information Service [CRIS], https://cris.nih.go.kr/cris, KCT0000955). © 2017 The Korean Academy of Medical Sciences.

  4. X-ray pencil beam facility for optics characterization

    Science.gov (United States)

    Krumrey, Michael; Cibik, Levent; Müller, Peter; Bavdaz, Marcos; Wille, Eric; Ackermann, Marcelo; Collon, Maximilien J.

    2010-07-01

    The Physikalisch-Technische Bundesanstalt (PTB) has used synchrotron radiation for the characterization of optics and detectors for astrophysical X-ray telescopes for more than 20 years. At a dedicated beamline at BESSY II, a monochromatic pencil beam has been used by ESA and cosine Research since the end of 2005 for the characterization of novel silicon pore optics, currently under development for the International X-ray Observatory (IXO). At this beamline, a photon energy of 2.8 keV is selected by a Si channel-cut monochromator. Two apertures at distances of 12.2 m and 30.5 m from the dipole source form a pencil beam with a typical diameter of 100 μm and a divergence below 1". The optics to be investigated is placed in a vacuum chamber on a hexapod; the angular positioning is controlled by means of autocollimators to below 1". The reflected beam is registered at 5 m distance from the optics with a CCD-based camera system. This contribution presents the design and performance of the upgrade of this beamline to cope with the updated design for IXO. The distance between optics and detector can now be 20 m. For double reflection from an X-ray Optical Unit (XOU) and incidence angles up to 1.4°, this corresponds to a vertical translation of the camera by 2 m. To achieve high reflectance at this angle even with uncoated silicon, a lower photon energy of 1 keV is available from a pair of W/B4C multilayers. For coated optics, a high-energy option can provide a pencil beam of 7.6 keV radiation.

  5. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  6. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
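
    As an illustration of the maximum-likelihood reconstruction mentioned above, the following toy sketch runs the standard MLEM iteration with an explicit (random, made-up) system matrix; it is not the Lawrence Berkeley Laboratory implementation, and no ring-geometry system matrix is modelled.

```python
import numpy as np

def mlem(A, y, iters=100):
    """MLEM iteration x <- x / (A^T 1) * A^T (y / (A x)) for Poisson data y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)    # guard against division by zero
        x *= (A.T @ ratio) / sens
    return x

# Toy problem: random nonnegative system matrix and Poisson-distributed projection data.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(200, 50))
x_true = rng.uniform(0.0, 5.0, size=50)
y = rng.poisson(A @ x_true).astype(float)
print(np.round(mlem(A, y)[:5], 2))
print(np.round(x_true[:5], 2))
```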

  7. Students’ thinking preferences in solving mathematics problems based on learning styles: a comparison of paper-pencil and geogebra

    Science.gov (United States)

    Farihah, Umi

    2018-04-01

    The purpose of this study was to analyze students' thinking preferences in solving mathematics problems using paper and pencil compared to GeoGebra, based on their learning styles. This research employed a qualitative descriptive design. The subjects were six eighth-grade students of Madrasah Tsanawiyah Negeri 2 Trenggalek, East Java, Indonesia, academic year 2015-2016, with different learning styles: two visual students, two auditory students, and two kinesthetic students. During the interviews, the students worked on the Paper-and-Pencil-based Tasks (PBTs) and the GeoGebra-based Tasks (GBTs). By investigating the students' solution methods and the representations used in solving the problems, the researcher compared their visual and non-visual thinking preferences in solving mathematics problems with and without GeoGebra. The analysis showed that the comparison between the students' PBT and GBT solutions, whether visual, auditory, or kinesthetic, revealed how GeoGebra can influence their solution method. When using GeoGebra, the students preferred visual methods in the GBTs over non-visual methods.

  8. Effect of pencil grasp on the speed and legibility of handwriting in children.

    Science.gov (United States)

    Schwellnus, Heidi; Carnahan, Heather; Kushki, Azadeh; Polatajko, Helene; Missiuna, Cheryl; Chau, Tom

    2012-01-01

    Pencil grasps other than the dynamic tripod may be functional for handwriting. This study examined the impact of grasp on handwriting speed and legibility. We videotaped 120 typically developing fourth-grade students while they performed a writing task. We categorized the grasps they used and evaluated their writing for speed and legibility using a handwriting assessment. Using linear regression analysis, we examined the relationship between grasp and handwriting. We documented six categories of pencil grasp: four mature grasp patterns, one immature grasp pattern, and one alternating grasp pattern. Multiple linear regression results revealed no significant effect for mature grasp on either legibility or speed. Pencil grasp patterns did not influence handwriting speed or legibility in this sample of typically developing children. This finding adds to the mounting body of evidence that alternative grasps may be acceptable for fast and legible handwriting. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  9. A Golub-Kahan-type reduction method for matrix pairs

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.; Yu, X.

    2015-01-01

    We describe a novel method for reducing a pair of large matrices {A;B} to a pair of small matrices {H;K}. The method is an extension of Golub-Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix. Applications to Tikhonov regularization of large
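
    For reference, the sketch below carries out a few steps of the standard (single-matrix) Golub-Kahan bidiagonalization, to which the pair method of the record reduces when B is the identity; the step count and the random test matrix are illustrative, and no reorthogonalization is performed.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A with starting vector b.
    Returns U (m x (k+1)), V (n x k) and the (k+1) x k lower-bidiagonal matrix Bk
    satisfying A @ V = U @ Bk (up to rounding; no reorthogonalization is done)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    Bk = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    v_prev = np.zeros(n)
    for i in range(k):
        v = A.T @ U[:, i] - (Bk[i, i - 1] if i > 0 else 0.0) * v_prev
        alpha = np.linalg.norm(v)
        V[:, i] = v / alpha
        Bk[i, i] = alpha
        u = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(u)
        U[:, i + 1] = u / beta
        Bk[i + 1, i] = beta
        v_prev = V[:, i]
    return U, V, Bk

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
U, V, Bk = golub_kahan(A, b, 5)
print(np.allclose(A @ V, U @ Bk))   # the fundamental relation A V_k = U_{k+1} B_k
```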

  10. A Golub-Kahan-type reduction method for matrix pairs

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.; Yu, X.

    2015-01-01

    We describe a novel method for reducing a pair of large matrices {A,B} to a pair of small matrices {H,K}. The method is an extension of Golub–Kahan bidiagonalization to matrix pairs, and simplifies to the latter method when B is the identity matrix. Applications to Tikhonov regularization of large

  11. 76 FR 2337 - Certain Cased Pencils From the People's Republic of China: Preliminary Results and Partial...

    Science.gov (United States)

    2011-01-13

    ... has autonomy from the government in making decisions regarding the selection of management; and (4... Financial Statistics. When relying on prices of imports into India as surrogate values, we have disregarded... the 2006-2007 financial statement of Triveni Pencils Ltd. (``Triveni''), an Indian producer of pencils...

  12. 78 FR 54452 - Certain Cased Pencils From the People's Republic of China: Rescission of Antidumping Duty...

    Science.gov (United States)

    2013-09-04

    ... the People's Republic of China: Rescission of Antidumping Duty Administrative Review; 2011-2012 AGENCY... certain cased pencils (pencils) from the People's Republic of China (PRC) for the period December 1, 2011, through November 30, 2012, based on the withdrawal of the review request by one company and on the...

  13. SU-F-T-209: Multicriteria Optimization Algorithm for Intensity Modulated Radiation Therapy Using Pencil Proton Beam Scanning

    Energy Technology Data Exchange (ETDEWEB)

    Beltran, C; Kamal, H [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm that is developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  14. SU-F-T-209: Multicriteria Optimization Algorithm for Intensity Modulated Radiation Therapy Using Pencil Proton Beam Scanning

    International Nuclear Information System (INIS)

    Beltran, C; Kamal, H

    2016-01-01

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment plan and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (graphics processing unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size in the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  15. Pre-form ceramic matrix composite cavity and method of forming and method of forming a ceramic matrix composite component

    Science.gov (United States)

    Monaghan, Philip Harold; Delvaux, John McConnell; Taxacher, Glenn Curtis

    2015-06-09

    A pre-form CMC cavity and method of forming pre-form CMC cavity for a ceramic matrix component includes providing a mandrel, applying a base ply to the mandrel, laying-up at least one CMC ply on the base ply, removing the mandrel, and densifying the base ply and the at least one CMC ply. The remaining densified base ply and at least one CMC ply form a ceramic matrix component having a desired geometry and a cavity formed therein. Also provided is a method of forming a CMC component.

  16. The inverse spectral problem for pencils of differential operators

    International Nuclear Information System (INIS)

    Guseinov, I M; Nabiev, I M

    2007-01-01

    The inverse problem of spectral analysis for a quadratic pencil of Sturm-Liouville operators on a finite interval is considered. A uniqueness theorem is proved, a solution algorithm is presented, and sufficient conditions for the solubility of the inverse problem are obtained. Bibliography: 31 titles.

  17. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    International Nuclear Information System (INIS)

    Marsolat, F; De Marzi, L; Mazal, A; Pouzoulet, F

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distribution of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. (paper)
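
    For illustration, a minimal sketch of how a dose-averaged LET and the L_sec ratio defined in this abstract might be computed from Monte Carlo tallies; the array names are hypothetical and the scoring details (binning, particle filters) are not taken from the paper.

```python
import numpy as np

def dose_averaged_let(dose, let):
    """Dose-averaged LET of a set of contributions: sum(d_i * L_i) / sum(d_i)."""
    dose = np.asarray(dose, float)
    let = np.asarray(let, float)
    return np.sum(dose * let) / np.sum(dose)

def l_sec(letd_all, letd_primary):
    """Correction factor per depth bin: ratio of LET_d scored for all protons and
    deuterons to LET_d scored for primary protons only (hypothetical tallies)."""
    return np.asarray(letd_all, float) / np.asarray(letd_primary, float)
```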

  18. Intranet Effectiveness: A Public Relations Paper-and-Pencil Checklist.

    Science.gov (United States)

    Murgolo-Poore, Marie E.; Pitt, Leyland F.; Ewing, Michael T.

    2002-01-01

    Describes a process directed at developing a simple paper-and-pencil checklist to assess Intranet effectiveness. Discusses the checklist purification procedure, and attempts to establish reliability and validity for the list. Concludes by identifying managerial applications of the checklist, recognizing the limitations of the approach, and…

  19. Comparison of matrix methods for elastic wave scattering problems

    International Nuclear Information System (INIS)

    Tsao, S.J.; Varadan, V.K.; Varadan, V.V.

    1983-01-01

    This article briefly describes the T-matrix method and the MOOT (method of optimal truncation) of elastic wave scattering as they apply to 2-D, SH-wave problems as well as 3-D elastic wave problems. The two methods are compared for scattering by elliptical cylinders as well as oblate spheroids of various eccentricity as a function of frequency. Convergence and symmetry of the scattering cross section are also compared for ellipses and spheroidal cavities of different aspect ratios. Both the T-matrix approach and the MOOT were programmed on an Amdahl 470 computer using double precision arithmetic. Although the T-matrix method and MOOT are not always in agreement, it is in no way implied that any of the published results using MOOT are in error

  20. Eddy current testing of PWR fuel pencils in the pool of the Osiris reactor

    International Nuclear Information System (INIS)

    Faure, M.; Marchand, L.

    1983-12-01

    A nondestructive testing bench is described. It is devoted to the examination of high residual power fuel pencils without stress on the cladding or interference with cooling. Guiding by fluid bearings decreases the background noise. Scanning speed is limited only by safety criteria and the data acquisition configuration. Simultaneous control of various parameters is possible. Associated with an irradiation loop, loaded and unloaded in a reactor swimming pool, this bench can follow fuel pencil degradation after each irradiation cycle

  1. Refractive index inversion based on Mueller matrix method

    Science.gov (United States)

    Fan, Huaxi; Wu, Wenyuan; Huang, Yanhua; Li, Zhaozhao

    2016-03-01

    Based on the Stokes and Jones vector formalisms, the relationship between the Mueller matrix elements and the refractive index was studied and simplified, and an expression for refractive index inversion was derived from the Mueller matrix. The Mueller matrix elements for specular reflection were simulated at different incidence angles in order to analyze the influence of the angle of incidence and the refractive index, and the results were verified by measuring the Mueller matrix elements of a polished metal surface. The study shows that, under specular reflection, the result of the Mueller matrix inversion is consistent with experiment and can be used as a refractive index inversion method, providing a new approach for target detection and recognition.

  2. A new paper and pencil task reveals adult false belief reasoning bias.

    Science.gov (United States)

    Coburn, Patricia I; Bernstein, Daniel M; Begeer, Sander

    2015-09-01

    Theory of mind (ToM) is the ability to take other people's perspective by inferring their mental state. Most 6-year olds pass the change-of-location false belief task that is commonly used to assess ToM. However, the change-of-location task is not suitable for individuals over 5 years of age, due to its discrete response options. In two experiments, we used a paper and pencil version of a modified change-of-location task (the Real Object Sandbox task) to assess false belief reasoning continuously rather than discretely in adults. Participants heard nine change-of-location scenarios and answered a critical question after each. The memory control questions only required the participant to remember the object's original location, whereas the false belief questions required participants to take the perspective of the protagonist. Participants were more accurate on memory trials than trials requiring perspective taking, and performance on paper and pencil trials correlated with corresponding trials on the Real Object Sandbox task. The Paper and Pencil Sandbox task is a convenient continuous measure of ToM that could be administered to a wide range of age groups.

  3. SU-F-T-182: A Stochastic Approach to Daily QA Tolerances On Spot Properties for Proton Pencil Beam Scanning

    International Nuclear Information System (INIS)

    St James, S; Bloch, C; Saini, J

    2016-01-01

    Purpose: Proton pencil beam scanning is used clinically across the United States. There are no current guidelines on tolerances for daily QA specific to pencil beam scanning, specifically related to the individual spot properties (spot width). Using a stochastic method to determine tolerances has the potential to optimize tolerances on individual spots and decrease the number of false positive failures in daily QA. Individual and global spot tolerances were evaluated. Methods: As part of daily QA for proton pencil beam scanning, a field of 16 spots (corresponding to 8 energies) is measured using an array of ion chambers (Matrixx, IBA). Each individual spot is fit to two Gaussian functions (x,y). The spot width (σ) in × and y are recorded (32 parameters). Results from the daily QA were retrospectively analyzed for 100 days of data. The deviations of the spot widths were histogrammed and fit to a Gaussian function. The stochastic spot tolerance was taken to be the mean ± 3σ. Using these results, tolerances were developed and tested against known deviations in spot width. Results: The individual spot tolerances derived with the stochastic method decreased in 30/32 instances. Using the previous tolerances (± 20% width), the daily QA would have detected 0/20 days of the deviation. Using a tolerance of any 6 spots failing the stochastic tolerance, 18/20 days of the deviation would have been detected. Conclusion: Using a stochastic method we have been able to decrease daily tolerances on the spot widths for 30/32 spot widths measured. The stochastic tolerances can lead to detection of deviations that previously would have been picked up on monthly QA and missed by daily QA. This method could be easily extended for evaluation of other QA parameters in proton spot scanning.
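
    A minimal sketch of the stochastic tolerance idea described above: per-spot bands are taken as the mean ± 3σ of the historical deviations, and a daily measurement is flagged if several spots fall outside their bands. Array shapes, function names, and the failure threshold are illustrative assumptions, not the clinic's actual QA code.

```python
import numpy as np

def stochastic_tolerances(history, n_sigma=3.0):
    """history: (n_days, n_spots) array of spot-width deviations from baseline.
    Returns per-spot lower/upper tolerance bands of mean +/- n_sigma * std."""
    mu = history.mean(axis=0)
    sd = history.std(axis=0, ddof=1)
    return mu - n_sigma * sd, mu + n_sigma * sd

def daily_qa_flag(today, lower, upper, n_fail_threshold=6):
    """Flag the daily QA if at least n_fail_threshold spots fall outside their
    individual tolerance bands (the abstract uses 'any 6 spots failing')."""
    n_out = np.sum((today < lower) | (today > upper))
    return n_out >= n_fail_threshold
```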

  4. Pencil-shaped radiation detection ionization chamber

    International Nuclear Information System (INIS)

    Suzuki, A.

    1979-01-01

    A radiation detection ionization chamber is described. It consists of an elongated cylindrical pencil-shaped tubing forming an outer wall of the chamber and a center electrode disposed along the major axis of the tubing. The length of the chamber is substantially greater than the diameter. A cable connecting portion at one end of the chamber is provided for connecting the chamber to a triaxial cable. An end support portion is connected at the other end of the chamber for supporting and tensioning the center electrode. 17 claims

  5. Direct determination of scattering time delays using the R-matrix propagation method

    International Nuclear Information System (INIS)

    Walker, R.B.; Hayes, E.F.

    1989-01-01

    A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably
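
    As context for how an energy derivative of the S matrix yields time delays, the standard Wigner-Smith expression is reproduced below; this is textbook material rather than the paper's own notation.

```latex
\[
  Q(E) = -\,i\hbar\, S^{\dagger}(E)\,\frac{dS(E)}{dE},
  \qquad
  \tau(E) = 2\hbar\,\frac{d\delta(E)}{dE}
  \quad \text{for a single channel with } S(E) = e^{2i\delta(E)} .
\]
```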

  6. Adaptation from Paper-Pencil to Web-Based Administration of a Parent-Completed Developmental Questionnaire for Young Children

    Science.gov (United States)

    Yovanoff, Paul; Squires, Jane; McManus, Suzanne

    2013-01-01

    Adapting traditional paper-pencil instruments to computer-based environments has received considerable attention from the research community due to the possible administration mode effects on obtained measures. When differences due to mode of completion (i.e., paper-pencil, computer-based) are present, threats to measurement validity are posed. In…

  7. The Matrix Element Method at Next-to-Leading Order

    OpenAIRE

    Campbell, John M.; Giele, Walter T.; Williams, Ciaran

    2012-01-01

    This paper presents an extension of the matrix element method to next-to-leading order in perturbation theory. To accomplish this we have developed a method to calculate next-to-leading order weights on an event-by-event basis. This allows for the definition of next-to-leading order likelihoods in exactly the same fashion as at leading order, thus extending the matrix element method to next-to-leading order. A welcome by-product of the method is the straightforward and efficient generation of...

  8. Reference dosimetry of proton pencil beams based on dose-area product: a proof of concept.

    Science.gov (United States)

    Gomà, Carles; Safai, Sairos; Vörös, Sándor

    2017-06-21

    This paper describes a novel approach to the reference dosimetry of proton pencil beams based on dose-area product ([Formula: see text]). It describes the calibration of a large-diameter plane-parallel ionization chamber in terms of dose-area product in a ⁶⁰Co beam, the Monte Carlo calculation, in terms of dose-area product, of beam quality correction factors in proton beams, the Monte Carlo calculation of nuclear halo correction factors, and the experimental determination of [Formula: see text] of a single proton pencil beam. This new approach to reference dosimetry proves to be feasible, as it yields [Formula: see text] values in agreement with the standard and well-established approach of determining the absorbed dose to water at the centre of a broad homogeneous field generated by the superposition of regularly-spaced proton pencil beams.

  9. Pencil beam proton radiography using a multilayer ionization chamber

    Science.gov (United States)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-01

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS), was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9 × 9 square of spots. PRs of an electron-density (with tissue-equivalent inserts) phantom and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, measured and calculated IDD were overlapped in order to compute a map of range errors. On the head phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range errors map was estimated in case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense rod. In the head phantom the range errors were -0.9 ± 2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignment and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0 ± 3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable as the maximum dose to the head phantom was <2 cGyE. By the described 2D method, allowing to discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.

  10. Pencil beam proton radiography using a multilayer ionization chamber.

    Science.gov (United States)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-07

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS), was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9 × 9 square of spots. PRs of an electron-density (with tissue-equivalent inserts) phantom and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, measured and calculated IDD were overlapped in order to compute a map of range errors. On the head phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range errors map was estimated in case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense rod. In the head phantom the range errors were -0.9 ± 2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignment and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0 ± 3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable as the maximum dose to the head phantom was <2 cGyE. By the described 2D method, allowing to discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.

  11. Robust Proton Pencil Beam Scanning Treatment Planning for Rectal Cancer Radiation Therapy

    International Nuclear Information System (INIS)

    Blanco Kiely, Janid Patricia; White, Benjamin M.

    2016-01-01

    Purpose: To investigate, in a treatment plan design and robustness study, whether proton pencil beam scanning (PBS) has the potential to offer advantages, relative to interfraction uncertainties, over photon volumetric modulated arc therapy (VMAT) in a locally advanced rectal cancer patient population. Methods and Materials: Ten patients received a planning CT scan, followed by an average of 4 weekly offline verification CT scans, which were rigidly co-registered to the planning CT. Clinical PBS plans were generated on the planning CT, using a single-field uniform-dose technique with single-posterior and parallel-opposed (LAT) field geometries. The VMAT plans were generated on the planning CT using 2 6-MV, 220° coplanar arcs. Clinical plans were forward-calculated on verification CTs to assess robustness relative to anatomic changes. Setup errors were assessed by forward-calculating clinical plans with a ±5-mm (left-right, anterior-posterior, superior-inferior) isocenter shift on the planning CT. Differences in clinical target volume and organ at risk dose-volume histogram (DVH) indicators between plans were tested for significance using an appropriate Wilcoxon test (P<.05). Results: Dosimetrically, PBS plans were statistically different from VMAT plans, showing greater organ at risk sparing. However, the bladder was statistically identical among LAT and VMAT plans. The clinical target volume coverage was statistically identical among all plans. The robustness test found that all DVH indicators for PBS and VMAT plans were robust, except the LAT's genitalia (V5, V35). The verification CT plans showed that all DVH indicators were robust. Conclusions: Pencil beam scanning plans were found to be as robust as VMAT plans relative to interfractional changes during treatment when posterior beam angles and appropriate range margins are used. Pencil beam scanning dosimetric gains in the bowel (V15, V20) over VMAT suggest that using PBS to treat rectal cancer

  12. Robust Proton Pencil Beam Scanning Treatment Planning for Rectal Cancer Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Blanco Kiely, Janid Patricia, E-mail: jkiely@sas.upenn.edu; White, Benjamin M.

    2016-05-01

    Purpose: To investigate, in a treatment plan design and robustness study, whether proton pencil beam scanning (PBS) has the potential to offer advantages, relative to interfraction uncertainties, over photon volumetric modulated arc therapy (VMAT) in a locally advanced rectal cancer patient population. Methods and Materials: Ten patients received a planning CT scan, followed by an average of 4 weekly offline verification CT scans, which were rigidly co-registered to the planning CT. Clinical PBS plans were generated on the planning CT, using a single-field uniform-dose technique with single-posterior and parallel-opposed (LAT) field geometries. The VMAT plans were generated on the planning CT using 2 6-MV, 220° coplanar arcs. Clinical plans were forward-calculated on verification CTs to assess robustness relative to anatomic changes. Setup errors were assessed by forward-calculating clinical plans with a ±5-mm (left-right, anterior-posterior, superior-inferior) isocenter shift on the planning CT. Differences in clinical target volume and organ at risk dose-volume histogram (DVH) indicators between plans were tested for significance using an appropriate Wilcoxon test (P<.05). Results: Dosimetrically, PBS plans were statistically different from VMAT plans, showing greater organ at risk sparing. However, the bladder was statistically identical among LAT and VMAT plans. The clinical target volume coverage was statistically identical among all plans. The robustness test found that all DVH indicators for PBS and VMAT plans were robust, except the LAT's genitalia (V5, V35). The verification CT plans showed that all DVH indicators were robust. Conclusions: Pencil beam scanning plans were found to be as robust as VMAT plans relative to interfractional changes during treatment when posterior beam angles and appropriate range margins are used. Pencil beam scanning dosimetric gains in the bowel (V15, V20) over VMAT suggest that using PBS to treat rectal

  13. A comparison of paper-and-pencil and computerized forms of Line Orientation and Enhanced Cued Recall Tests.

    Science.gov (United States)

    Aşkar, Petek; Altun, Arif; Cangöz, Banu; Cevik, Vildan; Kaya, Galip; Türksoy, Hasan

    2012-04-01

    The purpose of this study was to assess whether a computerized battery of neuropsychological tests could produce similar results as the conventional forms. Comparisons on 77 volunteer undergraduates were carried out with two neuropsychological tests: Line Orientation Test and Enhanced Cued Recall Test. Firstly, students were assigned randomly across the test medium (paper-and-pencil versus computerized). Secondly, the groups were given the same test in the other medium after a 30-day interval between tests. Results showed that the Enhanced Cued Recall Test-Computer-based did not correlate with the Enhanced Cued Recall Test-Paper-and-pencil results. Line Orientation Test-Computer-based scores, on the other hand, did correlate significantly with the Line Orientation Test-Paper-and-pencil version. In both tests, scores were higher on paper-and-pencil tests compared to computer-based tests. Total score difference between modalities was statistically significant for both Enhanced Cued Recall Tests and for the Line Orientation Test. In both computer-based tests, it took less time for participants to complete the tests.

  14. Feasibility of Pencil Beam Scanned Intensity Modulated Proton Therapy in Breath-hold for Locally Advanced Non-Small Cell Lung Cancer

    DEFF Research Database (Denmark)

    Gorgisyan, Jenny; Munck Af Rosenschold, Per; Perrin, Rosalind

    2017-01-01

    PURPOSE: We evaluated the feasibility of treating patients with locally advanced non-small cell lung cancer (NSCLC) with pencil beam scanned intensity modulated proton therapy (IMPT) in breath-hold. METHODS AND MATERIALS: Fifteen NSCLC patients who had previously received 66 Gy in 33 fractions wi...

  15. Pareto front analysis of 6 and 15 MV dynamic IMRT for lung cancer using pencil beam, AAA and Monte Carlo

    DEFF Research Database (Denmark)

    Ottosson, R O; Hauer, Anna Karlsson; Behrens, C.F.

    2010-01-01

    The pencil beam dose calculation method is frequently used in modern radiation therapy treatment planning even though it is documented to be inaccurate for cases involving large density variations. The inaccuracies are larger for higher beam energies. As a result, low energy beams are...

  16. An improved 4-step commutation method application for matrix converter

    DEFF Research Database (Denmark)

    Guo, Yu; Guo, Yougui; Deng, Wenlang

    2014-01-01

    A novel four-step commutation method is proposed in this paper for a matrix converter cell with three-phase input and one-phase output, obtained from an analysis of published commutation methods for matrix converters. The first and fourth steps can be shorter than the second or third one. The method discussed here is implemented by programming in the VHDL language. Finally, the novel method is verified by experiments.
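
    For orientation, a small sketch of the widely used current-direction-based four-step sequence that commutation methods like this one build on; the cell and step naming is illustrative, and the shortened first and fourth steps are a property of the paper's method, not of this sketch.

```python
def four_step_sequence(outgoing, incoming, current_positive):
    """Gate-signal sequence for current-based four-step commutation between two
    bidirectional switch cells of a matrix converter output phase. Each cell is
    modelled as a pair of anti-series switches: 'fwd' conducts positive load
    current, 'rev' negative. Names and ordering are illustrative."""
    conducting = 'fwd' if current_positive else 'rev'
    idle = 'rev' if current_positive else 'fwd'
    return [
        ('turn off', outgoing, idle),        # 1: remove the non-conducting switch of the outgoing cell
        ('turn on',  incoming, conducting),  # 2: enable the incoming switch carrying the load current
        ('turn off', outgoing, conducting),  # 3: remove the conducting switch of the outgoing cell
        ('turn on',  incoming, idle),        # 4: enable the remaining switch of the incoming cell
    ]

print(four_step_sequence('cell_A', 'cell_B', current_positive=True))
```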

  17. Evaluation of the energy dependence of pencil-type ionization chambers calibrated in standard computed tomography beams

    International Nuclear Information System (INIS)

    Fontes, Ladyjane Pereira; Potiens, Maria da Penha A.

    2015-01-01

    The Instrument Calibration Laboratory of IPEN (LCI - IPEN) performs calibrations of pencil-type ionization chambers (ICs) used in dosimetric surveys of clinical computed tomography (CT) systems. Many users make mistakes when using a calibrated ionization chamber in their CT dosimetry systems. In this work, a methodology was established for determining beam quality correction factors (Kq) from the calibration curve specific to each ionization chamber. Furthermore, it was possible to demonstrate the energy dependence of a pencil-type ionization chamber (IC) calibrated at the LCI - IPEN. (author)

  18. Widening the Scope of R-matrix Methods

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dimitriou, Paraskevi [IAEA, Vienna (Austria); DeBoer, Richard J. [Nieuwland Science Hall, Notre Dame, IN (United States); Kunieda, Satoshi [Nuclear Data Center (JAEA), Tokai (Japan); Paris, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Thompson, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trkov, Andrej [IAEA, Vienna (Austria)

    2016-03-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 7 to 9 December 2015, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The ultimate goal was to initiate an international effort, coordinated by the IAEA, to evaluate charged-particle induced reactions in the resolved-resonance region. Participants reviewed the capabilities of the codes, the different implementations of R-matrix theory and translatability of the R-matrix parameters, the evaluation methods and suitable data formats for broader dissemination. The details of the presentations and technical discussions, as well as the actions that were proposed to achieve the goal of the meeting are summarized in this report.

  19. Application of the R-matrix method to photoionization of molecules.

    Science.gov (United States)

    Tashiro, Motomichi

    2010-04-07

    The R-matrix method has been used for theoretical calculations of electron collisions with atoms and molecules for many years. The method was also formulated to treat the photoionization process; however, its application has been mostly limited to photoionization of atoms. In this work, we implement the R-matrix method to treat the molecular photoionization problem based on the UK R-matrix codes. This method can be used for diatomic as well as polyatomic molecules, with a multiconfigurational description of the electronic states of both the neutral target molecule and the product molecular ion. Test calculations were performed for valence electron photoionization of nitrogen (N(2)) as well as nitric oxide (NO) molecules. The calculated photoionization cross sections and asymmetry parameters agree reasonably well with the available experimental results, suggesting the usefulness of the method for molecular photoionization.

  20. 76 FR 12323 - Certain Cased Pencils From the People's Republic of China: Final Results of the Expedited Third...

    Science.gov (United States)

    2011-03-07

    ... the People's Republic of China: Final Results of the Expedited Third Sunset Review of the Antidumping... antidumping duty order on certain cased pencils (``pencils'') from the People's Republic of China (``PRC''), pursuant to section 751(c) of the Tariff Act of 1930, as amended (``the Act''). See Initiation of Five-Year...

  1. Three-body forces for electrons by the S-matrix method

    International Nuclear Information System (INIS)

    Margaritelli, R.

    1989-01-01

    An electromagnetic three-body potential between electrons is derived by the S-matrix method. This potential can be compared up to a certain point with other electromagnetic potentials (obtained by other methods) encountered in the literature. However, since the potential derived here is far more complete than others, this makes direct comparison with the potentials found in the literature somewhat difficult. These calculations allow a better understanding of the S-matrix method as applied to problems which involve the calculation of three-body nuclear forces (such calculations are performed in order to understand the ³He form factor). Furthermore, these results enable us to decide between two discrepant works which derive the two-pion exchange three-body potential, both by the S-matrix method. (author)

  2. Hybrid transfer-matrix FDTD method for layered periodic structures.

    Science.gov (United States)

    Deinega, Alexei; Belousov, Sergei; Valuev, Ilya

    2009-03-15

    A hybrid transfer-matrix finite-difference time-domain (FDTD) method is proposed for modeling the optical properties of finite-width planar periodic structures. This method can also be applied for calculation of the photonic bands in infinite photonic crystals. We describe the procedure of evaluating the transfer-matrix elements by a special numerical FDTD simulation. The accuracy of the new method is tested by comparing computed transmission spectra of a 32-layered photonic crystal composed of spherical or ellipsoidal scatterers with the results of direct FDTD and layer-multiple-scattering calculations.

  3. A nodal method based on the response-matrix method

    International Nuclear Information System (INIS)

    Cunha Menezes Filho, A. da; Rocamora Junior, F.D.

    1983-02-01

    A nodal approach based on the response-matrix method is presented with the purpose of investigating the possibility of mixing two different allocations in the same problem. It is found that the use of albedo allocation combined with direct-reflection allocation produces good results for homogeneous fast reactor configurations. (Author)

  4. Amperometric sensor for detection of bisphenol A using a pencil graphite electrode modified with polyaniline nanorods and multiwalled carbon nanotubes

    International Nuclear Information System (INIS)

    Poorahong, S.; Thammakhet, C.; Numnuam, A.; Kanatharana, P.; Thavarungkul, P.; Limbut, W.

    2012-01-01

    We report on a simple and highly sensitive amperometric method for the determination of bisphenol A (BPA) using pencil graphite electrodes modified with polyaniline nanorods and multiwalled carbon nanotubes. The modified electrodes display enhanced electroactivity for the oxidation of BPA compared to the unmodified pencil graphite electrode. Under optimized conditions, the sensor has a linear response to BPA in the 1.0 to 400 μM concentration range, with a limit of detection of 10 nM (at S/N = 3). The modified electrode also has a remarkably stable response, and up to 95 injections are possible with a relative standard deviation of 4.2% at 100 μM of BPA. Recoveries range from 86 to 102% for boiling water spiked with BPA from four brands of baby bottles. (author)

  5. Electrochemical Preparation of a Molecularly Imprinted Polypyrrole-modified Pencil Graphite Electrode for Determination of Ascorbic Acid

    Directory of Open Access Journals (Sweden)

    Yücel Sahin

    2008-09-01

    A molecularly imprinted polymer (MIP) polypyrrole (PPy)-based film was fabricated for the determination of ascorbic acid. The film was prepared by incorporation of a template molecule (ascorbic acid) during the electropolymerization of pyrrole onto a pencil graphite electrode (PGE) in aqueous solution using a cyclic voltammetry method. The performance of the imprinted and non-imprinted (NIP) films was evaluated by differential pulse voltammetry (DPV). The effects of pH, monomer and template concentrations, electropolymerization cycles and interferents on the performance of the MIP electrode were investigated and optimized. The molecularly imprinted film exhibited a high selectivity and sensitivity toward ascorbic acid. The DPV peak current showed a linear dependence on the ascorbic acid concentration and a linear calibration curve was obtained in the range of 0.25 to 7.0 mM of ascorbic acid with a correlation coefficient of 0.9946. The detection limit (3σ) was determined as 7.4×10⁻⁵ M (S/N=3). The molecularly imprinted polypyrrole-modified pencil graphite electrode showed a stable and reproducible response, without any influence of interferents commonly existing in pharmaceutical samples. The proposed method is simple and quick. The PPy electrodes have a low response time, good mechanical stability, and are disposable and simple to construct.

  6. Correction of experimental photon pencil-beams for the effects of non-uniform and non-parallel measurement conditions

    International Nuclear Information System (INIS)

    Ceberg, Crister P.; Bjaerngard, Bengt E.

    1995-01-01

    An approximate experimental determination of photon pencil-beams can be based on the reciprocity theorem. The scatter part of the pencil-beam is then essentially the derivative with respect to the field radius of measured scatter-to-primary ratios in circular fields. Obtained in this way, however, the pencil-beam implicitly carries the influence from the lateral fluence and beam quality variations of the incident photons, as well as the effects of the divergence of the beam. In this work we show how these effects can be corrected for. The procedure was to calculate scatter-to-primary ratios using an analytical expression for the pencil-beam. By disregarding one by one the effects of the divergence and the fluence and beam quality variations, the influence of these effects was separated and quantified. For instance, for a 6 MV beam of 20 × 20 cm² field size, at 20 cm depth and a source distance of 100 cm, the total effect was 3.9%; 2.0% was due to the non-uniform incident profile, 1.0% due to the non-uniform beam quality, and 0.9% due to the divergence of the beam. At a source distance of 400 cm, all these effects were much lower, adding up to a total of 0.3%. Using calculated correction factors like these, measured scatter-to-primary ratios were then stripped from the effects of non-uniform and non-parallel measurement conditions, and the scatter part of the pencil-beam was determined using the reciprocity theorem without approximations

  7. Matrix method for two-dimensional waveguide mode solution

    Science.gov (United States)

    Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee

    2018-05-01

    In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even composed of numerous layers, is essentially a multilayer stack and the transmission through the stack can be analysed using the transfer matrix theory. The result is a transfer matrix with four complex value elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer, there is only propagation and absorption; at each boundary, only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve modes supported by the multilayer step-index dielectric waveguide, slot waveguide, gradient-index waveguide and various plasmonic waveguides. The results indicate the transfer matrix method is effective for 2D waveguide mode solution in general.
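
    A minimal NumPy sketch of the mode-search procedure described above for the simplest case of a three-layer slab: build the 2 × 2 transfer matrix relating the (A, B) amplitudes in the substrate to those in the cladding, and look for effective indices where the D element (here M[1, 1]) changes sign. The indices, thickness, and wavelength are illustrative values, not taken from the paper.

```python
import numpy as np

def basis(k, x):
    """Matrix mapping the (A, B) amplitudes of E(x) = A e^{ikx} + B e^{-ikx}
    to the continuous quantities (E, dE/dx) evaluated at position x."""
    return np.array([[np.exp(1j * k * x), np.exp(-1j * k * x)],
                     [k * np.exp(1j * k * x), -k * np.exp(-1j * k * x)]])

def transfer_matrix(n_eff, k0, n_sub, n_core, n_clad, d):
    """2x2 matrix M with (A, B)_cladding = M (A, B)_substrate for a TE slab mode;
    a guided mode corresponds to the 'D' element M[1, 1] vanishing."""
    k_s = k0 * np.sqrt(complex(n_sub**2 - n_eff**2))
    k_f = k0 * np.sqrt(complex(n_core**2 - n_eff**2))
    k_c = k0 * np.sqrt(complex(n_clad**2 - n_eff**2))
    return (np.linalg.inv(basis(k_c, d)) @ basis(k_f, d)
            @ np.linalg.inv(basis(k_f, 0.0)) @ basis(k_s, 0.0))

# illustrative slab: high-index core on low-index claddings, lengths in micrometres
lam, d = 1.55, 0.6
k0 = 2 * np.pi / lam
n_sub, n_core, n_clad = 1.444, 3.2, 1.444
grid = np.linspace(max(n_sub, n_clad) + 1e-6, n_core - 1e-6, 4000)
D = np.array([transfer_matrix(n, k0, n_sub, n_core, n_clad, d)[1, 1].real for n in grid])
roots = grid[:-1][np.sign(D[:-1]) != np.sign(D[1:])]   # bracket sign changes of D
print("approximate TE effective indices:", roots)
```

    For a lossless symmetric slab this condition reduces to the familiar dispersion relation tan(κd) = 2κγ/(κ² − γ²), which is a useful sanity check on the matrix construction.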

  8. Comparison of Paper-and-Pencil versus Web Administration of the Youth Risk Behavior Survey (YRBS): Risk Behavior Prevalence Estimates

    Science.gov (United States)

    Eaton, Danice K.; Brener, Nancy D.; Kann, Laura; Denniston, Maxine M.; McManus, Tim; Kyle, Tonja M.; Roberts, Alice M.; Flint, Katherine H.; Ross, James G.

    2010-01-01

    The authors examined whether paper-and-pencil and Web surveys administered in the school setting yield equivalent risk behavior prevalence estimates. Data were from a methods study conducted by the Centers for Disease Control and Prevention (CDC) in spring 2008. Intact classes of 9th- or 10th-grade students were assigned randomly to complete a…

  9. Theoretical treatment of molecular photoionization based on the R-matrix method

    International Nuclear Information System (INIS)

    Tashiro, Motomichi

    2012-01-01

    The R-matrix method was implemented to treat the molecular photoionization problem based on the UK R-matrix codes. The method was formulated to treat the photoionization process long ago; however, its application has been mostly limited to photoionization of atoms. Applications of the method to valence photoionization as well as inner-shell photoionization processes will be presented.

  10. Solution-processed nanocrystalline PbS on paper substrate with pencil traced electrodes as visible photodetector

    Science.gov (United States)

    Vankhade, Dhaval; Chaudhuri, Tapas K.

    2018-04-01

    A paper-based PbS photodetector sensitive in the visible spectrum is reported. Nanocrystalline PbS-on-paper devices are fabricated by a spin coating method on white paper (300 GSM) from a methanolic precursor solution. Photodetector cells of gap 0.2 cm and length 0.5 cm are prepared by drawing contacts with a monolithic Cretacolor 8B pencil. X-ray diffraction confirmed the deposition of nanocrystalline PbS films with 14 nm crystallites. SEM showed uniform coating of the nanocrystalline PbS thin films on the cellulose fibres of the paper, which have an average thickness of 10 µm. The linear J-V characteristics in the dark and under illumination, measured using the graphite traces on nanocrystalline PbS-on-paper, show good ohmic contact. The resistivity of the pencil trace is 30 Ω·cm. Spectral response measurements of the photodetector reveal excellent sensitivity from 400 to 700 nm with a peak at 550 nm. The best responsivity and detectivity are 0.7 A/W and 1.4 × 10¹² Jones, respectively. These paper-based low-cost photodetector devices show fast photoresponse and recovery without baseline deviation.

  11. Fast pencil beam dose calculation for proton therapy using a double-Gaussian beam model

    Directory of Open Access Journals (Sweden)

    Joakim eda Silva

    2015-12-01

    The highly conformal dose distributions produced by scanned proton pencil beams are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a pencil beam algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such pencil beam algorithm for proton therapy running on a GPU. We employ two different parametrizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of pencil beams in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included whilst prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Further, the calculation time is relatively unaffected by the parametrization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
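
    As a sketch of the double-Gaussian lateral model referred to above: the radial dose is a weighted sum of a narrow core Gaussian and a wide halo Gaussian, with the halo weight and the two widths treated here as hypothetical fit parameters (the paper obtains them from literature data or from fits to Monte Carlo simulations).

```python
import numpy as np

def lateral_dose(r, sigma_core, sigma_halo, w_halo):
    """Radial pencil-beam dose as a weighted sum of two normalised 2-D Gaussians:
    a narrow primary core and a wide nuclear halo carrying the fraction w_halo
    of the dose. All three parameters are hypothetical fit values here."""
    g_core = np.exp(-r**2 / (2 * sigma_core**2)) / (2 * np.pi * sigma_core**2)
    g_halo = np.exp(-r**2 / (2 * sigma_halo**2)) / (2 * np.pi * sigma_halo**2)
    return (1.0 - w_halo) * g_core + w_halo * g_halo

# integrating lateral_dose(r) * 2*pi*r over all r gives 1, so core and halo split
# the integral dose in proportions (1 - w_halo) and w_halo
r = np.linspace(0.0, 50.0, 500)                      # mm, illustrative grid
profile = lateral_dose(r, sigma_core=4.0, sigma_halo=15.0, w_halo=0.1)
```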

  12. The role of a microDiamond detector in the dosimetry of proton pencil beams

    Energy Technology Data Exchange (ETDEWEB)

    Goma, Carles [Paul Scherrer Institute, Villigen (Switzerland). Centre for Proton Therapy; Swiss Federal Institute of Technology Zurich (Switzerland). Dept. of Physics; Marinelli, Marco; Verona-Rinati, Gianluca [Roma Univ. ' ' Tor Vergata' ' (Italy). Dipt. di Ingegneria Industriale; INFN, Roma (Italy); Safai, Sairos [Paul Scherrer Institute, Villigen (Switzerland). Centre for Proton Therapy; Wuerfel, Jan [PTW-Freiburg, Freiburg (Germany)

    2016-05-01

    In this work, the performance of a microDiamond detector in a scanned proton beam is studied and its potential role in the dosimetric characterization of proton pencil beams is assessed. The linearity of the detector response with the absorbed dose and the dependence on the dose rate were tested. The depth-dose curve and the lateral dose profiles of a proton pencil beam were measured and compared to reference data. The feasibility of calibrating the beam monitor chamber with a microDiamond detector was also studied. It was found that the detector reading is linear with the absorbed dose to water (down to a few cGy) and the detector response is independent of both the dose rate (up to a few Gy/s) and the proton beam energy (within the whole clinically relevant energy range). The detector showed a good performance in depth-dose curve and lateral dose profile measurements, and it might even be used to calibrate the beam monitor chambers, provided it is cross-calibrated against a reference ionization chamber. In conclusion, the microDiamond detector proved capable of performing an accurate dosimetric characterization of proton pencil beams.

  13. Modeling of beam customization devices in the pencil-beam splitting algorithm for heavy charged particle radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kanematsu, Nobuyuki, E-mail: nkanemat@nirs.go.jp [Department of Accelerator and Medical Physics, Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan); Department of Quantum Science and Energy Engineering, School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-8579 (Japan)

    2011-03-07

    A broad-beam-delivery system for radiotherapy with protons or ions often employs multiple collimators and a range-compensating filter, which offer complex and potentially useful beam customization. It is however difficult for conventional pencil-beam algorithms to deal with fine structures of these devices due to beam-size growth during transport. This study aims to avoid the difficulty with a novel computational model. The pencil beams are initially defined at the range-compensating filter with angular-acceptance correction for upstream collimation followed by stopping and scattering. They are individually transported with possible splitting near the aperture edge of a downstream collimator to form a sharp field edge. The dose distribution for a carbon-ion beam was calculated and compared with existing experimental data. The penumbra sizes of various collimator edges agreed between them to a submillimeter level. This beam-customization model will be used in the greater framework of the pencil-beam splitting algorithm for accurate and efficient patient dose calculation.

  14. Modeling of beam customization devices in the pencil-beam splitting algorithm for heavy charged particle radiotherapy.

    Science.gov (United States)

    Kanematsu, Nobuyuki

    2011-03-07

    A broad-beam-delivery system for radiotherapy with protons or ions often employs multiple collimators and a range-compensating filter, which offer complex and potentially useful beam customization. It is however difficult for conventional pencil-beam algorithms to deal with fine structures of these devices due to beam-size growth during transport. This study aims to avoid the difficulty with a novel computational model. The pencil beams are initially defined at the range-compensating filter with angular-acceptance correction for upstream collimation followed by stopping and scattering. They are individually transported with possible splitting near the aperture edge of a downstream collimator to form a sharp field edge. The dose distribution for a carbon-ion beam was calculated and compared with existing experimental data. The penumbra sizes of various collimator edges agreed between them to a submillimeter level. This beam-customization model will be used in the greater framework of the pencil-beam splitting algorithm for accurate and efficient patient dose calculation.

  15. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows that these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally most efficient. Due to such advantages, the proposed method is useful for studying kinematically peculiar properties such as singularity problems. (author)
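
    To illustrate a velocity-based construction of the Jacobian in code (a generic geometric-Jacobian sketch, not the article's recursive symbolic algorithm), each revolute joint contributes the column (z_i × (p_e − o_i), z_i); the example arm at the end is purely illustrative.

```python
import numpy as np

def geometric_jacobian(axes, origins, p_end):
    """Velocity-based (geometric) Jacobian of a serial arm with revolute joints.
    axes[i]    : unit rotation axis of joint i expressed in the base frame
    origins[i] : position of joint i's origin in the base frame
    p_end      : end-effector position in the base frame
    Column i is (z_i x (p_end - o_i), z_i), mapping joint rates to the linear
    and angular end-effector velocity."""
    p_end = np.asarray(p_end, float)
    cols = []
    for z, o in zip(axes, origins):
        z, o = np.asarray(z, float), np.asarray(o, float)
        cols.append(np.concatenate([np.cross(z, p_end - o), z]))
    return np.column_stack(cols)

# planar 2-link example (unit link lengths, both joints at zero angle)
J = geometric_jacobian(axes=[[0, 0, 1], [0, 0, 1]],
                       origins=[[0, 0, 0], [1, 0, 0]],
                       p_end=[2, 0, 0])
```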

  16. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices

    International Nuclear Information System (INIS)

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-01-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it not practical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan
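
    A minimal sketch of the sampling idea: low-dose matrix entries are kept only with some probability and rescaled by its reciprocal, so the expected dose per voxel is unchanged. The fixed keep-probability used here is an illustrative assumption; the paper compares several probability distributions.

```python
import numpy as np

def sample_dose_matrix(D, threshold, p_keep, rng=None):
    """Unbiased sparsification of a dose calculation matrix D (voxels x beamlets).
    Entries at or above `threshold` are always kept; smaller entries are kept
    with probability `p_keep` and rescaled by 1 / p_keep, so the expected dose
    per voxel is preserved while most low-dose entries are dropped."""
    rng = np.random.default_rng() if rng is None else rng
    keep_high = D >= threshold
    keep_low = (~keep_high) & (rng.random(D.shape) < p_keep)
    return np.where(keep_high, D, 0.0) + np.where(keep_low, D / p_keep, 0.0)

# during optimisation the sparse matrix replaces the full one: dose = S @ fluence
```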

  17. The response-matrix based AFEN method for the hexagonal geometry

    International Nuclear Information System (INIS)

    Noh, Jae Man; Kim, Keung Koo; Zee, Sung Quun; Joo, Hyung Kook; Cho, Byng Oh; Jeong, Hyung Guk; Cho, Jin Young

    1998-03-01

    The analytic function expansion nodal (AFEN) method, developed to overcome the limitations caused by the transverse integration, has been successfully used to predict the neutron behavior in hexagonal cores as well as rectangular cores. In a hexagonal node, the transverse leakage resulting from the transverse integration has some singular terms such as delta functions and step functions near the node center line. In most nodal methods using the transverse integration, the accuracy of the nodal method is degraded because the transverse leakage is approximated as a smooth function across the node center line by ignoring the singular terms. However, the AFEN method, in which there is no transverse leakage term in deriving the nodal coupling equations, keeps good accuracy for hexagonal nodes. In this study, the AFEN method, which shows excellent accuracy in hexagonal core analyses, is reformulated in a response matrix form. This form of the AFEN method can be implemented easily in nodal codes based on the response matrix method. Therefore, the coarse mesh rebalance (CMR) acceleration technique, which is one of the main advantages of the response matrix method, can be utilized for the AFEN method. The response matrix based AFEN method has been successfully implemented into the MASTER code and its accuracy and computational efficiency were examined by analyzing the two- and three-dimensional benchmark problems of VVER-440. Based on the results, it can be concluded that the newly formulated AFEN method accurately predicts the assembly powers (within 0.2% average error) as well as the effective multiplication factor (within 20 pcm error). In addition, the CMR acceleration technique is quite efficient in reducing the computation time of the AFEN method by 8 to 10 times. (author). 22 refs., 1 tab., 4 figs

  18. Analytical techniques for instrument design - matrix methods

    International Nuclear Information System (INIS)

    Robinson, R.A.

    1997-01-01

    We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes e.g. from (Δk_I, Δk_F) to (ΔE, ΔQ and 2 dummy variables), integration of one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We will argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We will also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question

  19. Matrix-based system reliability method and applications to bridge networks

    International Nuclear Information System (INIS)

    Kang, W.-H.; Song Junho; Gardoni, Paolo

    2008-01-01

    Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or the statistical dependence thereof, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated by use of the predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that there is at least one city disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information
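
    A toy sketch of the matrix-based formulation for the classic five-edge bridge network: enumerate the mutually exclusive component states, build the event vector c and the state probability vector p, and take the inner product. Independent component failures and the listed failure probabilities are illustrative assumptions; the paper uses fragility-based probabilities and handles incomplete dependence information via linear-programming bounds.

```python
import itertools
import numpy as np

# classic five-edge bridge network between source s and sink t:
# edge 0: s-a, edge 1: s-b, edge 2: a-b (the bridge), edge 3: a-t, edge 4: b-t
PATHS = [{0, 3}, {1, 4}, {0, 2, 4}, {1, 2, 3}]        # minimal s-t paths
p_fail = np.array([0.10, 0.10, 0.05, 0.10, 0.10])     # illustrative failure probabilities

c, p = [], []                                          # event vector and state probabilities
for state in itertools.product([0, 1], repeat=5):      # 1 = component failed
    failed = {i for i, s in enumerate(state) if s}
    connected = any(path.isdisjoint(failed) for path in PATHS)
    c.append(0.0 if connected else 1.0)                # system event: s and t disconnected
    p.append(np.prod([p_fail[i] if s else 1.0 - p_fail[i] for i, s in enumerate(state)]))

print("P(disconnection) =", float(np.dot(c, p)))
```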

  20. Reagent-Free Quantification of Aqueous Free Chlorine via Electrical Readout of Colorimetrically Functionalized Pencil Lines.

    Science.gov (United States)

    Mohtasebi, Amirmasoud; Broomfield, Andrew D; Chowdhury, Tanzina; Selvaganapathy, P Ravi; Kruse, Peter

    2017-06-21

    Colorimetric methods are commonly used to quantify free chlorine in drinking water. However, these methods are not suitable for reagent-free, continuous, and autonomous applications. Here, we demonstrate how functionalization of a pencil-drawn film with phenyl-capped aniline tetramer (PCAT) can be used for quantitative electric readout of free chlorine concentrations. The functionalized film can be implemented in a simple fluidic device for continuous sensing of aqueous free chlorine concentrations. The sensor is selective to free chlorine and can undergo a reagent-free reset for further measurements. Our sensor is superior to electrochemical methods in that it does not require a reference electrode. It is capable of quantification of free chlorine in the range of 0.1-12 ppm with higher precision than colorimetric (absorptivity) methods. The interactions of PCAT with the pencil-drawn film upon exposure to hypochlorite were characterized spectroscopically. A previously reported detection mechanism relied on the measurement of a baseline shift to quantify free chlorine concentrations. The new method demonstrated here measures initial spike size upon exposure to free chlorine. It relies on a fast charge built up on the sensor film due to intermittent PCAT salt formation. It has the advantage of being significantly faster than the measurement of baseline shift, but it cannot be used to detect gradual changes in free chlorine concentration without the use of frequent reset pulses. The stability of PCAT was examined in the presence of free chlorine as a function of pH. While most ions commonly present in drinking water do not interfere with the free chlorine detection, other oxidants may contribute to the signal. Our sensor is easy to fabricate and robust, operates reagent-free, and has very low power requirements and is thus suitable for remote deployment.

  1. Transfer matrix method for four-flux radiative transfer.

    Science.gov (United States)

    Slovick, Brian; Flom, Zachary; Zipp, Lucas; Krishnamurthy, Srini

    2017-07-20

    We develop a transfer matrix method for four-flux radiative transfer, which is ideally suited for studying transport through multiple scattering layers. The model predicts the specular and diffuse reflection and transmission of multilayer composite films, including interface reflections, for diffuse or collimated incidence. For spherical particles in the diffusion approximation, we derive closed-form expressions for the matrix coefficients and show remarkable agreement with numerical Monte Carlo simulations for a range of absorption values and film thicknesses, and for an example multilayer slab.
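
    The generic mechanics of a transfer-matrix calculation can be illustrated with a hedged two-flux sketch; the 2×2 layer matrices and the reflectance/transmittance extraction below are a simplified stand-in for the paper's four-flux (specular plus diffuse) matrices, whose closed-form coefficients are derived in the article.

        import numpy as np

        def layer_matrix(t, r):
            # Transfer matrix of a single symmetric layer with transmittance t and
            # reflectance r, acting on the (forward, backward) flux pair.
            return (1.0 / t) * np.array([[t**2 - r**2, r],
                                         [-r, 1.0]])

        M = np.eye(2)
        for t, r in [(0.9, 0.05), (0.8, 0.1), (0.95, 0.02)]:
            M = layer_matrix(t, r) @ M          # stack the layers by multiplication

        # Boundary conditions: unit forward flux in, no backward flux beyond the stack.
        R_stack = -M[1, 0] / M[1, 1]                        # stack reflectance
        T_stack = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]     # stack transmittance
        print(R_stack, T_stack)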

  2. Comparison of experimental methods for estimating matrix diffusion coefficients for contaminant transport modeling

    Science.gov (United States)

    Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.

    2018-02-01

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  3. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
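
    The pencil-kernel idea, computing the dose in a plane by convolving the incident fluence with a laterally spread kernel, can be sketched as follows; the Gaussian kernel width, grid and field are illustrative stand-ins and not the Helax-TMS implementation or its Fermi-Eyges parametrization.

        import numpy as np
        from scipy.signal import fftconvolve

        def dose_plane(fluence, sigma_mm, grid_mm=1.0):
            # Gaussian lateral kernel standing in for the multiple-scattering spread
            # at one depth; the width would normally come from the scattering model.
            ax = np.arange(-40, 41) * grid_mm
            xx, yy = np.meshgrid(ax, ax)
            kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_mm**2))
            kernel /= kernel.sum()
            # Pencil-kernel step: dose in the plane = fluence convolved with the kernel.
            return fftconvolve(fluence, kernel, mode="same")

        field = np.zeros((101, 101))
        field[30:70, 30:70] = 1.0                     # a simple square field
        print(dose_plane(field, sigma_mm=6.0).max())  # central dose approaches 1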

  4. Simultaneous quantification of arginine, alanine, methionine and cysteine amino acids in supplements using a novel bioelectro-nanosensor based on CdSe quantum dot/modified carbon nanotube hollow fiber pencil graphite electrode via Taguchi method.

    Science.gov (United States)

    Hooshmand, Sara; Es'haghi, Zarrin

    2017-11-30

    Four amino acids have been simultaneously determined at a CdSe quantum dot-modified/multi-walled carbon nanotube hollow fiber pencil graphite electrode in different bodybuilding supplements. CdSe quantum dots were synthesized and applied to construct a modified carbon nanotube hollow fiber pencil graphite electrode. FT-IR, TEM, XRD and EDAX methods were applied for characterization of the synthesized CdSe QDs. The electro-oxidation of arginine (Arg), alanine (Ala), methionine (Met) and cysteine (Cys) at the surface of the modified electrode was studied. The Taguchi method was then applied using MINITAB 17 software to find the optimum conditions for the amino acid determination. Under the optimized conditions, the differential pulse (DP) voltammetric peak currents of Arg, Ala, Met and Cys increased linearly with their concentrations in the range of 0.287-33670 μM, and detection limits of 0.081, 0.158, 0.094 and 0.116 μM were obtained for them, respectively. Satisfactory results were achieved for the calibration and validation sets. The prepared modified electrode provides very good resolution between the voltammetric peaks of the four amino acids, which makes it suitable for the detection of each in the presence of the others in real samples. Copyright © 2017. Published by Elsevier B.V.

  5. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  6. Response Matrix Method Development Program at Savannah River Laboratory

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1976-01-01

    The Response Matrix Method Development Program at Savannah River Laboratory (SRL) has concentrated on the development of an effective system of computer codes for the analysis of Savannah River Plant (SRP) reactors. The most significant contribution of this program to date has been the verification of the accuracy of diffusion theory codes as used for routine analysis of SRP reactor operation. This paper documents the two steps carried out in achieving this verification: confirmation of the accuracy of the response matrix technique through comparison with experiment and Monte Carlo calculations; and establishment of agreement between diffusion theory and response matrix codes in situations which realistically approximate actual operating conditions

  7. Device for the automatic evaluation of pencil dosimeters

    International Nuclear Information System (INIS)

    Schallopp, B.

    1976-01-01

    In connection with the automation of radiation protection in nuclear power plants, an automatic reading device has been developed for the direct input of the readings of pencil dosimeters into a computer. Voltage measurements would be simple but are excluded because, for operational reasons, the internal electrode of the dosimeter may not be touched. This paper describes an optical/electronic conversion device in which the reading of the dosimeter is projected onto a Vidicon, scanned, and converted into a digital signal for output to the computer. (orig.) [de

  8. Effect of pencil grasp on the speed and legibility of handwriting after a 10-minute copy task in Grade 4 children.

    Science.gov (United States)

    Schwellnus, Heidi; Carnahan, Heather; Kushki, Azadeh; Polatajko, Helene; Missiuna, Cheryl; Chau, Tom

    2012-06-01

    To investigate the impact of common pencil grasp patterns on the speed and legibility of handwriting after a 10-minute copy task, intended to induce muscle fatigue, in typically developing children and in those non-proficient in handwriting. A total of 120 Grade 4 students completed a standardised handwriting assessment before and after a 10-minute copy task. The students indicated the perceived difficulty of the handwriting task at baseline and after 10 minutes. The students also completed a self-report questionnaire regarding their handwriting proficiency upon completion. The majority of the students rated higher effort after the 10-minute copy task than at baseline (rank sum: P = 0.00001). The effort ratings were similar for the different grasp patterns (multiple linear regression: F = 0.37, P = 0.895). For both typically developing children and those with handwriting issues, the legibility of the writing samples decreased after the 10-minute copy task but the speed of writing increased. CONCLUSIONS AND SIGNIFICANCE OF THE STUDY: The quality of the handwriting decreased after the 10-minute copy task; however, there was no difference in the quality or speed scores among the different pencil grasps before and after the copy task. The dynamic tripod pencil grasp did not offer any advantage over the lateral tripod or the dynamic or lateral quadrupod pencil grasps in terms of quality of handwriting after a 10-minute copy task. These four pencil grasp patterns performed equivalently. Our findings question the practice of having students adopt the dynamic tripod pencil grasp. © 2012 The Authors Australian Occupational Therapy Journal © 2012 Occupational Therapy Australia.

  9. Iterative approach as alternative to S-matrix in modal methods

    Science.gov (United States)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring of the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to discuss the validity and potential of the proposed approaches.

  10. A New Method of Creating Technology/Function Matrix for Systematic Innovation without Expert

    Directory of Open Access Journals (Sweden)

    Tien-Yuan Cheng

    2012-02-01

    The technology/function matrix is composed of specific technologies and functions, and through it we can identify which technologies and functions offer opportunities for product or technology innovation. However, the technology/function matrix is very difficult to create, because the patents that need to be read, analyzed and categorized into it usually number in the hundreds or thousands. In this research, I propose a method to create a technology/function matrix that requires only patent searches, without reading and analyzing the patents. Through the proposed method anyone can create a technology/function matrix in a short time without experts' help, even when many thousands of patents would otherwise need to be read and analyzed.

  11. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
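
    To make the latent-variable completion idea concrete, the sketch below fits a linear decoder and the latent inputs to the observed entries only; DMF as proposed in the paper replaces this linear map with a multilayer neural network, so the data, sizes and learning rate here are illustrative assumptions only.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))  # low-rank "truth"
        mask = rng.random(X.shape) < 0.5                                 # observed entries

        r, lr = 2, 0.01
        Z = 0.1 * rng.standard_normal((X.shape[0], r))   # latent inputs (optimized)
        W = 0.1 * rng.standard_normal((r, X.shape[1]))   # linear decoder weights (optimized)

        for _ in range(5000):
            E = (Z @ W - X) * mask       # reconstruction error on observed entries only
            Z -= lr * E @ W.T            # gradient step on the latent variables ...
            W -= lr * Z.T @ E            # ... and on the decoder parameters

        X_hat = Z @ W                    # missing entries are read off the reconstruction
        print(np.abs((X_hat - X)[~mask]).mean())   # small error on the unobserved entries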

  12. Nucleon matrix elements using the variational method in lattice QCD

    International Nuclear Information System (INIS)

    Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ., SA

    2016-06-01

    The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlator functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current g_A, the scalar current g_S and the quark momentum fraction ⟨x⟩ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

  13. A New Pseudoinverse Matrix Method For Balancing Chemical Equations And Their Stability

    International Nuclear Information System (INIS)

    Risteski, Ice B.

    2008-01-01

    In this work a new pseudoinverse matrix method for balancing chemical equations is given. The offered method is founded on the solution of a Diophantine matrix equation using a Moore-Penrose pseudoinverse matrix. The method has been tested on several typical chemical equations and found to be very successful for all equations in our extensive balancing research. This method, which works successfully without any limitations, also has the capability to determine the feasibility of a new chemical reaction, and if it is feasible, then it will balance the equation. The chemical equations treated here possess atoms with fractional oxidation numbers. Necessary and sufficient criteria for the stability of chemical equations, based on the stability of their extended matrices, are also introduced
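
    The basic pseudoinverse step can be illustrated as follows: the stoichiometric coefficients lie in the null space of the element-by-species matrix, and the Moore-Penrose pseudoinverse yields a projector onto that null space. The reaction, matrix and scaling below are a minimal sketch, not the author's full Diophantine treatment or the stability criteria.

        import numpy as np

        # Hypothetical example: balance  a*H2 + b*O2 -> c*H2O.
        # Rows = elements (H, O); columns = species, with product columns negated.
        A = np.array([[2.0, 0.0, -2.0],    # H atoms
                      [0.0, 2.0, -1.0]])   # O atoms

        # Project an arbitrary vector onto the null space of A using the
        # Moore-Penrose pseudoinverse:  P_null = I - pinv(A) @ A.
        P_null = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A
        x = P_null @ np.ones(A.shape[1])

        # Scale to the smallest positive integers (the null space is
        # one-dimensional here, as it is for a balanceable reaction).
        coeffs = x / np.abs(x[np.abs(x) > 1e-12]).min()
        print(np.round(coeffs, 6))   # -> [2. 1. 2.], i.e. 2 H2 + O2 -> 2 H2O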

  14. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to selected direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method errors of their own. All calculations were performed in floating-point interval arithmetic.

  15. Successive Over Relaxation Method Which Uses Matrix Norms for ...

    African Journals Online (AJOL)

    An algorithm for S.O.R. functional iteration is proposed which uses matrix norms for the Jacobi iteration matrices, rather than the usual power method, and which is feasible in the Newton operator for the solution of nonlinear systems of equations. We modified the S.O.R. iterative method known as the Multiphase S.O.R. method for Newton ...
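
    A hedged sketch of the underlying idea, choosing the relaxation factor from a matrix-norm bound on the Jacobi iteration matrix instead of estimating its spectral radius with the power method, is given below; the test system and the particular norm are illustrative, and this is not the Multiphase S.O.R./Newton algorithm of the paper.

        import numpy as np

        def sor_solve(A, b, tol=1e-10, max_iter=500):
            D = np.diag(np.diag(A))
            B_J = np.eye(len(b)) - np.linalg.solve(D, A)   # Jacobi iteration matrix
            rho = np.linalg.norm(B_J, np.inf)              # norm as a cheap bound on the spectral radius
            omega = 2.0 / (1.0 + np.sqrt(1.0 - min(rho, 0.999)**2))   # guard against rho >= 1
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
                    x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    break
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([2.0, 4.0, 10.0])
        print(sor_solve(A, b))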

  16. On a novel matrix method for three-dimensional photoelasticity

    International Nuclear Information System (INIS)

    Theocaris, P.S.; Gdoutos, E.E.

    1978-01-01

    A non-destructive method for the photoelastic determination of three-dimensional stress distributions, based on the Mueller and Jones calculi, is developed. The differential equations satisfied by the Stokes and Jones vectors, when a polarized light beam passes through a photoelastic model, presenting rotation of the secondary principal stress directions, are established in matrix form. The Peano-Baker method is used for the solution of these differential equations in a matrix series form, establishing the elements of the Mueller and Jones matrices of the photoelastic model. These matrices are experimentally determined by using different wavelengths in conjunction with Jones' 'equivalence theorem'. The Neumann equations are immediately deduced from the above-mentioned differential equations. (orig.) [de

  17. Nonlinear response matrix methods for radiative transfer

    International Nuclear Information System (INIS)

    Miller, W.F. Jr.; Lewis, E.E.

    1987-01-01

    A nonlinear response matrix formalism is presented for the solution of time-dependent radiative transfer problems. The essential feature of the method is that within each computational cell the temperature is calculated in response to the incoming photons from all frequency groups. Thus the updating of the temperature distribution is placed within the iterative solution of the space-angle transport problem, instead of being placed outside of it. The method is formulated for both grey and multifrequency problems and applied in slab geometry. The method is compared to the more conventional source iteration technique. 7 refs., 1 fig., 4 tabs

  18. Measuring methods of matrix diffusion

    International Nuclear Information System (INIS)

    Muurinen, A.; Valkiainen, M.

    1988-03-01

    In Finland the spent nuclear fuel is planned to be disposed of at large depths in crystalline bedrock. The radionuclides which are dissolved in the groundwater may be able to diffuse into the micropores of the porous rock matrix and thus be withdrawn from the flowing water in the fractures. This phenomenon is called matrix diffusion. A review over matrix diffusion is presented in the study. The main interest is directed to the diffusion of non-sorbing species. The review covers diffusion experiments and measurements of porosity, pore size, specific surface area and water permeability

  19. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
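
    As a worked illustration of what a first integral is (though not of the integrating factor matrix method itself), the sketch below numerically integrates the classic two-dimensional Lotka-Volterra system without constant terms and checks that its well-known first integral stays constant along trajectories; the parameter values are arbitrary.

        import numpy as np
        from scipy.integrate import solve_ivp

        a, b, c, d = 1.0, 0.5, 0.8, 0.4       # arbitrary positive parameters

        def rhs(t, z):
            x, y = z
            return [x * (a - b * y), y * (-c + d * x)]

        def H(x, y):
            # Known first integral of the classic Lotka-Volterra system.
            return d * x - c * np.log(x) + b * y - a * np.log(y)

        sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 1.0], rtol=1e-9, atol=1e-9)
        x, y = sol.y
        print(H(x, y).std())                   # ~0: H is conserved along the trajectory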

  20. J-matrix method of scattering in one dimension: The nonrelativistic theory

    International Nuclear Information System (INIS)

    Alhaidari, A.D.; Bahlouli, H.; Abdelmonem, M.S.

    2009-01-01

    We formulate a theory of nonrelativistic scattering in one dimension based on the J-matrix method. The scattering potential is assumed to have a finite range such that it is well represented by its matrix elements in a finite subset of a basis that supports a tridiagonal matrix representation for the reference wave operator. Contrary to our expectation, the 1D formulation reveals a rich and highly nontrivial structure compared to the 3D formulation. Examples are given to demonstrate the utility and accuracy of the method. It is hoped that this formulation constitutes a viable alternative to the classical treatment of 1D scattering problem and that it will help unveil new and interesting applications.

  1. Monte Carlo evaluation of a photon pencil kernel algorithm applied to fast neutron therapy treatment planning

    Science.gov (United States)

    Söderberg, Jonas; Alm Carlsson, Gudrun; Ahnesjö, Anders

    2003-10-01

    When dedicated software is lacking, treatment planning for fast neutron therapy is sometimes performed using dose calculation algorithms designed for photon beam therapy. In this work Monte Carlo derived neutron pencil kernels in water were parametrized using the photon dose algorithm implemented in the Nucletron TMS (treatment management system) treatment planning system. A rectangular fast-neutron fluence spectrum with energies 0-40 MeV (resembling a polyethylene filtered p(41)+Be spectrum) was used. Central axis depth doses and lateral dose distributions were calculated and compared with the corresponding dose distributions from Monte Carlo calculations for homogeneous water and heterogeneous slab phantoms. All absorbed doses were normalized to the reference dose at 10 cm depth for a field of radius 5.6 cm in a 30 × 40 × 20 cm³ water test phantom. Agreement to within 7% was found in both the lateral and the depth dose distributions. The deviations could be explained as due to differences in size between the test phantom and that used in deriving the pencil kernel (radius 200 cm, thickness 50 cm). In the heterogeneous phantom, the TMS, with a directly applied neutron pencil kernel, and Monte Carlo calculated absorbed doses agree approximately for muscle but show large deviations for media such as adipose or bone. For the latter media, agreement was substantially improved by correcting the absorbed doses calculated in TMS with the neutron kerma factor ratio and the stopping power ratio between tissue and water. The multipurpose Monte Carlo code FLUKA was used both in calculating the pencil kernel and in direct calculations of absorbed dose in the phantom.

  2. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
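
    Independently of which of the eight covariance estimators is used, the log-determinant itself is usually evaluated from a Cholesky factor to avoid overflow or underflow in high dimensions; the short sketch below shows this background step with an illustrative ridge-regularised sample covariance, and is not one of the estimators compared in the paper.

        import numpy as np

        def log_det(S):
            # Numerically stable log-determinant via a Cholesky factor; computing
            # det(S) directly overflows or underflows easily in high dimensions.
            L = np.linalg.cholesky(S)
            return 2.0 * np.sum(np.log(np.diag(L)))

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 50))
        S = np.cov(X, rowvar=False) + 0.1 * np.eye(50)   # illustrative ridge-regularised sample covariance
        print(log_det(S))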

  3. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  4. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  5. Feasibility of proton pencil beam scanning treatment of free-breathing lung cancer patients

    NARCIS (Netherlands)

    Jakobi, Annika; Perrin, Rosalind; Knopf, Antje; Richter, Christian

    BACKGROUND: The interplay effect might degrade the dose of pencil beam scanning proton therapy to a degree that free-breathing treatment might be impossible without further motion mitigation techniques, which complicate and prolong the treatment. We assessed whether treatment of free-breathing

  6. SU-E-T-321: The Effects of a Dynamic Collimation System On Proton Pencil Beams to Improve Lateral Tissue Sparing in Spot Scanned Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hill, P; Wang, D; Flynn, R; Hyer, D [University Of Iowa, Iowa City, IA (United States)

    2014-06-01

    Purpose: To evaluate the lateral beam penumbra in pencil beam scanning proton therapy delivered using a dynamic collimator device capable of trimming a portion of the primary beam in close proximity to the patient. Methods: Monte Carlo simulations of pencil beams were performed using MCNPX. Each simulation transported a 125 MeV proton pencil beam through a range shifter, past a collimator, and into a water phantom. Two parameters were varied among the simulations, the source beam size (sigma in air from 3 to 9 mm), and the position of the edge of the collimator (placed from 0 to 30 mm from the central axis of the beam). Proton flux was tallied at the phantom surface to determine the effective beam size for all combinations of source beam size and collimator edge position. Results: Quantifying beam size at the phantom surface provides a useful measure to compare performance among varying source beam sizes and collimation conditions. For a relatively large source beam size (9 mm) entering the range shifter, sigma at the surface was found to be 10 mm without collimation versus 4 mm with collimation. Additionally, sigma at the surface achievable with collimation was found to be smaller than for any uncollimated beam, even for very small source beam sizes. Finally, the lateral penumbra achievable with collimation was determined to be largely independent of the source beam size. Conclusion: Collimation can significantly reduce proton pencil beam lateral penumbra. Given the known dosimetric disadvantages resulting from large beam spot sizes, employing a dynamic collimation system can significantly improve lateral tissue sparing in spot-scanned dose distributions.

  7. Improvement of the Convergence of the Invariant Imbedding T-Matrix Method

    Science.gov (United States)

    Zhai, S.; Panetta, R. L.; Yang, P.

    2017-12-01

    The invariant imbedding T-matrix method (IITM) is based on an electromagnetic volume integral equation to compute the T-matrix of an arbitrary scattering particle. A free-space Green's function is chosen as the integral kernel and thus each source point is placed in an imaginary vacuum spherical shell extending from the center to that source point. The final T-matrix (of the largest circumscribing sphere) is obtained through an iterative relation that, layer by layer, computes the T-matrix from the particle center to the outermost shell. On each spherical shell surface, an integration of the product of the refractive index 𝜀(𝜃, 𝜑) and vector spherical harmonics must be performed, resulting in the so-called U-matrix, which directly leads to the T-matrix on the spherical surface. Our observations indicate that the matrix size and sparseness are determined by the particular refractive index function 𝜀(𝜃, 𝜑). If 𝜀(𝜃, 𝜑) is an analytic function on the surface, then the matrix elements resulting from the integration decay rapidly, leading to sparse matrix; if 𝜀(𝜃, 𝜑) is not (for example, contains jump discontinuities), then the matrix elements decay slowly, leading to a large dense matrix. The intersection between an irregular scatterer and each spherical shell can leave jump discontinuities in 𝜀(𝜃, 𝜑) distributed over the shell surface. The aforementioned feature is analogous to the Gibbs phenomenon appearing in the orthogonal expansion of non-smooth functions with Hermitian eigenfunctions (complex exponential, Legendre, Bessel,...) where poor convergence speed is a direct consequence of the slow decay rate of the expansion coefficients. Various methods have been developed to deal with this slow convergence in the presence of discontinuities. Among the different approaches the most practical one may be a spectral filter: a filter is applied on the

  8. Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Li Lei

    2015-04-01

    Based on coherent accumulation matrix reconstruction, a novel Direction Of Arrival (DOA) estimation decorrelation method for coherent signals using a small sample is proposed. First, the Signal to Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array of observed data. Then, according to the structural characteristics of the accumulated snapshot vector, the equivalent covariance matrix, whose dimension is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined just by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better by effectively avoiding aperture loss, with high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.

  9. Decomposition of spectra in EPR dosimetry using the matrix method

    International Nuclear Information System (INIS)

    Sholom, S.V.; Chumak, V.V.

    2003-01-01

    The matrix method of EPR spectra decomposition is developed and adapted for routine application in retrospective EPR dosimetry with teeth. According to this method, the initial EPR spectra are decomposed (using methods of matrix algebra) into several reference components (reference matrices) that are specific for each material. The proposed procedure has been tested on tooth enamel. The reference spectra were a spectrum of an empty sample tube and three standard signals of enamel (two at g=2.0045, both for the native signal, and one at g⊥=2.0018, g∥=1.9973 for the dosimetric signal). Values of the dosimetric signal obtained using the given method have been compared with data obtained by manual manipulation of spectra, and good agreement was observed. This suggests that the proposed method is well suited for application in routine EPR dosimetry
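
    In the simplest reading, decomposing a measured spectrum into reference components is a linear least-squares problem; the sketch below uses synthetic stand-ins for the tube, native and dosimetric reference spectra, so the shapes and weights are assumptions rather than real enamel data.

        import numpy as np

        # Hypothetical reference components (columns) and a synthetic measured
        # spectrum; the real method uses measured reference matrices.
        x = np.linspace(-5.0, 5.0, 400)
        refs = np.column_stack([
            np.exp(-x**2),                                   # stand-in native signal 1
            np.exp(-(x - 0.4)**2 / 0.5),                     # stand-in native signal 2
            -np.gradient(np.exp(-(x + 0.3)**2 / 0.3), x),    # stand-in dosimetric signal
        ])
        true_w = np.array([1.0, 0.5, 2.0])
        measured = refs @ true_w + 0.01 * np.random.default_rng(1).standard_normal(x.size)

        # Decomposition = least-squares solution of refs @ w ≈ measured.
        w, *_ = np.linalg.lstsq(refs, measured, rcond=None)
        print(w)    # ≈ [1.0, 0.5, 2.0]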

  10. Polarimetric imaging of turbid inhomogeneous slab media based on backscattering using a pencil beam for illumination: Monte Carlo simulation

    Science.gov (United States)

    Otsuki, Soichi

    2018-04-01

    Polarimetric imaging of absorbing, strongly scattering, or birefringent inclusions is investigated in a negligibly absorbing, moderately scattering, and isotropic slab medium. It was proved that the reduced effective scattering Mueller matrix is exactly calculated from experimental or simulated raw matrices even if the medium is anisotropic and/or heterogeneous, or the outgoing light beam exits obliquely to the normal of the slab surface. The calculation also gives a reasonable approximation of the reduced matrix using a light beam with a finite diameter for illumination. The reduced matrix was calculated using a Monte Carlo simulation and was factorized in two dimensions by the Lu-Chipman polar decomposition. The intensity of backscattered light shows clear and modestly clear differences for absorbing and strongly scattering inclusions, respectively, whereas it shows no difference for birefringent inclusions. Conversely, some polarization parameters, for example the selective depolarization coefficients, exhibit only a slight difference for the absorbing inclusions, whereas they show a clear difference for the strongly scattering or birefringent inclusions. Moreover, these quantities become larger as the difference in the optical properties of the inclusions relative to the surrounding medium increases. However, it is difficult to recognize inclusions buried deeper than 3 mm below the surface. Thus, the present technique can detect the approximate shape and size of these inclusions and, taking into account the depth at which they lie, estimate their optical properties. This study reveals the possibility of the polarization-sensitive imaging of turbid inhomogeneous media using a pencil beam for illumination.

  11. The time-dependent density matrix renormalisation group method

    Science.gov (United States)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

    Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method in the recent 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics for one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate the nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in pyrazine molecule.

  12. Standard test method for translaminar fracture toughness of laminated and pultruded polymer matrix composite materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 This test method covers the determination of translaminar fracture toughness, KTL, for laminated and pultruded polymer matrix composite materials of various ply orientations using test results from monotonically loaded notched specimens. 1.2 This test method is applicable to room temperature laboratory air environments. 1.3 Composite materials that can be tested by this test method are not limited by thickness or by type of polymer matrix or fiber, provided that the specimen sizes and the test results meet the requirements of this test method. This test method was developed primarily from test results of various carbon fiber – epoxy matrix laminates and from additional results of glass fiber – epoxy matrix, glass fiber-polyester matrix pultrusions and carbon fiber – bismaleimide matrix laminates (1-4, 6, 7). 1.4 A range of eccentrically loaded, single-edge-notch tension, ESE(T), specimen sizes with proportional planar dimensions is provided, but planar size may be variable and adjusted, with asso...

  13. Solubility on compact subsets for differential equations with real principal pencil of symbols

    International Nuclear Information System (INIS)

    Shananin, N A

    2006-01-01

    The central result is a theorem on the solubility on compact subsets for differential equations of quasiprincipal type with real principal pencil of symbols. The proof is based on the analysis of the microlocal structure of the singularities of solutions of equations in this class.

  14. A collocation finite element method with prior matrix condensation

    International Nuclear Information System (INIS)

    Sutcliffe, W.J.

    1977-01-01

    For thin shells with general loading, sixteen degrees of freedom have been used in a previous finite element solution procedure using a collocation method instead of the usual variationally based procedures. Although the number of elements required was relatively small, the final matrix for the simultaneous solution of all unknowns could nevertheless become large for a complex compound structure. The purpose of the present paper is to demonstrate a method of reducing the final matrix size, so allowing the solution of large structures with comparatively small computer storage requirements while retaining the accuracy given by high-order displacement functions. At a number of the collocation points the conditions are equilibrium conditions, which must be satisfied independently of the overall compatibility of forces and deflections for the complete structure. (Auth.)

  15. Technical Note: Spot characteristic stability for proton pencil beam scanning

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chin-Cheng, E-mail: chen.ccc@gmail.com; Chang, Chang; Mah, Dennis [ProCure Treatment Center, Somerset, New Jersey 08873 (United States); Moyers, Michael F. [ProCure Treatment Center, Somerset, New Jersey 08873 and Shanghai Proton and Heavy Ion Center, Shanghai 201321 (China); Gao, Mingcheng [CDH Proton Center, Warrenville, Illinois 60555 (United States)

    2016-02-15

    Purpose: The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. Methods: A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0–226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Results: Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. Conclusions: For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter.

  16. Technical Note: Spot characteristic stability for proton pencil beam scanning

    International Nuclear Information System (INIS)

    Chen, Chin-Cheng; Chang, Chang; Mah, Dennis; Moyers, Michael F.; Gao, Mingcheng

    2016-01-01

    Purpose: The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerance for proton pencil beam scanning. Methods: A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0–226.0 MeV). The delivered fluence distribution in air was measured using a phosphor screen based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian shaped spots. Results: Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. Conclusions: For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. A spot size deviation tolerance of <15% can be easily met with this delivery system. Deviations of spot positions were <2 mm at any plane up/down stream 15 cm from the isocenter

  17. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems.

    Science.gov (United States)

    Wang, An; Cao, Yang; Shi, Quan

    2018-01-01

    In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.

  18. Supine craniospinal irradiation in pediatric patients by proton pencil beam scanning.

    Science.gov (United States)

    Farace, Paolo; Bizzocchi, Nicola; Righetto, Roberto; Fellin, Francesco; Fracchiolla, Francesco; Lorentini, Stefano; Widesott, Lamberto; Algranati, Carlo; Rombi, Barbara; Vennarini, Sabina; Amichetti, Maurizio; Schwarz, Marco

    2017-04-01

    Proton therapy is the emerging treatment modality for craniospinal irradiation (CSI) in pediatric patients. Herein, the special methods adopted for CSI by pencil beam scanning (PBS) at the Proton Therapy Center of Trento are comprehensively described. Twelve pediatric patients were treated by proton PBS using two/three isocenters. The special methods refer to: (i) patient positioning in the supine position on immobilization devices crossed by the beams; (ii) planning field junctions via the ancillary-beam technique; (iii) achieving lens sparing by three-beam whole-brain irradiation; (iv) applying a movable snout and a beam-splitting technique to reduce the lateral penumbra. A patient-specific quality assurance (QA) program was performed using a two-dimensional ion chamber array and γ-analysis. Daily kilovoltage alignment was performed. PBS made it possible to obtain optimal target coverage (mean D98%>98%) with reduced dose to organs at risk. Lens sparing was obtained (mean D1∼730 cGyE). Reducing the lateral penumbra decreased the dose to the kidneys, and QA results were better at depths >4 cm (mean γ>95%) than at depths <4 cm. The reported methods allowed proton PBS CSI to be performed effectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A discrete ordinate response matrix method for massively parallel computers

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1991-01-01

    A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S_8 and S_16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry

  20. Disabling Orthostatic Headache after Penetrating Stonemason Pencil Injury to the Sacral Region

    Directory of Open Access Journals (Sweden)

    Carlo Brembilla

    2015-01-01

    Penetrating injuries to the spine, although less common than motor vehicle accidents and falls, are important causes of injury to the spinal cord. They are essentially of two varieties: gunshot or stab wounds. Gunshot injuries to the spine are more commonly described. Stab wounds are usually inflicted by a knife or other sharp objects. Rarer objects causing incidental spinal injuries include glass fragments, wood pieces, chopsticks, nailguns, and injection needles. Only a few cases of penetrating vertebral injuries caused by pencils have been described. The current case concerns a 42-year-old man with an accidental penetrating stonemason pencil injury into the vertebral canal without neurological deficit. After self-removal of the foreign object the patient complained of a disabling orthostatic headache. The early identification and treatment of the intracranial hypotension due to the posttraumatic cerebrospinal fluid (CSF) sacral fistulae were mandatory to avoid further neurological complications. In the current literature, an acute pattern of intracranial hypotension immediately after a penetrating injury of the vertebral column has never been reported.

  1. Convergence Improvement of Response Matrix Method with Large Discontinuity Factors

    International Nuclear Information System (INIS)

    Yamamoto, Akio

    2003-01-01

    In the response matrix method, a numerical divergence problem has been reported when extremely small or large discontinuity factors are utilized in the calculations. In this paper, an alternative response matrix formulation to solve the divergence problem is discussed, and properties of iteration matrices are investigated through eigenvalue analyses. In the conventional response matrix formulation, partial currents between adjacent nodes are assumed to be discontinuous, and outgoing partial currents are converted into incoming partial currents by the discontinuity factor matrix. Namely, the partial currents of the homogeneous system (i.e., homogeneous partial currents) are treated in the conventional response matrix formulation. In this approach, the spectral radius of an iteration matrix for the partial currents may exceed unity when an extremely small or large discontinuity factor is used. Contrary to this, an alternative response matrix formulation using heterogeneous partial currents is discussed in this paper. In the latter approach, partial currents are assumed to be continuous between adjacent nodes, and discontinuity factors are directly considered in the coefficients of a response matrix. From the eigenvalue analysis of the iteration matrix for the one-group, one-dimensional problem, the spectral radius for the heterogeneous partial current formulation does not exceed unity even if an extremely small or large discontinuity factor is used in the calculation; numerical stability of the alternative formulation is superior to the conventional one. The numerical stability of the heterogeneous partial current formulation is also confirmed by the two-dimensional light water reactor core analysis. Since the heterogeneous partial current formulation does not require any approximation, the converged solution exactly reproduces the reference solution when the discontinuity factors are directly derived from the reference calculation
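
    Since the argument turns on whether the spectral radius of the iteration matrix stays below unity, a generic numerical check of that criterion is sketched below; the 2×2 matrices and the exaggerated factor are purely illustrative and are not the paper's response matrix formulation.

        import numpy as np

        def spectral_radius(T):
            # An iteration x_{k+1} = T x_k + c converges for any starting vector
            # exactly when the spectral radius of T is below one.
            return np.max(np.abs(np.linalg.eigvals(T)))

        T_stable = np.array([[0.4, 0.3], [0.2, 0.5]])
        T_unstable = T_stable.copy()
        T_unstable[0, 1] *= 10.0        # an exaggerated factor can push the radius past unity

        for T in (T_stable, T_unstable):
            print(spectral_radius(T), spectral_radius(T) < 1.0)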

  2. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
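
    The forward model is linear, counts = R × activities, so the reconstruction step reduces to inverting (or least-squares solving) the response matrix; the 3×3 response matrix and count values below are hypothetical, standing in for coefficients that the paper computes with the shielding code Microshield.

        import numpy as np

        # Hypothetical 3-location example.  R[i, j] = count rate at survey position i
        # per unit activity at location j (from a shielding code in the paper).
        R = np.array([[5.0, 1.0, 0.2],
                      [1.0, 5.0, 1.0],
                      [0.2, 1.0, 5.0]])
        counts = np.array([120.0, 260.0, 140.0])

        activities = np.linalg.solve(R, counts)            # invert the response matrix
        # With noisy counts or more measurements than locations, a least-squares
        # solve of the same system is the more robust choice.
        activities_ls, *_ = np.linalg.lstsq(R, counts, rcond=None)
        print(activities, activities_ls)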

  3. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed

  4. A Brief Study on the Ignition of the Non-Thermal Atmospheric Pressure Plasma Jet from a Double Dielectric Barrier Configured Plasma Pencil

    International Nuclear Information System (INIS)

    Begum, Asma; Laroussi, Mounir; Pervez, M. R.

    2013-01-01

    To understand the self-sustained propagation of the plasma jet/bullet in air under atmospheric pressure, the ignition of the plasma jet/bullet, the plasma jet/bullet ignition point in the plasma pencil, and the formation time and formation criteria for a dielectric barrier configured plasma pencil were investigated in this study. The results were confirmed by comparing them with the plasma jet ignition process in a plasma pencil without a dielectric barrier. Electrical, optical, and imaging techniques were used to study the formation of the plasma jet from the ignition of the discharge in a double dielectric barrier configured plasma pencil. The results show that the plasma jet forms at the outlet of the plasma pencil as a donut-shaped discharge front because of the electric field lines along the outlet's surface. It is shown that the time required for the formation of the plasma jet changes with the input voltage of the discharge. The input power calculation for the gap discharge and for the whole system shows that 56% of the average input power is used by the first gap discharge. The estimated electron density inside the gap discharge is of the order of 10¹¹ cm⁻³. If helium is used as the feeding gas, a minimum charge of 1.48×10⁻⁸ C is required per pulse in the gap discharge to generate a plasma jet

  5. Validation of nuclear models in Geant4 using the dose distribution of a 177 MeV proton pencil beam

    International Nuclear Information System (INIS)

    Hall, David C; Paganetti, Harald; Makarova, Anastasia; Gottschalk, Bernard

    2016-01-01

    A proton pencil beam is associated with a surrounding low-dose envelope, originating from nuclear interactions. It is important for treatment planning systems to accurately model this envelope when performing dose calculations for pencil beam scanning treatments, and Monte Carlo (MC) codes are commonly used for this purpose. This work aims to validate the nuclear models employed by the Geant4 MC code, by comparing the simulated absolute dose distribution to a recent experiment of a 177 MeV proton pencil beam stopping in water. Striking agreement is observed over five orders of magnitude, with both the shape and normalisation well modelled. The normalisations of two depth dose curves are lower than experiment, though this could be explained by an experimental positioning error. The Geant4 neutron production model is also verified in the distal region. The entrance dose is poorly modelled, suggesting an unaccounted upstream source of low-energy protons. Recommendations are given for a follow-up experiment which could resolve these issues. (note)

  6. Internet Administration of Paper-and-Pencil Questionnaires Used in Couple Research: Assessing Psychometric Equivalence

    Science.gov (United States)

    Brock, Rebecca L.; Barry, Robin A.; Lawrence, Erika; Dey, Jodi; Rolffs, Jaci

    2012-01-01

    This study examined the psychometric equivalence of paper-and-pencil and Internet formats of key questionnaires used in couple research. Self-report questionnaires assessing interpersonal constructs (relationship satisfaction, communication/conflict management, partner support, emotional intimacy) and intrapersonal constructs (individual traits,…

  7. Discrete-ordinate method with matrix exponential for a pseudo-spherical atmosphere: Vector case

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    The paper is devoted to the extension of the matrix-exponential formalism for scalar radiative transfer to the vector case. Using basic results of the theory of matrix-exponential functions, we provide a compact and versatile formulation of vector radiative transfer. As in the scalar case, we operate with the concept of the layer equation incorporating the level values of the Stokes vector. The matrix exponentials which enter the expression of the layer equation are computed by using the matrix eigenvalue method and the Padé approximation. A discussion of the computational efficiency of the proposed method for both an aerosol-loaded atmosphere and a cloudy atmosphere is also provided.
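
    As a small illustration of the two computational routes mentioned above, the sketch below evaluates the exponential of a stand-in layer matrix once by eigenvalue decomposition and once with SciPy's Padé-based routine; the matrix itself is random and is not an actual radiative transfer layer matrix.

```python
import numpy as np
from scipy.linalg import expm   # scaling-and-squaring Pade approximation

rng = np.random.default_rng(1)
A = 0.3 * rng.standard_normal((6, 6))    # random stand-in for a (diagonalizable) layer matrix

# Eigenvalue method: A = V diag(w) V^{-1}  =>  exp(A) = V diag(exp(w)) V^{-1}
w, V = np.linalg.eig(A)
expA_eig = (V * np.exp(w)) @ np.linalg.inv(V)

# Pade-based routine for comparison
expA_pade = expm(A)

print(np.max(np.abs(expA_eig.real - expA_pade)))   # tiny when V is well conditioned
```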

  8. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. "Proper comparison among methods using a confusion matrix", B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; School of Engineering and ICT, University of Tasmania, Australia; ...

  9. A nonlinearity compensation method for a matrix converter drive

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede

    2005-01-01

    converter model using the direction of current. The proposed method does not need any additional hardware or complicated software and it is easy to realize by applying the algorithm to the conventional vector control. The proposed compensation method is applied for high-performance induction motor drives...... using a 3-kW matrix converter system without a speed sensor. Experimental results show the proposed method provides good compensating characteristics....

  10. A Predictive-Control-Based Over-Modulation Method for Conventional Matrix Converters

    DEFF Research Database (Denmark)

    Zhang, Guanguan; Yang, Jian; Sun, Yao

    2018-01-01

    To increase the voltage transfer ratio of the matrix converter and improve the input/output current performance simultaneously, an over-modulation method based on predictive control is proposed in this paper, where the weighting factor is selected by an automatic adjusting mechanism, which is able...... to further enhance the system performance promptly. The method has several advantages: the maximum voltage transfer ratio reaches 0.987 in the experiments; the total harmonic distortion of the input and output currents is reduced; and the losses in the matrix converter are decreased. Moreover, the specific...

  11. The Visual Matrix Method: Imagery and Affect in a Group-Based Research Setting

    Directory of Open Access Journals (Sweden)

    Lynn Froggett

    2015-07-01

    Full Text Available The visual matrix is a method for researching shared experience, stimulated by sensory material relevant to a research question. It is led by imagery, visualization and affect, which in the matrix take precedence over discourse. The method enables the symbolization of imaginative and emotional material, which might not otherwise be articulated, and allows "unthought" dimensions of experience to emerge into consciousness in a participatory setting. We describe the process of the matrix with reference to the study "Public Art and Civic Engagement" (FROGGETT, MANLEY, ROY, PRIOR & DOHERTY, 2014) in which it was developed and tested. Subsequently, examples of its use in other contexts are provided. Both the matrix and post-matrix discussions are described, as is the interpretive process that follows. Theoretical sources are highlighted: its origins in social dreaming; the atemporal, associative nature of the thinking during and after the matrix, which we describe through the Deleuzian idea of the rhizome; and the hermeneutic analysis which draws from object relations theory and the Lorenzerian tradition of scenic understanding. The matrix has been conceptualized as a "scenic rhizome" to account for its distinctive quality and hybrid origins in research practice. The scenic rhizome operates as a "third" between participants and the "objects" of contemplation. We suggest that some of the drawbacks of other group-based methods are avoided in the visual matrix—namely the tendency for inter-personal dynamics to dominate the event. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150369

  12. Dose distributions of a proton beam for eye tumor therapy: Hybrid pencil-beam ray-tracing calculations

    International Nuclear Information System (INIS)

    Rethfeldt, Ch.; Fuchs, H.; Gardey, K.-U.

    2006-01-01

    For the case of eye tumor therapy with protons, improvements are introduced compared to the standard dose calculation, which assumes straight-line optics and constant density for the eye and its surroundings. The progress consists of (i) taking account of the lateral scattering of the protons in tissue by folding the entrance fluence distribution with the pencil beam distribution widening with growing depth in the tissue, and (ii) rescaling the spread-out Bragg peak dose distribution in water with the radiological path length calculated voxel by voxel on ray traces through a realistic density matrix for the treatment geometry, yielding a trajectory dependence of the geometrical range. Distributions calculated for some specific situations are compared to measurements and/or standard calculations, and differences to the latter are discussed with respect to the requirements of therapy planning. The most pronounced changes appear for wedges placed in front of the eye, causing additional widening of the lateral falloff. The more accurate prediction of the dose dependence at the field borders is of interest with respect to side effects in the risk organs of the eye.

  13. Analytical techniques for instrument design - Matrix methods

    International Nuclear Information System (INIS)

    Robinson, R.A.

    1997-01-01

    The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question

  14. Sensitive Adsorptive Voltammetric Method for Determination of Bisphenol A by Gold Nanoparticle/Polyvinylpyrrolidone-Modified Pencil Graphite Electrode

    Directory of Open Access Journals (Sweden)

    Yesim Tugce Yaman

    2016-05-01

    Full Text Available A novel electrochemical sensor, a gold nanoparticle (AuNP)/polyvinylpyrrolidone (PVP)-modified pencil graphite electrode (PGE), was developed for the ultrasensitive determination of Bisphenol A (BPA). The gold nanoparticles were electrodeposited by constant-potential electrolysis, and PVP was attached by passive adsorption onto the electrode surface. The electrode surfaces were characterized by electrochemical impedance spectroscopy (EIS) and scanning electron microscopy (SEM). The parameters affecting the experimental conditions were investigated and optimized. The AuNP/PVP/PGE sensor provided high sensitivity and selectivity for BPA recognition using square wave adsorptive stripping voltammetry (SWAdSV). Under optimized conditions, the detection limit was found to be 1.0 nM. The new sensor system offered the advantages of simple fabrication, which aids rapid replication, low cost, fast response, high sensitivity and low background current for BPA. It was successfully tested for the detection of BPA in bottled drinking water with high reliability.

  15. Comparison of Experimental Methods for Estimating Matrix Diffusion Coefficients for Contaminant Transport Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Telfeyan, Katherine Christina [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ware, Stuart Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reimus, Paul William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-06

    Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.

  16. Novel image analysis methods for quantification of in situ 3-D tendon cell and matrix strain.

    Science.gov (United States)

    Fung, Ashley K; Paredes, J J; Andarawis-Puri, Nelly

    2018-01-23

    Macroscopic tendon loads modulate the cellular microenvironment leading to biological outcomes such as degeneration or repair. Previous studies have shown that damage accumulation and the phases of tendon healing are marked by significant changes in the extracellular matrix, but it remains unknown how mechanical forces of the extracellular matrix are translated to mechanotransduction pathways that ultimately drive the biological response. Our overarching hypothesis is that the unique relationship between extracellular matrix strain and cell deformation will dictate biological outcomes, prompting the need for quantitative methods to characterize the local strain environment. While 2-D methods have successfully calculated matrix strain and cell deformation, 3-D methods are necessary to capture the increased complexity that can arise due to high levels of anisotropy and out-of-plane motion, particularly in the disorganized, highly cellular, injured state. In this study, we validated the use of digital volume correlation methods to quantify 3-D matrix strain using images of naïve tendon cells, the collagen fiber matrix, and injured tendon cells. Additionally, naïve tendon cell images were used to develop novel methods for 3-D cell deformation and 3-D cell-matrix strain, which is defined as a quantitative measure of the relationship between matrix strain and cell deformation. The results support that these methods can be used to detect strains with high accuracy and can be further extended to an in vivo setting for observing temporal changes in cell and matrix mechanics during degeneration and healing. Copyright © 2017. Published by Elsevier Ltd.

  17. Ultra-light and flexible pencil-trace anode for high performance potassium-ion and lithium-ion batteries

    Directory of Open Access Journals (Sweden)

    Zhixin Tai

    2017-07-01

    Full Text Available Engineering design of battery configurations and new battery system development are alternative approaches to achieve high performance batteries. A novel flexible and ultra-light graphite anode is fabricated by simple friction drawing on filter paper with a commercial 8B pencil. Compared with the traditional anode using copper foil as current collector, this innovative current-collector-free design presents capacity improvement of over 200% by reducing the inert weight of the electrode. The as-prepared pencil-trace electrode exhibits excellent rate performance in potassium-ion batteries (KIBs), significantly better than in lithium-ion batteries (LIBs), with capacity retention of 66% for the KIB vs. 28% for the LIB from 0.1 to 0.5 A g−1. It also shows a high reversible capacity of ∼230 mAh g−1 at 0.2 A g−1, 75% capacity retention over 350 cycles at 0.4 A g−1 and the highest rate performance (based on the total electrode weight) among graphite electrodes for K+ storage reported so far. Keywords: Current-collector-free, Flexible pencil-trace electrode, Potassium-ion battery, Lithium-ion battery, Layer-by-layer interconnected architecture

  18. Paper and pencil vs online self-administered food frequency questionnaire (FFQ) applied to university population: a pilot study.

    Science.gov (United States)

    González Carrascosa, R; García Segovia, P; Martínez Monzó, J

    2011-01-01

    To test the reliability of dietary intake data measured with an online food frequency questionnaire (FFQ) applied to a university population by comparing the results with those from a paper-and-pencil version. A total of 50 students were recruited from the second-year Food Technology course at the Universitat Politècnica de València (Comunidad Valenciana, Spain) in the academic year 2008-2009. The students were between the ages of 20 and 32. The participants completed both presentation modes of the FFQ (paper and pencil, and online) in a cross-over study with a time interval of 3 weeks. To study the effect of ordering of the questionnaires, participants were randomly assigned to group A (paper-and-pencil FFQ first) and group B (online FFQ first). Both self-administered semi-quantitative presentations of the FFQ included 84 food items divided into six groups (dairy products; eggs, meat and fish; vegetables, legumes and fruits; bread, cereals and similar; oils, fats and sweets; beverages and pre-cooked). Participants were asked how frequently and how much of each food item they had consumed in the previous year. The response rate was 78% (39 students, 23% men and 77% women). For the total sample, the median dietary intakes were higher for the paper-and-pencil FFQ than for the online version for energy (2,077 vs. 1,635 kcal/day), proteins (96 vs. 88 g/day), carbohydrates (272 vs. 211 g/day), and fat (70 vs. 58 g/day), respectively. These differences were statistically significant. However, there were no significant differences between the two presentations when consumption by food group was calculated, except for the "beverages and pre-cooked" group. The pilot testing showed that this online FFQ is a useful tool for estimating the intake of food groups in this university population. On the other hand, the differences found in the absolute quantities of energy and nutrient intakes were not clear. These differences could be due to the problems that the...

  19. Another method for a global fit of the Cabibbo-Kobayashi-Maskawa matrix

    International Nuclear Information System (INIS)

    Dita, Petre

    2005-01-01

    Recently we proposed a novel method for doing global fits on the entries of the Cabibbo-Kobayashi-Maskawa matrix. The new ingredients were a clear relationship between the entries of the CKM matrix and the experimental data, as well as the use of the necessary and sufficient condition the data have to satisfy in order for a unitary matrix compatible with them to exist. This condition reads -1 ≤ cos δ ≤ 1, where δ is the phase that accounts for CP violation. Numerical results are provided for the CKM matrix entries, the mixing angles between generations and all the angles of the standard unitarity triangle. (author)

  20. The Dirac operator on a finite domain and the R-matrix method

    International Nuclear Information System (INIS)

    Grant, I P

    2008-01-01

    Relativistic effects in electron-atom collisions and photo-excitation and -ionization processes increase in importance as the atomic number of the target atom grows and spin-dependent effects increase. A relativistic treatment in which electron motion is described using the Dirac Hamiltonian is then desirable. A version of the popular nonrelativistic R-matrix package incorporating terms from the Breit-Pauli Hamiltonian has been used for modelling such processes for some years. The fully relativistic Dirac R-matrix method has been less popular, but is becoming increasingly relevant for applications to heavy ion targets, where the need to use relativistic wavefunctions is more obvious. The Dirac R-matrix method has been controversial ever since it was first proposed by Goertzel (1948 Phys. Rev. 73 1463-6), and it is therefore important to confirm that recent elaborate and costly applications of the method, such as Badnell et al (2004 J. Phys. B: At. Mol. Phys. 37 4589) and Ballance and Griffin (2007 J. Phys. B: At. Mol. Opt. Phys. 40 247-58), rest on secure foundations. The first part of this paper analyses the structure of the two-point boundary-value problem for the Dirac operator on a finite domain, from which we construct a unified derivation of the Schroedinger (nonrelativistic) and Dirac (relativistic) R-matrix methods. Suggestions that the usual relativistic theory is not well founded are shown to be without foundation.

  1. A transfer-matrix method for spatially modulated structures

    International Nuclear Information System (INIS)

    Surda, A.

    1991-03-01

    A cluster transfer-matrix method convenient for calculation of spatially modulated structures of a wide class of lattice-gas models is developed. The method formulates the problem of calculation of the partition function in terms of non-linear mapping of effective multi-site fields. It is applied to a lattice-gas model qualitatively describing the system of oxygen atoms in the basal planes of high-temperature superconductors. The properties of an incommensurate structure occurring at intermediate temperatures are discussed in detail. (author). 21 refs, 15 figs

  2. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
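
    The iterative water-filling algorithms mentioned above build on the classical water-filling allocation. The sketch below shows only that textbook single-user special case (power allocation over parallel eigenmodes of a hypothetical channel), not the large-system covariance optimization of the paper; the gains and power budget are invented for illustration.

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling over parallel channels.

    gains: channel gains g_k; the rate of channel k is log2(1 + g_k * p_k).
    Returns powers p_k maximizing the sum rate subject to sum(p_k) = total_power.
    """
    gains = np.asarray(gains, dtype=float)
    # Bisection on the water level mu, with p_k = max(mu - 1/g_k, 0).
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

# Example: eigenmodes of a hypothetical MIMO channel
g = np.array([2.0, 1.0, 0.25, 0.05])
p = water_filling(g, total_power=4.0)
print(p, p.sum(), np.log2(1.0 + g * p).sum())
```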

  3. Solution of the Multigroup-Diffusion equation by the response matrix method

    International Nuclear Information System (INIS)

    Oliveira, C.R.E.

    1980-10-01

    A preliminary analysis of the response matrix method is made, considering its application to the solution of the multigroup diffusion equations. The one-dimensional formulation is presented and used to test some flux expansions, seeking the application of the method to the two-dimensional problem. This formulation also solves the equations that arise from the integro-differential synthesis algorithm. The slow convergence of the power method, used to solve the eigenvalue problem, and its acceleration by means of the Chebyshev polynomial method, are also studied. An algorithm for the estimation of the dominance ratio is presented, based on the residues of two successive iteration vectors. This ratio, which is not known a priori, is fundamental for the efficiency of the method. Some numerical problems are solved, testing the 1D formulation of the response matrix method, its application to the synthesis algorithm and also, at the same time, the algorithm to accelerate the source problem. (Author) [pt

  4. A pedagogical derivation of the matrix element method in particle physics data analysis

    Science.gov (United States)

    Sumowidagdo, Suharyo

    2018-03-01

    The matrix element method provides a direct connection between the underlying theory of particle physics processes and detector-level physical observables. I present a pedagogically oriented derivation of the matrix element method, drawing from elementary concepts in probability theory, statistics, and the process of experimental measurements. The level of treatment should be suitable for beginning research students in phenomenology and experimental high energy physics.
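
    The core object of the matrix element method is a per-event likelihood obtained by integrating a theory-level density against a detector transfer function. The toy sketch below is not taken from the paper: it replaces the matrix element by a one-dimensional exponential density with slope parameter theta, assumes a Gaussian transfer function of width sigma, estimates the integral by Monte Carlo, and scans the log-likelihood over theta.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.3                       # assumed detector resolution (hypothetical)

def likelihood(y, theta, n_mc=2000):
    """MC estimate of L(y|theta) = integral of p(x|theta) * W(y|x) dx."""
    x = rng.exponential(scale=theta, size=n_mc)                  # x ~ p(x|theta)
    w = np.exp(-0.5 * ((y - x) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return w.mean()

# Pseudo-data: parton-level exponential with theta = 1.5, smeared by the detector.
theta_true = 1.5
data = rng.exponential(theta_true, size=300) + rng.normal(0.0, sigma, size=300)

thetas = np.linspace(0.8, 2.5, 30)
logL = [sum(np.log(likelihood(y, t) + 1e-300) for y in data) for t in thetas]
print("theta_hat ~", thetas[int(np.argmax(logL))])
```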

  5. Discrete-ordinate method with matrix exponential for a pseudo-spherical atmosphere: Scalar case

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    We present a discrete-ordinate algorithm using the matrix-exponential solution for pseudo-spherical radiative transfer. Following the finite-element technique, we introduce the concept of the layer equation and formulate the discrete radiative transfer problem in terms of the level values of the radiance. The layer quantities are expressed by means of matrix exponentials, which are computed by using the matrix eigenvalue method and the Padé approximation. These solution methods lead to a compact and versatile formulation of the radiative transfer. Simulated nadir and limb radiances for an aerosol-loaded atmosphere and a cloudy atmosphere are presented along with a discussion of the model intercomparisons and timings.

  6. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    ... process. A heuristic formula for the tail behaviour of a single server queue fed by a superposition of renewal processes has been evaluated. The evaluation was performed by applying Matrix Analytic methods. The heuristic formula has applications in the Call Admission Control (CAC) procedure of the future BISDN network. The heuristic formula did not seem to yield substantially better results than already available approximations. Finally, some results for the finite capacity BMAP/G/1 queue have been obtained. The steady state probability vector of the embedded chain is found by a direct method where......

  7. Graphene-Reinforced Aluminum Matrix Composites: A Review of Synthesis Methods and Properties

    Science.gov (United States)

    Chen, Fei; Gupta, Nikhil; Behera, Rakesh K.; Rohatgi, Pradeep K.

    2018-03-01

    Graphene-reinforced aluminum (Gr-Al) matrix nanocomposites (NCs) have attracted strong interest from both research and industry in high-performance weight-sensitive applications. Due to the vastly different bonding characteristics of the Al matrix (metallic) and graphene (in-plane covalent + inter-plane van der Waals), the graphene phase has a general tendency to agglomerate and phase separate in the metal matrix, which is detrimental for the mechanical and chemical properties of the composite. Thus, synthesis of Gr-Al NCs is extremely challenging. This review summarizes the different methods available to synthesize Gr-Al NCs and the resulting properties achieved in these NCs. Understanding the effect of processing parameters on the realized properties opens up the possibility of tailoring the synthesis methods to achieve the desired properties for a given application.

  9. Energy-dependent applications of the transfer matrix method

    International Nuclear Information System (INIS)

    Oeztunali, O.I.; Aronson, R.

    1975-01-01

    The transfer matrix method is applied to energy-dependent neutron transport problems for multiplying and nonmultiplying media in one-dimensional plane geometry. Experimental cross sections are used for total, elastic, and inelastic scattering and fission. Numerical solutions are presented for the problem of a unit point isotropic source in an infinite medium of water and for the problem of the critical 235U slab with finite water reflectors. No iterations were necessary in this method. Numerical results obtained are consistent with physical considerations and compare favorably with the moments method results for the problem of the unit point isotropic source in an infinite water medium. (U.S.)

  10. Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    2017-01-01

    In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri-factorisation, and compare non-probabilistic inference, Gibbs sampling, variational Bayesian inference, and a maximum-a-posteriori approach. The variational approach is new for the Bayesian nonnegative models. We compare their convergence, and robustness to noise and sparsity of the data, on both synthetic and real...
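
    For readers unfamiliar with the underlying factorisation, the sketch below implements only the classical non-probabilistic baseline mentioned above (Lee-Seung multiplicative updates for nonnegative matrix factorisation, run on synthetic data); the Bayesian variants compared in the paper place priors and a noise model on top of this decomposition and are not reproduced here.

```python
import numpy as np

def nmf(R, k, n_iter=500, eps=1e-9, seed=0):
    """Nonnegative matrix factorisation R ~ U @ V via multiplicative updates (squared error)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.random((n, k))
    V = rng.random((k, m))
    for _ in range(n_iter):
        U *= (R @ V.T) / (U @ V @ V.T + eps)     # update factor U
        V *= (U.T @ R) / (U.T @ U @ V + eps)     # update factor V
    return U, V

rng = np.random.default_rng(1)
R_true = rng.random((20, 5)) @ rng.random((5, 30))    # synthetic low-rank nonnegative data
R_obs = R_true + 0.01 * rng.random((20, 30))          # add a little noise
U, V = nmf(R_obs, k=5)
print(np.linalg.norm(R_obs - U @ V) / np.linalg.norm(R_obs))   # relative reconstruction error
```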

  11. Multiple resonance compensation for betatron coupling and its equivalence with matrix method

    CERN Document Server

    De Ninno, G

    1999-01-01

    Analyses of betatron coupling can be broadly divided into two categories: the matrix approach that decouples the single-turn matrix to reveal the normal modes and the Hamiltonian approach that evaluates the coupling in terms of the action of resonances in perturbation theory. The latter is often regarded as being less exact but good for physical insight. The common opinion is that the correction of the two closest sum and difference resonances to the working point is sufficient to reduce the off-axis terms in the 4×4 single-turn matrix, but this is only partially true. The reason for this is explained, and a method is developed that sums to infinity all coupling resonances and, in this way, obtains results equivalent to the matrix approach. The two approaches are discussed with reference to the dynamic aperture. Finally, the extension of the summation method to resonances of all orders is outlined and the relative importance of a single resonance compared to all resonances of a given order is analytically desc...

  12. Uniqueness theorems for differential pencils with eigenparameter boundary conditions and transmission conditions

    Science.gov (United States)

    Yang, Chuan-Fu

    Inverse spectral problems are considered for differential pencils with boundary conditions depending polynomially on the spectral parameter and with a finite number of transmission conditions. We give formulations of the associated inverse problems such as Titchmarsh-Weyl theorem, Hochstadt-Lieberman theorem and Mochizuki-Trooshin theorem, and prove corresponding uniqueness theorems. The obtained results are generalizations of the similar results for the classical Sturm-Liouville operator on a finite interval.

  13. A Literature Study of Matrix Element Influenced to the Result of Analysis Using Absorption Atomic Spectroscopy Method (AAS)

    International Nuclear Information System (INIS)

    Tyas-Djuhariningrum

    2004-01-01

    The results of gold sample analysis can deviate by more than 10% from the true value because of matrix elements, so the behaviour of the matrix elements needs to be studied in order to reduce this deviation. In rock samples, matrix elements can cause self-quenching, self-absorption and ionization, which lead to errors in the analysis results. In rock geochemical processes, elements of the same group of the periodic system tend to occur together because of their similar characteristics. In atomic absorption spectroscopy, these associated elements can absorb the primary radiation at similar wavelengths, which can cause deviations in the interpretation of the results. The aim of this study is to assess the influence of matrix elements in rock samples and to apply standard methods for reducing the deviation. Quantitatively, the absorbed intensity of the primary light is proportional to the concentration of atoms in the sample, and the relationship between photon intensity and concentration in parts per million (ppm) is linear. Three standard methods are used to eliminate the influence of matrix elements: the external standard method, the internal standard method, and the standard addition method. The external standard method is used for all matrix elements, the internal standard method for matrix elements with similar characteristics, and the standard addition method for matrix elements in Au and Pt samples. The three standard methods achieve accuracies of about 95-97%. (author)
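
    The standard addition method mentioned above can be summarised in a few lines: known spikes of analyte are added to aliquots of the sample, absorbance is measured, and the original concentration is read off the x-intercept of the linear fit, which compensates for matrix effects on the sensitivity. The numbers below are invented for illustration.

```python
import numpy as np

# Standard-addition sketch with hypothetical readings (dilution effects ignored).
added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])                   # spiked concentration (ppm)
absorbance = np.array([0.120, 0.205, 0.288, 0.372, 0.455])    # made-up absorbance readings

slope, intercept = np.polyfit(added, absorbance, 1)
c_sample = intercept / slope        # magnitude of the x-intercept = original concentration
print(f"estimated sample concentration: {c_sample:.2f} ppm")
```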

  14. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-Hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices), when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area with those of other commonly used techniques. (Author)
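
    As a generic illustration of why the block structure pays off, the sketch below solves a block-tridiagonal system by block forward elimination and back substitution (a block Thomas algorithm) and checks it against a dense solve; it is not the partition scheme of the paper and ignores the quasi-Hermitian structure that the paper exploits.

```python
import numpy as np

def block_thomas_solve(A, B, C, d):
    """Solve a block-tridiagonal system by block forward elimination / back substitution.

    A[i]: diagonal blocks (i = 0..n-1), B[i]: super-diagonal blocks (i = 0..n-2),
    C[i]: sub-diagonal blocks coupling block i+1 to block i, d: list of RHS blocks.
    """
    n = len(A)
    Ap = [a.copy() for a in A]
    dp = [x.copy() for x in d]
    for i in range(1, n):
        L = C[i - 1] @ np.linalg.inv(Ap[i - 1])   # elimination factor for row i
        Ap[i] = Ap[i] - L @ B[i - 1]
        dp[i] = dp[i] - L @ dp[i - 1]
    x = [None] * n
    x[n - 1] = np.linalg.solve(Ap[n - 1], dp[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Ap[i], dp[i] - B[i] @ x[i + 1])
    return np.concatenate(x)

# Small random test: assemble the full matrix and compare with a dense solve.
rng = np.random.default_rng(3)
n, b = 4, 3
A = [rng.standard_normal((b, b)) + 4 * np.eye(b) for _ in range(n)]   # well-conditioned blocks
B = [rng.standard_normal((b, b)) for _ in range(n - 1)]
C = [rng.standard_normal((b, b)) for _ in range(n - 1)]
d = [rng.standard_normal(b) for _ in range(n)]

full = np.zeros((n * b, n * b))
for i in range(n):
    full[i*b:(i+1)*b, i*b:(i+1)*b] = A[i]
    if i < n - 1:
        full[i*b:(i+1)*b, (i+1)*b:(i+2)*b] = B[i]
        full[(i+1)*b:(i+2)*b, i*b:(i+1)*b] = C[i]
print(np.allclose(block_thomas_solve(A, B, C, d), np.linalg.solve(full, np.concatenate(d))))
```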

  15. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of the modified formulation of the matrix-response method, with the aim of performing reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where the heterogeneity is predominant and to problems of evolution on coarse meshes where the burnup varies within one and the same coarse mesh, making the cross section vary spatially with the evolution. (E.G.) [pt]

  16. A method to select aperture margin in collimated spot scanning proton therapy

    International Nuclear Information System (INIS)

    Wang, Dongxu; Smith, Blake R; Gelover, Edgar; Flynn, Ryan T; Hyer, Daniel E

    2015-01-01

    The use of a collimator or aperture may sharpen the lateral dose gradient for spot scanning proton therapy. However, to date, there has not been a standard method to determine the aperture margin for a single field in collimated spot scanning proton therapy. This study describes a theoretical framework to select the optimal aperture margin for a single field, and also presents the spot spacing limit required such that the optimal aperture margin exists. Since, for a proton pencil beam partially intercepted by a collimator, the maximum point dose (spot center) shifts away from the original pencil beam central axis, we propose that the optimal margin should be equal to the maximum pencil beam center shift under the condition that spot spacing is small with respect to the maximum pencil beam center shift, which can be numerically determined based on beam modeling data. A test case is presented which demonstrates agreement with the prediction made based on the proposed methods. When apertures are applied in a commercial treatment planning system, this method may be implemented. (note)
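
    The shift referred to above can be illustrated with a toy one-dimensional model, which is not the authors' beam model: a Gaussian spot is partially blocked by an aperture edge, the transmitted fluence is blurred by a downstream scatter kernel, and the displacement of the dose maximum from the spot axis is recorded as the edge is swept from far away toward the axis; the beam and scatter sigmas below are hypothetical.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 1201)            # lateral position (mm)
sigma_spot, sigma_scatter = 4.0, 2.0          # hypothetical spot and scatter sigmas (mm)

beam = np.exp(-0.5 * (x / sigma_spot) ** 2)   # lateral fluence of the unblocked spot
kernel = np.exp(-0.5 * (x / sigma_scatter) ** 2)
kernel /= kernel.sum()                        # normalised downstream scatter kernel

shifts = []
for edge in np.linspace(12.0, 0.0, 49):       # aperture edge distance from the spot axis
    transmitted = beam * (x < edge)           # aperture blocks x >= edge
    profile = np.convolve(transmitted, kernel, mode="same")
    shifts.append(abs(x[np.argmax(profile)])) # displacement of the dose maximum from the axis
print(f"maximum spot-centre shift in this toy model ~ {max(shifts):.2f} mm")
```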

  17. NLTE steady-state response matrix method.

    Science.gov (United States)

    Faussurier, G.; More, R. M.

    2000-05-01

    A connection between atomic kinetics and non-equilibrium thermodynamics has been recently established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In the paper, this connection is extended to both cases of the average-atom model and the Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix still remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.

  18. Agar/gelatin bilayer gel matrix fabricated by simple thermo-responsive sol-gel transition method.

    Science.gov (United States)

    Wang, Yifeng; Dong, Meng; Guo, Mengmeng; Wang, Xia; Zhou, Jing; Lei, Jian; Guo, Chuanhang; Qin, Chaoran

    2017-08-01

    We present a simple and environmentally-friendly method to generate an agar/gelatin bilayer gel matrix for further biomedical applications. In this method, the thermally responsive sol-gel transitions of agar and gelatin combined with the different transition temperatures are exquisitely employed to fabricate the agar/gelatin bilayer gel matrix and achieve separate loading for various materials (e.g., drugs, fluorescent materials, and nanoparticles). Importantly, the resulting bilayer gel matrix provides two different biopolymer environments (a polysaccharide environment vs a protein environment) with a well-defined border, which allows the loaded materials in different layers to retain their original properties (e.g., magnetism and fluorescence) and reduce mutual interference. In addition, the loaded materials in the bilayer gel matrix exhibit an interesting release behavior under the control of thermal stimuli. Consequently, the resulting agar/gelatin bilayer gel matrix is a promising candidate for biomedical applications in drug delivery, controlled release, fluorescence labeling, and bio-imaging. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A Normalized Transfer Matrix Method for the Free Vibration of Stepped Beams: Comparison with Experimental and FE(3D) Methods

    Directory of Open Access Journals (Sweden)

    Tamer Ahmed El-Sayed

    2017-01-01

    Full Text Available The exact solution for a multistepped Timoshenko beam is derived using a set of fundamental solutions. This set of solutions is derived to normalize the solution at the origin of the coordinates. The start, end, and intermediate boundary conditions involve concentrated masses and linear and rotational elastic supports. The beam start, end, and intermediate equations are assembled using the present normalized transfer matrix (NTM). The advantage of this method is that it is quicker than the standard method because the size of the complete system coefficient matrix is 4 × 4. In addition, during the assembly of this matrix, there are no inverse matrix steps required. The validity of this method is tested by comparing the results of the current method with the literature. Then the validity of the exact stepped analysis is checked using experimental and FE(3D) methods. The experimental results for stepped beams with a single step and two steps, for sixteen different test samples, are in excellent agreement with those of the three-dimensional finite element FE(3D) method. The comparison between the NTM method and the finite element method results shows that the modal percentage deviation is increased when a beam step location coincides with a peak point in the mode shape. Meanwhile, the deviation decreases when a beam step location coincides with a straight portion in the mode shape.

  20. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.

    1996-10-01

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of...

  1. An interlaboratory comparison of methods for measuring rock matrix porosity

    International Nuclear Information System (INIS)

    Rasilainen, K.; Hellmuth, K.H.; Kivekaes, L.; Ruskeeniemi, T.; Melamed, A.; Siitari-Kauppi, M.

    1996-09-01

    An interlaboratory comparison study was conducted for the available Finnish methods of rock matrix porosity measurements. The aim was first to compare different experimental methods for future applications, and second to obtain quality assured data for the needs of matrix diffusion modelling. Three different versions of water immersion techniques, a tracer elution method, a helium gas through-diffusion method, and a C-14-PMMA method were tested. All methods selected for this study were established experimental tools in the respective laboratories, and they had already been individually tested. Rock samples for the study were obtained from a homogeneous granitic drill core section from the natural analogue site at Palmottu. The drill core section was cut into slabs that were expected to be practically identical. The subsamples were then circulated between the different laboratories using a round robin approach. The circulation was possible because all methods were non-destructive, except the C-14-PMMA method, which was always the last method to be applied. The possible effect of drying temperature on the measured porosity was also preliminarily tested. These measurements were done in the order of increasing drying temperature. Based on the study, it can be concluded that all methods are comparable in their accuracy. The selection of methods for future applications can therefore be based on practical considerations. Drying temperature seemed to have very little effect on the measured porosity, but a more detailed study is needed for definite conclusions. (author) (4 refs.)

  2. Efficient Tridiagonal Preconditioner for the Matrix-Free Truncated Newton Method

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2014-01-01

    Vol. 235, 25 May (2014), pp. 394-407. ISSN 0096-3003. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: unconstrained optimization * large scale optimization * matrix-free truncated Newton method * preconditioned conjugate gradient method * preconditioners obtained by the directional differentiation * numerical algorithms. Subject RIV: BA - General Mathematics. Impact factor: 1.551, year: 2014

  3. Method of computer algebraic calculation of the matrix elements in the second quantization language

    International Nuclear Information System (INIS)

    Gotoh, Masashi; Mori, Kazuhide; Itoh, Reikichi

    1995-01-01

    An automated method, based on the algebraic programming language REDUCE3, for evaluating matrix elements expressed in second quantization language is presented and then applied to the case of the matrix elements in the TDHF theory. This program works in a very straightforward way by commuting the electron creation and annihilation operators (a† and a) until these operators have completely vanished from the expression of the matrix element under the appropriate elimination conditions. An improved method using singlet generators of unitary transformations in place of the electron creation and annihilation operators is also presented. This improvement reduces the time and memory required for the calculation. These methods will make programming in the field of quantum chemistry much easier. 11 refs., 1 tab

  4. A massively parallel discrete ordinates response matrix method for neutron transport

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1992-01-01

    In this paper a discrete ordinates response matrix method is formulated with anisotropic scattering for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices that result from the diamond-differenced equations are utilized in a factored form that minimizes memory requirements and significantly reduces the number of arithmetic operations required per node. The red-black solution algorithm utilizes massive parallelism by assigning each spatial node to one or more processors. The algorithm is accelerated by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red-black iterations. The method is implemented on a 16K Connection Machine-2, and S8 and S16 solutions are obtained for fixed-source benchmark problems in x-y geometry.

  5. Writing forces associated with four pencil grasp patterns in grade 4 children.

    Science.gov (United States)

    Schwellnus, Heidi; Carnahan, Heather; Kushki, Azadeh; Polatajko, Helene; Missiuna, Cheryl; Chau, Tom

    2013-01-01

    OBJECTIVE. We investigated differences in handwriting kinetics, speed, and legibility among four pencil grasps after a 10-min copy task. METHOD. Seventy-four Grade 4 students completed a handwriting assessment before and after a copy task. Grip and axial forces were measured with an instrumented stylus and force-sensitive tablet. We used multiple linear regression to analyze the relationship between grasp pattern and grip and axial forces. RESULTS. We found no kinetic differences among grasps, whether considered individually or grouped by the number of fingers on the barrel. However, when grasps were grouped according to the thumb position, the adducted grasps exhibited higher mean grip and axial forces. CONCLUSION. Grip forces were generally similar across the different grasps. Kinetic differences resulting from thumb position seemed to have no bearing on speed and legibility. Interventions for handwriting difficulties should focus more on speed and letter formation than on grasp pattern. Copyright © 2013 by the American Occupational Therapy Association, Inc.

  6. On the use of transition matrix methods with extended ensembles.

    Science.gov (United States)

    Escobedo, Fernando A; Abreu, Charlles R A

    2006-03-14

    Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate λ were formulated so that they employ (i) "variable" sampling window schemes (that include the "successive umbrella sampling" method) to comprehensively explore the λ domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy η landscape (or "importance" weights) associated with λ. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating η(λ). The validity and performance of the different NBS schemes were then assessed using as λ coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of η is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader window of sampling in the variable window methods. Finally, it is shown how an "elastic" window of sampling can be used to effectively enact (nonuniform) preferential sampling over the λ domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an η surface over a two-dimensional domain.
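
    The transition-matrix estimate of the weights can be illustrated with a toy calculation, which is a simplified version rather than the authors' scheme: transition counts between adjacent macrostates are row-normalised into transition probabilities, and detailed balance between neighbours then yields the free-energy profile η(λ) up to an additive constant.

```python
import numpy as np

def eta_from_counts(C):
    """Free-energy profile eta = -ln P from a matrix of transition counts between macrostates."""
    T = C / C.sum(axis=1, keepdims=True)              # estimated transition probabilities
    n = C.shape[0]
    lnP = np.zeros(n)
    for i in range(n - 1):
        # detailed balance between neighbours: P(i) T(i -> i+1) = P(i+1) T(i+1 -> i)
        lnP[i + 1] = lnP[i] + np.log(T[i, i + 1] / T[i + 1, i])
    return -lnP                                       # defined up to an additive constant

# Hypothetical count matrix collected over 5 adjacent macrostates
C = np.array([[900., 100.,   0.,   0.,   0.],
              [300., 600., 100.,   0.,   0.],
              [  0., 400., 500., 100.,   0.],
              [  0.,   0., 500., 450.,  50.],
              [  0.,   0.,   0., 600., 400.]])
print(np.round(eta_from_counts(C), 2))
```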

  7. Optimization of GEANT4 settings for Proton Pencil Beam Scanning simulations using GATE

    Energy Technology Data Exchange (ETDEWEB)

    Grevillot, Loic, E-mail: loic.grevillot@gmail.co [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); IBA, B-1348 Louvain-la-Neuve (Belgium); Frisson, Thibault [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Zahra, Nabil [Universite de Lyon, F-69622 Lyon (France); IPNL, CNRS UMR 5822, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Bertrand, Damien; Stichelbaut, Frederic [IBA, B-1348 Louvain-la-Neuve (Belgium); Freud, Nicolas [Universite de Lyon, F-69622 Lyon (France); CNDRI, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Sarrut, David [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France)

    2010-10-15

    This study reports the investigation of different GEANT4 settings for proton therapy applications in the context of Treatment Planning System comparisons. The GEANT4.9.2 release was used through the GATE platform. We focused on the Pencil Beam Scanning delivery technique, which allows for intensity modulated proton therapy applications. The most relevant options and parameters (range cut, step size, database binning) for the simulation that influence the dose deposition were investigated, in order to determine a robust, accurate and efficient simulation environment. In this perspective, simulations of depth-dose profiles and transverse profiles at different depths and energies between 100 and 230 MeV have been assessed against reference measurements in water and PMMA. These measurements were performed in Essen, Germany, with the IBA dedicated Pencil Beam Scanning system, using Bragg-peak chambers and radiochromic films. GEANT4 simulations were also compared to the PHITS.2.14 and MCNPX.2.5.0 Monte Carlo codes. Depth-dose simulations reached 0.3 mm range accuracy compared to NIST CSDA ranges, with a dose agreement of about 1% over a set of five different energies. The transverse profiles simulated using the different Monte Carlo codes showed discrepancies, with up to 15% difference in beam widening between GEANT4 and MCNPX in water. An 8% difference between the GEANT4 multiple scattering and single scattering algorithms was observed. The simulations were unable to reproduce the measured transverse dose spreading with depth in PMMA, corroborating the fact that GEANT4 underestimates the lateral dose spreading. GATE was found to be a very convenient simulation environment to perform this study. A reference physics-list and an optimized parameters-list have been proposed. Satisfactory agreement against depth-dose profile measurements was obtained. The simulation of transverse profiles using different Monte Carlo codes showed significant deviations. This point...

  8. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

    Full Text Available Purpose. The development of a wide construction market and a desire to design innovative architectural building constructions have resulted in the need to create complex numerical models of objects of increasingly high computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve the calculation time (reduce the complexity) by a few orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the calculation of beam deflection using the finite element method (FEM). Based on a literature analysis, common methods for solving sets of equations were identified. Of these, the Gaussian elimination, LU and Cholesky decomposition methods have been implemented to determine the effect of the matrix inversion algorithm used for solving the set of equations on the number of computational operations performed. In addition, each of the implemented methods has been further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations rely on certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection has been calculated. Originality. The main achievement of this work is that it shows the impact of the chosen methodology on the complexity of solving the problem (or equivalently, the time needed to obtain results). Practical value. The difference between the best (the least complex) and the worst (the most complex) methods is of the order of a few orders of magnitude. This result shows that choosing the wrong methodology may significantly increase the time needed to perform the calculation.
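
    The practical point, that the choice of solution method matters more than it might seem, can be reproduced with a generic experiment (not the paper's code): the same symmetric positive-definite stand-in for a stiffness matrix is solved by explicit inversion, LU factorisation and Cholesky factorisation, the last of which exploits symmetry and needs roughly half the flops of LU.

```python
import time
import numpy as np
from scipy import linalg

n = 2000
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)            # stand-in for an SPD stiffness matrix
f = rng.standard_normal(n)             # load vector

for name, solver in [
    ("explicit inverse", lambda: np.linalg.inv(K) @ f),
    ("LU solve",         lambda: linalg.lu_solve(linalg.lu_factor(K), f)),
    ("Cholesky solve",   lambda: linalg.cho_solve(linalg.cho_factor(K), f)),
]:
    t0 = time.perf_counter()
    u = solver()
    print(f"{name:18s} {time.perf_counter() - t0:6.3f} s  residual "
          f"{np.linalg.norm(K @ u - f):.2e}")
```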

  9. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    Science.gov (United States)

    Boyd, John P.

    2013-08-01

    A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ Σ_{j=0}^{N} a_j cos(jt) + Σ_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c, s), where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations. In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements...
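
    The complex companion-matrix idea can be sketched compactly: substituting z = exp(it) turns f_N(t), after multiplication by z^N, into a degree-2N polynomial in z whose coefficients are complex combinations of the Fourier coefficients; its roots are obtained as companion-matrix eigenvalues (numpy.roots does exactly this), and the roots on the unit circle give the real zeros t. The sketch below follows that recipe for a small test polynomial and is not the author's code.

```python
import numpy as np

def trig_roots(a, b):
    """a[0..N]: cosine coefficients, b[1..N]: sine coefficients (b[0] ignored)."""
    N = len(a) - 1
    h = np.zeros(2 * N + 1, dtype=complex)        # h[k] multiplies z**(k - N)
    h[N] = a[0]
    for j in range(1, N + 1):
        h[N + j] = 0.5 * (a[j] - 1j * b[j])       # coefficient of z**j
        h[N - j] = 0.5 * (a[j] + 1j * b[j])       # coefficient of z**(-j)
    z = np.roots(h[::-1])                         # np.roots wants highest degree first
    on_circle = np.abs(np.abs(z) - 1.0) < 1e-8    # keep roots on the unit circle
    return np.sort(np.mod(np.angle(z[on_circle]), 2 * np.pi))

# Test: f(t) = cos(2t) - 0.3 sin(t); check the residuals at the returned roots.
a = [0.0, 0.0, 1.0]
b = [0.0, -0.3, 0.0]
roots = trig_roots(a, b)
f = lambda t: sum(a[j] * np.cos(j * t) for j in range(3)) + sum(b[j] * np.sin(j * t) for j in range(3))
print(roots)
print(np.max(np.abs([f(t) for t in roots])))      # residuals should be near machine precision
```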

  10. A Control Method of Current Type Matrix Converter for Plasma Control Coil Power Supply

    International Nuclear Information System (INIS)

    Shimada, K.; Matsukawa, M.; Kurihara, K.; Jun-ichi Itoh

    2006-01-01

    On the path towards a tokamak fusion reactor, the control of instabilities of high-β plasmas, such as the neoclassical tearing mode (NTM) and the resistive wall mode (RWM), is a key issue for steady-state sustainment. One of the proposed methods to suppress the RWM is to generate, in a coil (sector coil) wound spirally on the plasma vacuum vessel, an AC current whose phase acts to reduce the RWM growth. To stabilize the RWM, precise and fast real-time feedback control of the magnetic field with proper amplitude and frequency is necessary. This implies that an appropriate power supply dedicated to such an application needs to be developed. A matrix converter, as one of the power supply candidates for this purpose, could provide a solution. The matrix converter, categorized as an AC/AC direct converter composed of nine bi-directional current switches, has the great advantage that a large energy storage element is unnecessary, in contrast to a standard AC/AC indirect converter composed of an AC/DC converter and a DC/AC inverter. It is also advantageous in terms of cost and size. A voltage-type matrix converter has recently become available on the market, while a current-type matrix converter, which is advantageous for fast control of the large-inductance coil current, has been unavailable. Against this background, we propose a new current-type matrix converter and its control method, applicable to a fast-response power supply for suppressing plasma instabilities. Since this converter requires high-accuracy control, a three-phase switching method using the middle phase is adopted for the gate control to reduce voltage and current waveform distortion. The control system is composed of a VME-bus board with a DSP (Digital Signal Processor) and an FPGA (Field Programmable Gate Array) for high-speed calculation and control. This paper describes the control method of a current-type matrix converter

  11. The use of research questionnaires with hearing impaired adults: online vs. paper-and-pencil administration

    Directory of Open Access Journals (Sweden)

    Thorén Elisabet

    2012-10-01

    Full Text Available Abstract Background When evaluating hearing rehabilitation, it is reasonable to use self-report questionnaires as outcome measures. Questionnaires used in audiological research are developed and validated for the paper-and-pencil format. As computer and Internet use is increasing, standardized questionnaires used in the audiological context should be evaluated to determine the viability of the online administration format. The aim of this study was to compare online versus paper-and-pencil administration of four standardised questionnaires used in hearing research and in the clinic. We included the Hearing Handicap Inventory for the Elderly (HHIE, the International Outcome Inventory for Hearing Aids (IOI-HA, Satisfaction with Amplification in Daily Life (SADL, and the Hospital Anxiety and Depression Scale (HADS. Methods A cross-over design was used by randomly letting the participants complete the questionnaires either online or on paper. After 3 weeks the participants filled out the same questionnaires again, but in the other format. A total of 65 hearing-aid users were recruited from a hearing clinic to participate on a voluntary basis, and of these 53 completed both versions of the questionnaires. Results A significant main effect of format was found for the HHIE. Conclusions For three of the four included questionnaires the participants’ scores remained consistent across administrations and formats. For the fourth included questionnaire (HHIE a significant difference of format with a small effect size was found. The relevance of the difference in scores between the formats depends on the context in which the questionnaire is used. On balance, it is recommended that the administration format remain stable across assessment points.

  12. Label-Free Electrochemical Detection of the Specific Oligonucleotide Sequence of Dengue Virus Type 1 on Pencil Graphite Electrodes

    Science.gov (United States)

    Souza, Elaine; Nascimento, Gustavo; Santana, Nataly; Ferreira, Danielly; Lima, Manoel; Natividade, Edna; Martins, Danyelly; Lima-Filho, José

    2011-01-01

    A biosensor that relies on the adsorption immobilization of the 18-mer single-stranded nucleic acid related to dengue virus gene 1 on activated pencil graphite was developed. Hybridization between the probe and its complementary oligonucleotides (the target) was investigated by monitoring guanine oxidation by differential pulse voltammetry (DPV). The pencil graphite electrode was made of ordinary pencil lead (type 4B). The polished surface of the working electrode was activated by applying a potential of 1.8 V for 5 min. Afterward, the dengue oligonucleotides probe was immobilized on the activated electrode by applying 0.5 V to the electrode in 0.5 M acetate buffer (pH 5.0) for 5 min. The hybridization process was carried out by incubating at the annealing temperature of the oligonucleotides. A time of five minutes and concentration of 1 μM were found to be the optimal conditions for probe immobilization. The electrochemical detection of annealing between the DNA probe (TS-1P) immobilized on the modified electrode, and the target (TS-1T) was achieved. The target could be quantified in a range from 1 to 40 nM with good linearity and a detection limit of 0.92 nM. The specificity of the electrochemical biosensor was tested using non-complementary sequences of dengue virus 2 and 3. PMID:22163916

  13. Application of the correction factor for radiation quality Kq in dosimetry with pencil-type ionization chambers using a Tandem system

    International Nuclear Information System (INIS)

    Fontes, Ladyjane Pereira; Potiens, Maria da Penha Albuquerque

    2017-01-01

    The pencil-type ionization chamber, widely used in computed tomography (CT) dosimetry, is a measuring instrument that has a cylindrical shape and provides a uniform response independent of the angle of incidence of the ionizing radiation. Calibration and measurements performed with the pencil-type ionization chamber are done in terms of the air kerma-length product (Pk,l), and values are given in Gy.cm. To obtain the values of Pk,l during clinical measurements, the readings performed with the ionization chamber are multiplied by the calibration coefficient (Nk,l) and the correction factor for radiation quality (Kq), which are given in the calibration certificates of the chambers. The correction factor for radiation quality Kq is applied as a function of the effective energy of the beam, which is determined by the half-value layer (HVL) calculation. In order to estimate the HVL values in this work, a Tandem system made up of cylindrical aluminum and PMMA absorber layers was used as a low-cost and easy-to-apply method. From the Tandem curve, it was possible to construct the calibration curve and obtain the appropriate Kq for the beam of the computed tomography equipment studied. (author)
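
    As a tiny worked example of the conversion described above (all numbers are made up, not taken from the paper or from any calibration certificate), the clinical value of Pk,l is simply the product of the reading, the calibration coefficient and the quality correction factor:

```python
# Hypothetical example of applying Nk,l and Kq to a pencil-chamber reading.
reading = 1.23          # instrument reading, in the unit the electrometer reports
N_kl    = 1.05          # calibration coefficient from the calibration certificate (Gy.cm per reading unit)
K_q     = 0.98          # radiation-quality correction factor chosen via the HVL / Tandem curve
P_kl    = reading * N_kl * K_q   # air kerma-length product, Gy.cm
print(f"P_kl = {P_kl:.3f} Gy.cm")
```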

  14. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    Science.gov (United States)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  15. Matrix Methods for Solving Hartree-Fock Equations in Atomic Structure Calculations and Line Broadening

    Directory of Open Access Journals (Sweden)

    Thomas Gomez

    2018-04-01

    Full Text Available Atomic structure of N-electron atoms is often determined by solving the Hartree-Fock equations, which are a set of integro-differential equations. The integral part of the Hartree-Fock equations treats electron exchange, but the Hartree-Fock equations are not often treated as an integro-differential equation. The exchange term is often approximated as an inhomogeneous or an effective potential so that the Hartree-Fock equations become a set of ordinary differential equations (which can be solved using the usual shooting methods). Because the Hartree-Fock equations are solved by iterative refinement, the inhomogeneous term relies on the previous guess of the wavefunction. In addition, there are numerical complications associated with solving inhomogeneous differential equations. This work uses matrix methods to solve the Hartree-Fock equations as an integro-differential equation. It is well known that a derivative operator can be expressed as a matrix made of finite-difference coefficients; energy eigenvalues and eigenvectors can then be obtained by using linear-algebra packages. The integral (exchange) part of the Hartree-Fock equation can be approximated as a sum and written as a matrix. The Hartree-Fock equations can thus be solved as a matrix that is the sum of the differential and integral matrices. We compare calculations using this method against experiment and standard atomic structure calculations. This matrix method can also be used to solve for free-electron wavefunctions, thus improving how the atoms and free electrons interact. This technique is important for spectral line broadening in two ways: it improves the atomic structure calculations, and it improves the motion of the plasma electrons that collide with the atom.
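
    A minimal sketch of the matrix idea described above, assuming a simple hydrogen-like local potential in place of the full Hartree-Fock exchange matrix: the second-derivative operator is written as a finite-difference matrix, the potential goes on the diagonal, and the eigenvalues and eigenvectors come from a standard linear-algebra routine (Python/NumPy).

```python
import numpy as np

n, r_max = 1500, 50.0                 # grid points and box size (atomic units), illustrative values
h = r_max / (n + 1)
r = h * np.arange(1, n + 1)

# second-derivative operator from 3-point central finite differences
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

V = -1.0 / r                          # hydrogen-like potential, l = 0; an exchange term would be added here as a (dense) matrix
H = -0.5 * D2 + np.diag(V)            # Hamiltonian matrix

E, U = np.linalg.eigh(H)              # bound states are the lowest (negative) eigenvalues
print(E[:3])                          # roughly -0.5, -0.125, -0.056 for hydrogen
```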

  16. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

    Full Text Available This research aims to formulate and select the strategy of BMT Al-Ittihad Rumbai to face the changing business environment, both the internal environment, such as organizational resources, finance and members, and the external environment, such as competitors, the economy, politics and others. This research used the analysis of EFAS, IFAS, the IE Matrix, the SWOT-8K Matrix, the SPACE Matrix and the TWOS Matrix. Our hope is that this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was obtained using a purposive sampling technique, namely the manager and leader of BMT Al-Ittihad Rumbai Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is growth, stabilization and aggressive, respectively. The strategies chosen after using the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  17. Impact of beam angle choice on pencil beam scanning breath-hold proton therapy for lung lesions

    DEFF Research Database (Denmark)

    Gorgisyan, Jenny; Perrin, Rosalind; Lomax, Antony J

    2017-01-01

    INTRODUCTION: The breath-hold technique inter alia has been suggested to mitigate the detrimental effect of motion on pencil beam scanned (PBS) proton therapy dose distributions. The aim of this study was to evaluate the robustness of incident proton beam angles to day-to-day anatomical variation...

  18. Computed tomography on a defective CANDU fuel pencil end cap

    International Nuclear Information System (INIS)

    Lupton, L.R.

    1985-09-01

    Five tomographic slices through a defective end cap from a CANDU fuel pencil have been generated using a Co-60 source and a first generation translate-rotate tomography scanner. An anomaly in the density distribution that is believed to have resulted from the defect has been observed. However, with the 0.30 mm spatial resolution used, it has not been possible to state unequivocally whether the change in density is caused by a defect in the weld or a statistical anomaly in the data. It is concluded that a microtomography system, with a spatial resolution in the range of 0.1 mm, could detect the flaw

  19. The current matrix elements from HAL QCD method

    Science.gov (United States)

    Watanabe, Kai; Ishii, Noriyoshi

    2018-03-01

    The HAL QCD method is a method to construct a potential (the HAL QCD potential) that reproduces the NN scattering phase shift faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and leaving only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (the two-body current). Though the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model, which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to the external field in the same way as the original two-channel coupling model.

  20. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    Full Text Available A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free, but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached in the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.
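
    The abstract does not spell out the update rules; the Python sketch below implements a generic member of the same family (a PRP direction combined with a hyperplane-projection step and a relaxation factor, plus a simple restart safeguard), under the assumption of a monotone mapping F and a convex feasible set handled through a projection operator. It is a sketch of the approach, not the paper's exact algorithm.

```python
import numpy as np

def prp_projection_solve(F, x0, project, gamma=1.0, sigma=1e-4,
                         rho=0.5, tol=1e-8, max_iter=2000):
    """Derivative- and matrix-free solver sketch for monotone constrained equations
    F(x) = 0, x in C, combining a PRP direction with a hyperplane projection step."""
    x = project(x0)
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        # backtracking line search: accept alpha with -F(x + alpha d)^T d >= sigma * alpha * ||d||^2
        alpha = 1.0
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * (d @ d) or alpha < 1e-12:
                break
            alpha *= rho
        if np.linalg.norm(Fz) <= tol:
            return z
        # hyperplane projection step, relaxed by gamma
        lam = (Fz @ (x - z)) / (Fz @ Fz)
        x_new = project(x - gamma * lam * Fz)
        Fx_new = F(x_new)
        beta = (Fx_new @ (Fx_new - Fx)) / (Fx @ Fx)   # Polak-Ribiere-Polyak coefficient
        d = -Fx_new + beta * d
        if Fx_new @ d > -1e-10 * (Fx_new @ Fx_new):   # safeguard: restart if not a descent direction
            d = -Fx_new
        x, Fx = x_new, Fx_new
        if np.linalg.norm(Fx) <= tol:
            return x
    return x

# toy usage: F(x) = x - exp(-x) on the nonnegative orthant (solution ~0.567 per component)
sol = prp_projection_solve(lambda x: x - np.exp(-x), np.zeros(3), lambda v: np.maximum(v, 0.0))
```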

  1. A 2D/1D coupling neutron transport method based on the matrix MOC and NEM methods

    International Nuclear Information System (INIS)

    Zhang, H.; Zheng, Y.; Wu, H.; Cao, L.

    2013-01-01

    A new 2D/1D coupling method based on the matrix MOC method (MMOC) and the nodal expansion method (NEM) is proposed for solving the three-dimensional heterogeneous neutron transport problem. The MMOC method, used for the radial two-dimensional calculation, constructs a response matrix between source and flux with only one sweep and then solves the linear system by using the restarted GMRES algorithm instead of the traditional trajectory-sweeping process during the within-group iteration for the angular flux update. Long characteristics are generated by using the customization of the commercial software AutoCAD. A one-dimensional diffusion calculation is carried out in the axial direction by employing the NEM method. The 2D and 1D solutions are coupled through the transverse leakage terms. The 3D CMFD method is used to ensure the global neutron balance and to adjust the different convergence properties of the radial and axial solvers. A computational code is developed based on these theories. Two benchmarks are calculated to verify the coupling method and the code. It is observed that the corresponding numerical results agree well with references, which indicates that the new method is capable of solving the 3D heterogeneous neutron transport problem directly. (authors)
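
    As a schematic illustration of the linear-solve step mentioned above (restarted GMRES applied to a response-matrix system), the Python/SciPy sketch below uses a hypothetical sparse stand-in for the MOC response matrix; it is not the MMOC implementation itself.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 1000
R = diags([0.25, 0.4, 0.25], offsets=[-1, 0, 1], shape=(n, n))  # hypothetical response operator
q = np.ones(n)                                                  # fixed within-group source

# once the response matrix links source to flux, the within-group system (I - R) phi = q
# can be handed to restarted GMRES instead of repeating trajectory sweeps
A = LinearOperator((n, n), matvec=lambda v: v - R @ v)          # matrix-free action of (I - R)
phi, info = gmres(A, q, restart=30, maxiter=500)
assert info == 0
```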

  2. A 2D/1D coupling neutron transport method based on the matrix MOC and NEM methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H.; Zheng, Y.; Wu, H.; Cao, L. [School of Nuclear Science and Technology, Xi' an Jiaotong University, No. 28, Xianning West Road, Xi' an, Shaanxi 710049 (China)

    2013-07-01

    A new 2D/1D coupling method based on the matrix MOC method (MMOC) and the nodal expansion method (NEM) is proposed for solving the three-dimensional heterogeneous neutron transport problem. The MMOC method, used for the radial two-dimensional calculation, constructs a response matrix between source and flux with only one sweep and then solves the linear system by using the restarted GMRES algorithm instead of the traditional trajectory-sweeping process during the within-group iteration for the angular flux update. Long characteristics are generated by using the customization of the commercial software AutoCAD. A one-dimensional diffusion calculation is carried out in the axial direction by employing the NEM method. The 2D and 1D solutions are coupled through the transverse leakage terms. The 3D CMFD method is used to ensure the global neutron balance and to adjust the different convergence properties of the radial and axial solvers. A computational code is developed based on these theories. Two benchmarks are calculated to verify the coupling method and the code. It is observed that the corresponding numerical results agree well with references, which indicates that the new method is capable of solving the 3D heterogeneous neutron transport problem directly. (authors)

  3. Evaluating the variation of response of ionizing chamber type pencil for different collimators

    International Nuclear Information System (INIS)

    Andrade, Lucio das Chagas de; Peixoto, Jose Guilherme Pereira

    2014-01-01

    The pencil ionization chamber is used in dosimetric procedures for X-ray beams in the energy range of a scanner. Calibration of such a chamber is still being extensively studied because the procedure differs from that used for other chambers. To study the variation of the chamber response for different collimators, three different collimators were analyzed. It was found that the 30 mm opening showed the best response among them. (author)

  4. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    Science.gov (United States)

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

    Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. influence of the endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors require also neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with both calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem, still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
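
    For readers unfamiliar with the EMA-style metric referred to above, the short Python sketch below computes the internal-standard normalised matrix factor for each lot of blank matrix and its CV%, using made-up peak areas; the 15% limit mentioned in the comment is the commonly cited guideline expectation, not a result of this paper.

```python
import numpy as np

# hypothetical peak areas from 6 lots of blank matrix spiked after extraction
analyte_matrix = np.array([98.1, 101.4, 95.3, 99.8, 102.2, 97.5])   # analyte areas per lot
is_matrix      = np.array([50.2, 51.0, 48.9, 50.5, 51.8, 49.7])     # internal-standard areas per lot
analyte_neat   = 100.4                                              # mean analyte area in neat solution
is_neat        = 50.1                                               # mean IS area in neat solution

mf_analyte = analyte_matrix / analyte_neat          # matrix factor of the analyte, per lot
mf_is      = is_matrix / is_neat                    # matrix factor of the internal standard, per lot
mf_norm    = mf_analyte / mf_is                     # IS-normalised matrix factor, per lot

cv_percent = 100.0 * mf_norm.std(ddof=1) / mf_norm.mean()
print(f"CV of IS-normalised MF = {cv_percent:.1f}%")  # commonly expected to stay below 15%
```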

  5. Impact of Intrafraction and Residual Interfraction Effect on Prostate Proton Pencil Beam Scanning

    International Nuclear Information System (INIS)

    Tang, Shikui; Deville, Curtiland; Tochner, Zelig; Wang, Ken Kang-Hsin; McDonough, James; Vapiwala, Neha; Both, Stefan

    2014-01-01

    Purpose: To quantitatively evaluate the impact of interplay effect and plan robustness associated with intrafraction and residual interfraction prostate motion for pencil beam scanning proton therapy. Methods and Materials: Ten prostate cancer patients with weekly verification CTs underwent pencil beam scanning with the bilateral single-field uniform dose (SFUD) modality. A typical field had 10-15 energy layers and 500-1000 spots. According to their treatment logs, each layer delivery time was <1 s, with average time to change layers of approximately 8 s. Real-time intrafraction prostate motion was determined from our previously reported prospective study using Calypso beacon transponders. Prostate motion and the beam delivery sequence of the worst-case scenario patient were synchronized to calculate the “true” dose received by the prostate. The intrafraction effect was examined by applying the worst-case scenario prostate motion on the planning CT, and the residual interfraction effect was examined on the basis of weekly CT scans. The resultant dose variation of target and critical structures was examined to evaluate the interplay effect. Results: The clinical target volume (CTV) coverage was degraded because of both effects. The CTV D99 (percentage dose to 99% of the CTV) varied up to 10% relative to the initial plan in individual fractions. However, over the entire course of treatment the total dose degradation of D99 was 2%-3%, with a standard deviation of <2%. Absolute differences between SFUD, intensity modulated proton therapy, and one-field-per-day SFUD plans were small. The intrafraction effect dominated over the residual interfraction effect for CTV coverage. Mean dose to the anterior rectal wall increased approximately 10% because of combined residual interfraction and intrafraction effects, the interfraction effect being dominant. Conclusions: Both intrafraction and residual interfraction prostate motion degrade CTV coverage within a clinically

  6. Impact of Intrafraction and Residual Interfraction Effect on Prostate Proton Pencil Beam Scanning

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Shikui, E-mail: shktang@gmail.com [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States); ProCure Proton Therapy Center, Somerset, New Jersey (United States); Deville, Curtiland; Tochner, Zelig [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States); Wang, Ken Kang-Hsin [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States); Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland (United States); McDonough, James; Vapiwala, Neha; Both, Stefan [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States)

    2014-12-01

    Purpose: To quantitatively evaluate the impact of interplay effect and plan robustness associated with intrafraction and residual interfraction prostate motion for pencil beam scanning proton therapy. Methods and Materials: Ten prostate cancer patients with weekly verification CTs underwent pencil beam scanning with the bilateral single-field uniform dose (SFUD) modality. A typical field had 10-15 energy layers and 500-1000 spots. According to their treatment logs, each layer delivery time was <1 s, with average time to change layers of approximately 8 s. Real-time intrafraction prostate motion was determined from our previously reported prospective study using Calypso beacon transponders. Prostate motion and the beam delivery sequence of the worst-case scenario patient were synchronized to calculate the “true” dose received by the prostate. The intrafraction effect was examined by applying the worst-case scenario prostate motion on the planning CT, and the residual interfraction effect was examined on the basis of weekly CT scans. The resultant dose variation of target and critical structures was examined to evaluate the interplay effect. Results: The clinical target volume (CTV) coverage was degraded because of both effects. The CTV D99 (percentage dose to 99% of the CTV) varied up to 10% relative to the initial plan in individual fractions. However, over the entire course of treatment the total dose degradation of D99 was 2%-3%, with a standard deviation of <2%. Absolute differences between SFUD, intensity modulated proton therapy, and one-field-per-day SFUD plans were small. The intrafraction effect dominated over the residual interfraction effect for CTV coverage. Mean dose to the anterior rectal wall increased approximately 10% because of combined residual interfraction and intrafraction effects, the interfraction effect being dominant. Conclusions: Both intrafraction and residual interfraction prostate motion degrade CTV coverage within a

  7. Evaluation of the streaming-matrix method for discrete-ordinates duct-streaming calculations

    International Nuclear Information System (INIS)

    Clark, B.A.; Urban, W.T.; Dudziak, D.J.

    1983-01-01

    A new deterministic streaming technique called the Streaming Matrix Hybrid Method (SMHM) is applied to two realistic duct-shielding problems. The results are compared to standard discrete-ordinates and Monte Carlo calculations. The SMHM shows promise as an alternative deterministic streaming method to standard discrete-ordinates

  8. 75 FR 38980 - Certain Cased Pencils From the People's Republic of China: Final Results of the Antidumping Duty...

    Science.gov (United States)

    2010-07-07

    ... erasers, etc.) in any fashion, and either sharpened or unsharpened. The pencils subject to the order are... in room 1117 in the main Department building, and is accessible on the web at http://www.ia.ita.doc...

  9. A Lexicographic Method for Matrix Games with Payoffs of Triangular Intuitionistic Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Jiang-Xia Nan

    2010-09-01

    Full Text Available The intuitionistic fuzzy set (IF-set) has not yet been applied to matrix game problems since it was introduced by K. T. Atanassov. The aim of this paper is to develop a methodology for solving matrix games with payoffs of triangular intuitionistic fuzzy numbers (TIFNs). Firstly, the concept of TIFNs and their arithmetic operations and cut sets are introduced, as well as the ranking order relations. Secondly, the concept of solutions for matrix games with payoffs of TIFNs is defined. A lexicographic methodology is developed to determine the solutions of matrix games with payoffs of TIFNs for both players through solving a pair of bi-objective linear programming models derived from two new auxiliary intuitionistic fuzzy programming models. The proposed method is illustrated with a numerical example.

  10. 78 FR 42932 - Certain Cased Pencils From the People's Republic of China: Final Results of Antidumping Duty...

    Science.gov (United States)

    2013-07-18

    ... the People's Republic of China: Final Results of Antidumping Duty Administrative Review and... (pencils) from the People's Republic of China (PRC). The period of review (POR) is December 1, 2010, through November 30, 2011. The review covers one exporter of subject merchandise, Beijing Fila Dixon...

  11. Technical Note: Spot characteristic stability for proton pencil beam scanning.

    Science.gov (United States)

    Chen, Chin-Cheng; Chang, Chang; Moyers, Michael F; Gao, Mingcheng; Mah, Dennis

    2016-02-01

    The spot characteristics for proton pencil beam scanning (PBS) were measured and analyzed over a 16 month period, which included one major site configuration update and six cyclotron interventions. The results provide a reference to establish the quality assurance (QA) frequency and tolerances for proton pencil beam scanning. A simple treatment plan was generated to produce an asymmetric 9-spot pattern distributed throughout a field of 16 × 18 cm for each of 18 proton energies (100.0-226.0 MeV). The delivered fluence distribution in air was measured using a phosphor-screen-based CCD camera at three planes perpendicular to the beam line axis (x-ray imaging isocenter and up/down stream 15.0 cm). The measured fluence distributions for each energy were analyzed using in-house programs which calculated the spot sizes and positional deviations of the Gaussian-shaped spots. Compared to the spot characteristic data installed into the treatment planning system, the 16-month averaged deviations of the measured spot sizes at the isocenter plane were 2.30% and 1.38% in the IEC gantry x and y directions, respectively. The maximum deviation was 12.87% while the minimum deviation was 0.003%, both at the upstream plane. After the collinearity of the proton and x-ray imaging system isocenters was optimized, the positional deviations of the spots were all within 1.5 mm for all three planes. During the site configuration update, spot positions were found to deviate by 6 mm until the tuning parameters file was properly restored. For this beam delivery system, it is recommended to perform a spot size and position check at least monthly and any time after a database update or cyclotron intervention occurs. Spot position deviations remained <2 mm at any plane up/down stream 15 cm from the isocenter.

  12. Investigation of 0.38 THz backward-wave oscillator based on slotted sine waveguide and pencil electron beam

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Luqi; Wei, Yanyu; Wang, Bing; Shen, Wenan; Xu, Jin; Gong, Yubin [National Key Laboratory of Science and Technology on Vacuum Electronics, School of Physical Electronics, University of Electronic Science and Technology of China, Chengdu 610054 (China); Park, Gun-Sik [The Department of Physics and Astronomy, Seoul National University, Seoul 151-747 (Korea, Republic of)

    2016-03-15

    A novel backward wave oscillator (BWO) is presented by utilizing a slotted sine waveguide with a pencil electron beam to produce high-power terahertz waves. The high frequency characteristics, including dispersion properties, interaction impedances, and transmission characteristics of the slotted sine waveguide, are analyzed in detail. The high frequency system, including the output coupler, slow wave structure (SWS), and reflector, is designed properly. A 3-D particle-in-cell model is applied to predict the device performance of the BWO based on the novel SWS. The investigation results demonstrate that this device can generate over 8.05 W output power in the frequency range of 363.4–383.8 GHz by using a 30 mA pencil electron beam and adjusting the beam voltage from 20 kV to 32 kV.

  13. The Split Coefficient Matrix method for hyperbolic systems of gasdynamic equations

    Science.gov (United States)

    Chakravarthy, S. R.; Anderson, D. A.; Salas, M. D.

    1980-01-01

    The Split Coefficient Matrix (SCM) finite difference method for solving hyperbolic systems of equations is presented. This new method is based on the mathematical theory of characteristics. The development of the method from characteristic theory is presented. Boundary point calculation procedures consistent with the SCM method used at interior points are explained. The split coefficient matrices that define the method for steady supersonic and unsteady inviscid flows are given for several examples. The SCM method is used to compute several flow fields to demonstrate its accuracy and versatility. The similarities and differences between the SCM method and the lambda-scheme are discussed.

  14. Assessing Energy Intake in Daily Life: Signal-Contingent Smartphone Application Versus Event-Contingent Paper and Pencil Estimated Diet Diary

    Directory of Open Access Journals (Sweden)

    Saskia Wouters

    2016-12-01

    Full Text Available Objectives: Investigating between-meal snack intake and its associated determinants such as emotions and stress presents challenges because both vary from moment to moment throughout the day. A smartphone application (app) was developed to map momentary between-meal snack intake and its associated determinants in the context of daily life. The aim of this study was to compare energy intake reported with the signal-contingent app and with an event-contingent paper and pencil diet diary. Methods: In a counterbalanced, cross-sectional design, adults (N = 46) from the general population reported between-meal snack intake during four consecutive days with the app and four consecutive days with a paper and pencil diet diary. A 10-day interval was applied between the two reporting periods. Multilevel regression analyses were conducted to compare both instruments on reported momentary and daily energy intake from snacks. Results: Results showed no significant difference (B = 11.84, p = .14) in momentary energy intake from snacks between the two instruments. However, a significant difference (B = –105.89, p < .01) was found in energy intake from total daily snack consumption. Conclusions: As both instruments were comparable in assessing energy intake at the momentary level, research purposes will largely determine the sampling procedure of choice. When momentary associations across time are the interest of study, a signal-contingent sampling procedure may be a suitable method. Since the compared instruments differed on two main features (i.e. the sampling procedure and the device used), it is difficult to disentangle which instrument was the most accurate in assessing daily energy intake.

  15. Invariant Imbedding T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    Science.gov (United States)

    Pelissier, C.; Clune, T.; Kuo, K. S.; Munchak, S. J.; Adams, I. S.

    2017-12-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM & IITM+SOV software to the community under an open source license.

  16. Cyclic voltammetry deposition of copper nanostructure on MWCNTs modified pencil graphite electrode: An ultra-sensitive hydrazine sensor

    Energy Technology Data Exchange (ETDEWEB)

    Heydari, Hamid [Faculty of Sciences, Razi University, Kermanshah (Iran, Islamic Republic of); Gholivand, Mohammad B., E-mail: mbgholivand@razi.ac.ir [Faculty of Sciences, Razi University, Kermanshah (Iran, Islamic Republic of); Abdolmaleki, Abbas [Department of Chemistry, Malek Ashtar University of Technology, Tehran (Iran, Islamic Republic of)

    2016-09-01

    In this study, copper (Cu) nanostructures (CuNS) were electrochemically deposited on a multiwall carbon nanotube (MWCNT) film modified pencil graphite electrode (MWCNTs/PGE) by the cyclic voltammetry method to fabricate a CuNS–MWCNTs composite sensor (CuNS–MWCNT/PGE) for hydrazine detection. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) were used for the characterization of the CuNS on the MWCNTs matrix. The CuNS-MWCNTs composite was characterized with cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The preliminary studies showed that the proposed sensor has a synergistic electrocatalytic activity for the oxidation of hydrazine in phosphate buffer. The catalytic currents of square wave voltammetry had a linear correlation with the hydrazine concentration in the range of 0.1 to 800 μM with a low detection limit of 70 nM. Moreover, the amperometric oxidation current exhibited a linear correlation with hydrazine concentration in the concentration range of 50–800 μM with a detection limit of 4.3 μM. The proposed electrode was used for the determination of hydrazine in real samples and the results were promising. Empirical results also indicated that the sensor had good reproducibility and long-term stability, and that the response of the sensor to hydrazine was free from interferences. Moreover, the proposed sensor benefits from simple preparation, low cost, outstanding sensitivity, selectivity, and reproducibility for hydrazine determination. - Highlights: • The copper nanostructures (CuNS) were prepared by cyclic voltammetry deposition. • The CuNS-MWCNT/PGE sensor shows high activity toward hydrazine (N2H4). • The proposed sensor exhibits a wide linear range (0.1 to 800 μM), low detection limit (70 nM), and high sensitivity and stability for hydrazine.

  17. New in-situ synthesis method of magnesium matrix composites reinforced with TiC particulates

    Directory of Open Access Journals (Sweden)

    Zhang Xiuqing

    2006-12-01

    Full Text Available Magnesium matrix composites reinforced with TiC particulates were prepared using a new in-situ synthesis method of remelting and dilution, and measurements were performed on the composites. The results of X-ray diffraction (XRD) analysis confirmed that TiC particulates were synthesized during the sintering process and that they were retained in the magnesium matrix composites after the remelting and dilution processing. From the microstructure characterization and electron probe microanalysis (EPMA), we could see that fine TiC particulates were distributed uniformly in the matrix material.

  18. Character expansion methods for matrix models of dually weighted graphs

    International Nuclear Information System (INIS)

    Kazakov, V.A.; Staudacher, M.; Wynter, T.

    1996-01-01

    We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problem of phase transitions from random to flat lattices. (orig.). With 4 figs

  19. Measurement of stray neutron doses inside the treatment room from a proton pencil beam scanning system

    Czech Academy of Sciences Publication Activity Database

    Mojzeszek, N.; Farah, J.; Klodowska, M.; Ploc, Ondřej; Stolarczyk, L.; Waligorski, M. P. R.; Olko, P.

    2017-01-01

    Roč. 34, č. 2 (2017), s. 80-84 ISSN 1120-1797 Institutional support: RVO:61389005 Keywords: secondary neutrons * proton therapy * pencil beam scanning systems * out-of-field doses * stray neutron doses * TEPC Subject RIV: FP - Other Medical Disciplines OBOR OECD: Radiology, nuclear medicine and medical imaging Impact factor: 1.990, year: 2016

  20. Linear programming models and methods of matrix games with payoffs of triangular fuzzy numbers

    CERN Document Server

    Li, Deng-Feng

    2016-01-01

    This book addresses two-person zero-sum finite games in which the payoffs in any situation are expressed with fuzzy numbers. The purpose of this book is to develop a suite of effective and efficient linear programming models and methods for solving matrix games with payoffs in fuzzy numbers. Divided into six chapters, it discusses the concepts of solutions of matrix games with payoffs of intervals, along with their linear programming models and methods. Furthermore, it is directly relevant to the research field of matrix games under uncertain economic management. The book offers a valuable resource for readers involved in theoretical research and practical applications from a range of different fields including game theory, operational research, management science, fuzzy mathematical programming, fuzzy mathematics, industrial engineering, business and social economics.

  1. Algebraic method for analysis of nonlinear systems with a normal matrix

    International Nuclear Information System (INIS)

    Konyaev, Yu.A.; Salimova, A.F.

    2014-01-01

    A promising method has been proposed for analyzing a class of quasilinear nonautonomous systems of differential equations whose matrix can be represented as a sum of nonlinear normal matrices, which makes it possible to analyze stability without using the Lyapunov functions.

  2. Improved determination of hadron matrix elements using the variational method

    International Nuclear Information System (INIS)

    Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ.

    2015-11-01

    The extraction of hadron form factors in lattice QCD using the standard two- and three-point correlator functions has its limitations. One of the most commonly studied sources of systematic error is excited state contamination, which occurs when correlators are contaminated with results from higher energy excitations. We apply the variational method to calculate the axial vector current g_A and compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

  3. Legendre Wavelet Operational Matrix Method for Solution of Riccati Differential Equation

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2014-01-01

    Full Text Available A Legendre wavelet operational matrix method (LWM) is presented for the solution of nonlinear fractional-order Riccati differential equations, which have a variety of applications in quantum chemistry and quantum mechanics. The fractional-order Riccati differential equations are converted into a system of algebraic equations using the Legendre wavelet operational matrix. Solutions given by the proposed scheme are more accurate and reliable, and they are compared with recently developed numerical, analytical, and stochastic approaches. The comparison shows that the proposed LWM approach gives better performance and requires less computational effort for obtaining accurate solutions. Furthermore, the existence and uniqueness of the solution of the proposed problem are established, and the convergence condition is verified.

  4. Matrix-variational method: an efficient approach to bound state eigenproblems

    International Nuclear Information System (INIS)

    Gerck, E.; d'Oliveira, A.B.

    1978-11-01

    A new matrix-variational method for solving the radial Schroedinger equation is described. It consists of obtaining an adjustable matrix formulation for the boundary value differential equation, using a set of three functions that obey the boundary conditions. These functions are linearly combined at every three adjacent points to fit the true unknown eigenfunction by a variational technique. With the use of a new class of central differences, the exponential differences, tridiagonal or bidiagonal matrices are obtained. In the bidiagonal case, closed form expressions for the eigenvalues are given for the Coulomb, harmonic, linear, square-root and logarithmic potentials. The values obtained are within 0.1% of the true numerical value. The eigenfunction can be calculated using the eigenvectors to reconstruct the linear combination of the set of functions.

  5. Computational evaluation of a pencil ionization chamber in a standard diagnostic radiology beam

    International Nuclear Information System (INIS)

    Mendonca, Dalila Souza Costa; Neves, Lucio Pereira; Perini, Ana Paula; Belinato, Walmir

    2016-01-01

    In this work a pencil ionization chamber was evaluated. The evaluation consisted of determining the influence of the ionization chamber components on its response. For this purpose, Monte Carlo simulations and the spectrum of the standard diagnostic radiology beam (RQR5) were utilized. The results showed that the ionization chamber components had no significant influence on the chamber response. Therefore, this ionization chamber is a good alternative for dosimetry in diagnostic radiology. (author)

  6. Instructional Supports for Representational Fluency in Solving Linear Equations with Computer Algebra Systems and Paper-and-Pencil

    Science.gov (United States)

    Fonger, Nicole L.; Davis, Jon D.; Rohwer, Mary Lou

    2018-01-01

    This research addresses the issue of how to support students' representational fluency--the ability to create, move within, translate across, and derive meaning from external representations of mathematical ideas. The context of solving linear equations in a combined computer algebra system (CAS) and paper-and-pencil classroom environment is…

  7. Response matrix method and its application to SCWR single channel stability analysis

    International Nuclear Information System (INIS)

    Zhao, Jiyun; Tseng, K.J.; Tso, C.P.

    2011-01-01

    To simulate the dynamic features of a reactor system during density wave oscillations (DWO), both non-linear and linear methods can be used. Although some transient information is lost through model linearization, the high computational efficiency and relatively accurate results make the linear analysis methodology attractive, especially for predicting the onset of instability. In a linear stability analysis, the system models are simplified through linearization of the complex non-linear differential equations, and the resulting linear differential equations are generally solved in the frequency domain through the Laplace transformation. In this paper, a system response matrix method is introduced that solves the differential equations directly in the time domain. By using the system response matrix method, the complicated transfer function derivation required in the frequency-domain approach can be avoided. Using the response matrix method, a model was developed and applied to single-channel and parallel-channel instability analyses of a typical proposed SCWR design. The sensitivity of the decay ratio (DR) to the axial mesh size was analyzed, and it was found that the DR is not sensitive to mesh size once a sufficient number of axial nodes is applied. To demonstrate the effect of inlet orificing on the stability behaviour at supercritical conditions, a sensitivity study of the stability to the inlet orifice coefficient was conducted for the hot channel. It is clearly shown that a higher inlet orifice coefficient makes the system more stable. The sensitivity of the stability to operating parameters such as mass flow rate, power and system pressure was also examined, and measures to improve this sensitivity were investigated. It was found that the SCWR stability behaviour can be improved by carefully managing the inlet orifices and choosing proper operating parameters. (author)
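
    As a small illustration of the decay ratio (DR) used above as the stability measure, independent of the response-matrix model itself, the Python sketch below extracts the DR from a hypothetical damped flow oscillation as the ratio of two successive peak amplitudes.

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0.0, 50.0, 5001)
flow = 1.0 + 0.05 * np.exp(-0.04 * t) * np.cos(1.2 * t)   # hypothetical perturbed channel flow

deviation = flow - 1.0                                     # oscillation about the steady state
peaks, _ = find_peaks(deviation)
dr = deviation[peaks[1]] / deviation[peaks[0]]             # DR < 1: the oscillation decays (stable channel)
print(f"decay ratio ~ {dr:.3f}")                           # analytic value here: exp(-0.04 * 2*pi / 1.2) ~ 0.81
```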

  8. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to those of measurements in a water phantom. The Clarkson and pencil-beam methods proved to be of comparable accuracy. Both of these methods are distinguished by minimal deviations and are applied in our routine clinical work. The Wrede and beam-zone methods deliver useful results on the central beam axis but show larger deviations when calculating points off the central axis. (orig.)

  9. CONFLICTS WITH "SKIN COLOR PENCIL": THE POLVO (OCTOPUS SERIES, ADRIANA VAREJÃO AND MULTICULTURALISM IN THE ART TEACHING

    Directory of Open Access Journals (Sweden)

    João Paulo Baliscei

    2017-01-01

    Full Text Available Based on past experiences with 3rd-grade elementary school students at a public school in Maringá, Paraná, this article aims to question the role of the art teacher as an intermediary in the multicultural education of the contemporary subject. Starting from discussions about color stereotypes, we consider possible teaching strategies to question the use of the "skin color pencil" and develop reflections on the naturalness with which it is chosen for painting skin, as if the use of any other color were "forbidden". Was that the only possible pencil for filling in and characterizing skin? To discuss these aspects, we approach the Polvo (Octopus) series by the artist Adriana Varejão, multiculturalism and Art teaching practices. We believe that questioning stereotypes in the classroom provides students with reflections that can change their outlook and behavior in the face of differences.

  10. Application of the correction factor for radiation quality Kq in dosimetry with pencil-type ionization chambers using a Tandem system

    Energy Technology Data Exchange (ETDEWEB)

    Fontes, Ladyjane Pereira; Potiens, Maria da Penha Albuquerque, E-mail: lpfontes@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The pencil-type ionization chamber, widely used in computed tomography (CT) dosimetry, is a measuring instrument that has a cylindrical shape and provides a uniform response independent of the angle of incidence of the ionizing radiation. Calibration and measurements performed with the pencil-type ionization chamber are done in terms of the air kerma-length product (Pk,l), and values are given in Gy.cm. To obtain the values of Pk,l during clinical measurements, the readings performed with the ionization chamber are multiplied by the calibration coefficient (Nk,l) and the correction factor for radiation quality (Kq), which are given in the calibration certificates of the chambers. The correction factor for radiation quality Kq is applied as a function of the effective energy of the beam, which is determined by the half-value layer (HVL) calculation. In order to estimate the HVL values in this work, a Tandem system made up of cylindrical aluminum and PMMA absorber layers was used as a low-cost and easy-to-apply method. From the Tandem curve, it was possible to construct the calibration curve and obtain the appropriate Kq for the beam of the computed tomography equipment studied. (author)

  11. Efficient Data Gathering Methods in Wireless Sensor Networks Using GBTR Matrix Completion

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To obtain efficient data gathering methods for wireless sensor networks (WSNs), a novel graph based transform regularized (GBTR) matrix completion algorithm is proposed. The graph based transform sparsity of the sensed data is explored and is also considered as a penalty term in the matrix completion problem. The proposed GBTR-ADMM algorithm utilizes the alternating direction method of multipliers (ADMM) in an iterative procedure to solve the constrained optimization problem. Since the performance of the ADMM method is sensitive to the number of constraints, the GBTR-A2DM2 algorithm was developed to accelerate the convergence of GBTR-ADMM. GBTR-A2DM2 benefits from merging two constraint conditions into one as well as from using a restart rule. The theoretical analysis shows that the proposed algorithms have satisfactory time complexity. Extensive simulation results verify that our proposed algorithms outperform the state-of-the-art algorithms for data collection problems in WSNs with respect to recovery accuracy, convergence rate, and energy consumption.
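
    The GBTR regularizer and the A2DM2 acceleration are specific to the paper; purely as a generic baseline for the matrix-completion setting it addresses, the Python sketch below recovers a low-rank matrix from partial observations by iterative soft-thresholded SVD (a soft-impute-style scheme), with all sizes and parameters chosen arbitrarily.

```python
import numpy as np

def soft_impute(M_obs, mask, tau=5.0, n_iter=200):
    """Generic low-rank matrix completion via iterative soft-thresholded SVD.
    A simple baseline, not the paper's graph-based-transform (GBTR) ADMM scheme."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Y = np.where(mask, M_obs, X)               # keep observed entries, fill the rest with the estimate
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)               # singular-value soft-thresholding
        X = (U * s) @ Vt
    return X

# toy usage: sensor readings assumed to form an approximately low-rank matrix
rng = np.random.default_rng(0)
truth = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 40))   # rank-4 "sensor field"
mask = rng.random(truth.shape) < 0.4                          # only 40% of the entries are gathered
recovered = soft_impute(np.where(mask, truth, 0.0), mask)
rel_err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
```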

  12. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT

    International Nuclear Information System (INIS)

    Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-01-01

    Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets used to represent an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of differently sized beamlets and a minimal number of beamlets. In addition, the authors included model parameters to account for the MLC rounded leaf edge and transmission. Results: The root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphics processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a
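
    As a conceptual sketch of the beamlet superposition that FSPB-type algorithms perform (not the authors' AB-FSPB, whose beamlets are modelled analytically so they need not be identical or infinitesimal), the Python code below tiles a hypothetical 10 × 10 cm field with fixed-size beamlets and sums a Gaussian lateral kernel on one depth plane; all widths and spacings are illustrative.

```python
import numpy as np

x = y = np.linspace(-8.0, 8.0, 161)                      # cm, calculation grid at one depth
X, Y = np.meshgrid(x, y)

beamlet = 0.5                                            # cm, beamlet width
centres = np.arange(-5.0 + beamlet / 2, 5.0, beamlet)    # 10 x 10 cm open field tiled with beamlets
sigma = 0.4                                              # cm, lateral spread of the pencil kernel

dose = np.zeros_like(X)
for cx in centres:                                       # superpose one Gaussian kernel per beamlet
    for cy in centres:
        dose += np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2.0 * sigma ** 2))

dose /= dose.max()                                       # relative dose profile across the field
```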

  13. ECAP – New consolidation method for production of aluminium matrix composites with ceramic reinforcement

    Directory of Open Access Journals (Sweden)

    Mateja Šnajdar Musa

    2013-06-01

    Full Text Available Aluminium-based metal matrix composites are a rapidly developing group of materials due to their unique combination of properties, which includes low weight, elevated strength, improved wear and corrosion resistance, and relatively good ductility. This combination of properties results from mixing two groups of materials with rather different properties: aluminium as the ductile matrix and various oxides and carbides added as reinforcement. Al2O3, SiC and ZrO2 are the most popular choices of reinforcement material. One of the most common methods for producing this type of metal matrix composite is powder metallurgy, since it has many variations and is also a relatively low-cost method. Many different techniques of compacting aluminium and ceramic powders have previously been investigated. Among those techniques, equal channel angular pressing (ECAP) stands out due to its beneficial influence on the main problem that arises during powder compaction, namely the non-uniform distribution of reinforcement particles. This paper gives an overview of the ECAP method's principles, its advantages, and the properties of the powder composites produced.

  14. A simple and sensitive methodology for voltammetric determination of valproic acid in human blood plasma samples using 3-aminopropyletriethoxy silane coated magnetic nanoparticles modified pencil graphite electrode.

    Science.gov (United States)

    Zabardasti, Abedin; Afrouzi, Hossein; Talemi, Rasoul Pourtaghavi

    2017-07-01

    In this work, we have prepared a nano-material modified pencil graphite electrode for the sensing of valproic acid (VA) by immobilizing 3-aminopropyltriethoxysilane-coated magnetic nanoparticles (APTES-MNPs) on the pencil graphite electrode (PGE) surface. Electrochemical studies indicated that the APTES-MNPs efficiently increased the electron transfer kinetics between VA and the electrode, and that the free NH2 groups of the APTES on the outer surface of the magnetic nanoparticles can interact with the carboxyl groups of VA. Based on this, we have proposed a sensitive, rapid and convenient electrochemical method for VA determination. Under the optimized conditions, the reduction peak current of VA is found to be proportional to its concentration in the range of 1.0 (±0.2) to 100.0 (±0.3) ppm with a detection limit of 0.4 (±0.1) ppm. The whole sensor fabrication process was characterized by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) using [Fe(CN)6]3-/4- as an electrochemical redox indicator. The prepared modified electrode showed several advantages such as high sensitivity, selectivity, ease of preparation, and good repeatability, reproducibility and stability. The proposed method was applied to the determination of valproic acid in blood plasma samples, and the results obtained were satisfactorily accurate. Copyright © 2017. Published by Elsevier B.V.

  15. Hexagonal pencil-like CdS nanorods: Facile synthesis and enhanced visible light photocatalytic performance

    Science.gov (United States)

    An, Liang; Wang, Guanghui; Zhao, Lei; Zhou, Yong; Gao, Fang; Cheng, Yang

    2015-07-01

    In the present study, hexagonal pencil-like CdS nanorods have been successfully synthesized through a facile and economical one-step hydrothermal method without using any surfactant or template. The product was characterized by X-ray powder diffraction (XRD), field-emission scanning electron microscopy (FE-SEM) and energy-dispersive X-ray analysis (EDX). The results revealed that the prepared CdS photocatalyst consisted of a large quantity of straight and smooth solid hexagonal nanorods and a few nanoparticles. The photocatalytic activities of the CdS nanorods and of commercial CdS powders were investigated through the photodegradation of Orange II (OII) in aqueous solution under visible light, and the CdS nanorods presented the highest photocatalytic activity. This enhancement of photocatalytic efficiency was attributed to the improved transmission of photogenerated electron-hole pairs in the CdS nanostructures. The present findings may provide a facile approach to synthesizing highly efficient CdS photocatalysts.

  16. Investigating Plane Geometry Problem-Solving Strategies of Prospective Mathematics Teachers in Technology and Paper-and-Pencil Environments

    Science.gov (United States)

    Koyuncu, Ilhan; Akyuz, Didem; Cakiroglu, Erdinc

    2015-01-01

    This study aims to investigate plane geometry problem-solving strategies of prospective mathematics teachers using dynamic geometry software (DGS) and paper-and-pencil (PPB) environments after receiving an instruction with GeoGebra (GGB). Four plane geometry problems were used in a multiple case study design to understand the solution strategies…

  17. 76 FR 27988 - Certain Cased Pencils From the People's Republic of China: Final Results of the Antidumping Duty...

    Science.gov (United States)

    2011-05-13

    ... any fashion, and either sharpened or unsharpened. The pencils subject to the order are currently...'') of their responsibility concerning the return or destruction of proprietary information disclosed under the APO in accordance with 19 CFR 351.305. Timely written notification of the return or...

  18. Methods for the visualization and analysis of extracellular matrix protein structure and degradation.

    Science.gov (United States)

    Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon

    2018-01-01

    This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.

  19. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
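
    The alternating-directions idea described above can be illustrated with a small stand-alone sketch: build 1-D correlation matrices for each coordinate direction, take a truncated eigen (EOF) decomposition of each, and form the 3-D square-root factor as their Kronecker product. The Gaussian correlation model, grid sizes, length scales, and truncation ranks are illustrative assumptions, and the paper's spline interpolation step to a high-resolution grid is omitted.

        import numpy as np

        def corr_1d(n, length_scale):
            # 1-D Gaussian correlation matrix on a unit-spaced grid (illustrative model)
            x = np.arange(n)
            return np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)

        def eof_factor(C, k):
            # truncated eigendecomposition: C ~ E @ E.T with k leading modes
            w, V = np.linalg.eigh(C)
            idx = np.argsort(w)[::-1][:k]
            return V[:, idx] * np.sqrt(w[idx])

        Cx, Cy, Cz = corr_1d(20, 3.0), corr_1d(15, 2.0), corr_1d(10, 1.5)
        Ex, Ey, Ez = eof_factor(Cx, 5), eof_factor(Cy, 5), eof_factor(Cz, 4)

        # kron(Cx, Cy, Cz) ~ E3d @ E3d.T, so the full 3-D matrix is never decomposed directly
        E3d = np.kron(Ex, np.kron(Ey, Ez))
        C3d = np.kron(Cx, np.kron(Cy, Cz))
        print("relative approximation error:",
              np.linalg.norm(E3d @ E3d.T - C3d) / np.linalg.norm(C3d))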

  20. Matrix algebra for linear models

    CERN Document Server

    Gruber, Marvin H J

    2013-01-01

    Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f

  1. Matrix-type multiple reciprocity boundary element method for solving three-dimensional two-group neutron diffusion equations

    International Nuclear Information System (INIS)

    Itagaki, Masafumi; Sahashi, Naoki.

    1997-01-01

    The multiple reciprocity boundary element method has been applied to three-dimensional two-group neutron diffusion problems. A matrix-type boundary integral equation has been derived to solve the first and the second group neutron diffusion equations simultaneously. The matrix-type fundamental solutions used here satisfy the equation which has a point source term and is adjoint to the neutron diffusion equations. A multiple reciprocity method has been employed to transform the matrix-type domain integral related to the fission source into an equivalent boundary one. The higher order fundamental solutions required for this formulation are composed of a series of two types of analytic functions. The eigenvalue itself is also calculated using only boundary integrals. Three-dimensional test calculations indicate that the present method provides stable and accurate solutions for criticality problems. (author)

  2. Roundtrip matrix method for calculating the leaky resonant modes of open nanophotonic structures

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2014-01-01

    We present a numerical method for calculating quasi-normal modes of open nanophotonic structures. The method is based on scattering matrices and a unity eigenvalue of the roundtrip matrix of an internal cavity, and we develop it in detail with electromagnetic fields expanded on Bloch modes...

  3. Maternal Scaffolding of Preschoolers' Writing Using Tablet and Paper-Pencil Tasks: Relations with Emergent Literacy Skills

    Science.gov (United States)

    Neumann, Michelle M.

    2018-01-01

    Mothers play a key role in scaffolding children's writing using traditional tools, such as paper and pencil. However, little is known about how mothers scaffold young children's writing using touch-screen tablets (e.g., iPads) and the associations between maternal scaffolding and emergent literacy. Mother-child dyads (N = 47; M child…

  4. 77 FR 53176 - Certain Cased Pencils From the People's Republic of China: Final Results of Antidumping Duty...

    Science.gov (United States)

    2012-08-31

    ... dimension (except as described below) which are writing and/or drawing instruments that feature cores of graphite or other materials, encased in wood and/or man-made materials, whether or not decorated and..., requested revocation, in part, of the AD order with respect to its novelty pencil, which is shaped like a...

  5. Calculating Relativistic Transition Matrix Elements for Hydrogenic Atoms Using Monte Carlo Methods

    Science.gov (United States)

    Alexander, Steven; Coldwell, R. L.

    2015-03-01

    The nonrelativistic transition matrix elements for hydrogen atoms can be computed exactly and these expressions are given in a number of classic textbooks. The relativistic counterparts of these equations can also be computed exactly but these expressions have been described in only a few places in the literature. In part, this is because the relativistic equations lack the elegant simplicity of the nonrelativistic equations. In this poster I will describe how variational Monte Carlo methods can be used to calculate the energy and properties of relativistic hydrogen atoms and how the wavefunctions for these systems can be used to calculate transition matrix elements.

  6. Solving Eigenvalue response matrix equations with Jacobian-Free Newton-Krylov methods

    International Nuclear Information System (INIS)

    Roberts, Jeremy A.; Forget, Benoit

    2011-01-01

    The response matrix method for reactor eigenvalue problems is motivated as a technique for solving coarse mesh transport equations, and the classical approach of power iteration (PI) for solution is described. The method is then reformulated as a nonlinear system of equations, and the associated Jacobian is derived. A Jacobian-Free Newton-Krylov (JFNK) method is employed to solve the system, using an approximate Jacobian coupled with incomplete factorization as a preconditioner. The unpreconditioned JFNK slightly outperforms PI, and preconditioned JFNK outperforms both PI and Steffensen-accelerated PI significantly. (author)
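
    The JFNK idea above can be sketched with SciPy's matrix-free Newton-Krylov solver on a toy eigenvalue problem written as a nonlinear system F(x, k) = (A x - k x, x·x - 1); the operator A, its size, and the few power iterations used as a warm start are illustrative assumptions, not the response matrix equations of the paper, and no preconditioner is applied here.

        import numpy as np
        from scipy.optimize import newton_krylov

        n = 20
        T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D diffusion-like operator
        A = np.linalg.inv(T)                                      # toy positive "response" operator

        def residual(u):
            x, k = u[:-1], u[-1]
            return np.concatenate([A @ x - k * x, [x @ x - 1.0]])

        # a few classical power iterations (PI) provide the initial guess
        x = np.ones(n) / np.sqrt(n)
        for _ in range(5):
            y = A @ x
            k = np.linalg.norm(y)
            x = y / k

        sol = newton_krylov(residual, np.concatenate([x, [k]]), f_tol=1e-10)
        print("JFNK eigenvalue :", sol[-1])
        print("direct check    :", np.max(np.linalg.eigvalsh(A)))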

  7. A Pencil Graphite Electrode In Situ Modified by Monovalent Copper: a Promising Tool for the Determination of Methylxanthines

    Czech Academy of Sciences Publication Activity Database

    Navrátil, R.; Jelen, František; Kayran, Y.U.; Trnková, L.

    2014-01-01

    Roč. 26, č. 5 (2014), s. 952-961 ISSN 1040-0397 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0068 Institutional support: RVO:68081707 Keywords : Methylxanthines * Pencil graphite electrode * Elimination voltammetry Subject RIV: BO - Biophysics Impact factor: 2.138, year: 2014

  8. Synthesizing (ZrAl3 + AlN)/Mg-Al composites by a 'matrix exchange' method

    Science.gov (United States)

    Gao, Tong; Li, Zengqiang; Hu, Kaiqi; Han, Mengxia; Liu, Xiangfa

    2018-06-01

    A method named 'matrix exchange' for synthesizing a ZrAl3- and AlN-reinforced Mg-Al composite was developed in this paper. By inserting an Al-10ZrN master alloy into the Mg matrix and reheating the cooled ingot to 550 °C, Al and Mg atoms diffuse toward the opposite side. As a result, a liquid melt forms once the interface areas reach the proper compositions. The dissolved Al atoms then react with ZrN, leading to the in-situ formation of ZrAl3 and AlN particles, while the Al matrix is finally replaced by Mg. This study provides new insight into the preparation of Mg composites.

  9. The Multimedia Piers-Harris Children's Self-Concept Scale 2: Its Psychometric Properties, Equivalence with the Paper-and-Pencil Version, and Respondent Preferences.

    Science.gov (United States)

    Flahive, Mon-hsin Wang; Chuang, Ying-Chih; Li, Chien-Mo

    2015-01-01

    A multimedia version of Piers-Harris Children's Self-Concept Scale 2 (Piers-Harris 2) was created with audio and cartoon animation to facilitate the measurement of self-concept among younger children. This study aimed to assess the psychometric qualities of the computer version of Piers-Harris 2 scores, examine its score equivalence with the paper-and-pencil version, and survey the respondent preference of the two versions. Two hundred and forty eight Taiwanese students from the first to fourth grade were recruited. In regard to the psychometric properties, high internal consistency (α = .91) was found for the total score of multimedia Piers-Harris 2. High interscale correlations (.77 to .83) of the multimedia Piers-Harris 2 scores and the results of confirmatory factor analysis suggested the multimedia Piers-Harris 2 contained good structural characteristics. The scores of the multimedia Piers-Harris 2 also had significant correlations with the scores of the Elementary School Children's Self Concept Scale. The equality of convergence and criterion-related validities of Piers-Harris 2 scores for the multimedia and paper-and-pencil versions and the results of ICCs between the scores of the multimedia and paper-and-pencil Piers-Harris 2 suggested their high level of equivalence. Participants showed more positive attitudes towards the multimedia version.

  10. The Multimedia Piers-Harris Children's Self-Concept Scale 2: Its Psychometric Properties, Equivalence with the Paper-and-Pencil Version, and Respondent Preferences

    Science.gov (United States)

    Flahive, Mon-hsin Wang; Chuang, Ying-Chih; Li, Chien-Mo

    2015-01-01

    A multimedia version of Piers-Harris Children's Self-Concept Scale 2 (Piers-Harris 2) was created with audio and cartoon animation to facilitate the measurement of self-concept among younger children. This study aimed to assess the psychometric qualities of the computer version of Piers-Harris 2 scores, examine its score equivalence with the paper-and-pencil version, and survey the respondent preference of the two versions. Two hundred and forty eight Taiwanese students from the first to fourth grade were recruited. In regard to the psychometric properties, high internal consistency (α = .91) was found for the total score of multimedia Piers-Harris 2. High interscale correlations (.77 to .83) of the multimedia Piers-Harris 2 scores and the results of confirmatory factor analysis suggested the multimedia Piers-Harris 2 contained good structural characteristics. The scores of the multimedia Piers-Harris 2 also had significant correlations with the scores of the Elementary School Children’s Self Concept Scale. The equality of convergence and criterion-related validities of Piers-Harris 2 scores for the multimedia and paper-and-pencil versions and the results of ICCs between the scores of the multimedia and paper-and-pencil Piers-Harris 2 suggested their high level of equivalence. Participants showed more positive attitudes towards the multimedia version. PMID:26252499

  11. A reduced-scaling density matrix-based method for the computation of the vibrational Hessian matrix at the self-consistent field level

    International Nuclear Information System (INIS)

    Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian

    2015-01-01

    An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r^-2 instead of r^-1. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure

  12. DANTE, Activation Analysis Neutron Spectra Unfolding by Covariance Matrix Method

    International Nuclear Information System (INIS)

    Petilli, M.

    1981-01-01

    1 - Description of problem or function: The program evaluates activation measurements of reactor neutron spectra and unfolds the results for dosimetry purposes. Different evaluation options are foreseen: absolute or relative fluxes and different iteration algorithms. 2 - Method of solution: A least-square fit method is used. A correlation between available data and their uncertainties has been introduced by means of flux and activity variance-covariance matrices. Cross sections are assumed to be constant, i.e. with variance-covariance matrix equal to zero. The Lagrange multipliers method has been used for calculating the solution. 3 - Restrictions on the complexity of the problem: 9 activation experiments can be analyzed. 75 energy groups are accepted
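
    The least-squares adjustment with flux and activity variance-covariance matrices described above has the closed form of a generalized least-squares (Lagrange-multiplier) update; the sketch below shows that update on invented three-group / four-reaction numbers, with the cross-section (response) matrix treated as exactly known, consistent with the program description.

        import numpy as np

        S = np.array([[0.8, 0.3, 0.1],    # response matrix: activity per unit group flux
                      [0.2, 0.9, 0.4],    # (4 activation reactions x 3 energy groups, invented)
                      [0.1, 0.5, 1.0],
                      [0.6, 0.6, 0.2]])
        phi0   = np.array([1.0, 2.0, 1.5])            # prior group fluxes
        V_phi  = np.diag([0.2, 0.3, 0.2]) ** 2        # prior flux covariance
        A_meas = np.array([1.55, 1.95, 2.55, 2.05])   # measured activities
        V_A    = np.diag([0.05, 0.05, 0.08, 0.05]) ** 2

        # generalized least-squares (Lagrange multiplier) solution
        K       = V_phi @ S.T @ np.linalg.inv(S @ V_phi @ S.T + V_A)
        phi_adj = phi0 + K @ (A_meas - S @ phi0)
        V_adj   = V_phi - K @ S @ V_phi
        print("adjusted fluxes:", phi_adj)
        print("adjusted std.  :", np.sqrt(np.diag(V_adj)))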

  13. Matrix removal in state of the art sample preparation methods for serum by charged aerosol detection and metabolomics-based LC-MS.

    Science.gov (United States)

    Schimek, Denise; Francesconi, Kevin A; Mautner, Anton; Libiseller, Gunnar; Raml, Reingard; Magnes, Christoph

    2016-04-07

    Investigations into sample preparation procedures usually focus on analyte recovery with no information provided about the fate of other components of the sample (matrix). For many analyses, however, and particularly those using liquid chromatography-mass spectrometry (LC-MS), quantitative measurements are greatly influenced by sample matrix. Using the example of the drug amitriptyline and three of its metabolites in serum, we performed a comprehensive investigation of nine commonly used sample clean-up procedures in terms of their suitability for preparing serum samples. We were monitoring the undesired matrix compounds using a combination of charged aerosol detection (CAD), LC-CAD, and a metabolomics-based LC-MS/MS approach. In this way, we compared analyte recovery of protein precipitation-, liquid-liquid-, solid-phase- and hybrid solid-phase extraction methods. Although all methods provided acceptable recoveries, the highest recovery was obtained by protein precipitation with acetonitrile/formic acid (amitriptyline 113%, nortriptyline 92%, 10-hydroxyamitriptyline 89%, and amitriptyline N-oxide 96%). The quantification of matrix removal by LC-CAD showed that the solid phase extraction method (SPE) provided the lowest remaining matrix load (48-123 μg mL(-1)), which is a 10-40 fold better matrix clean-up than the precipitation- or hybrid solid phase extraction methods. The metabolomics profiles of eleven compound classes, comprising 70 matrix compounds showed the trends of compound class removal for each sample preparation strategy. The collective data set of analyte recovery, matrix removal and matrix compound profile was used to assess the effectiveness of each sample preparation method. The best performance in matrix clean-up and practical handling of small sample volumes was showed by the SPE techniques, particularly HLB SPE. CAD proved to be an effective tool for revealing the considerable differences between the sample preparation methods. This detector can

  14. A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics

    Science.gov (United States)

    Pujol, O.; Perez, J. P.

    2007-01-01

    The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
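
    A minimal concrete instance of the transfer (characteristic) matrix method mentioned above, here for light at normal incidence through a stack of homogeneous dielectric layers; the layer indices, thicknesses, and surrounding media are illustrative assumptions rather than a system from the paper.

        import numpy as np

        def layer_matrix(n, d, lam):
            # characteristic matrix of one homogeneous layer at normal incidence
            delta = 2.0 * np.pi * n * d / lam
            return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                             [1j * n * np.sin(delta), np.cos(delta)]])

        def transmittance(layers, n_in, n_out, lam):
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                M = M @ layer_matrix(n, d, lam)
            t = 2.0 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1])
            return (n_out / n_in) * abs(t) ** 2

        lam0 = 550e-9                                  # design wavelength [m]
        hi, lo = 2.35, 1.46                            # high/low refractive indices (assumed)
        stack = [(hi, lam0 / (4 * hi)), (lo, lam0 / (4 * lo))] * 4   # quarter-wave mirror
        print("T at design wavelength:", transmittance(stack, 1.0, 1.52, lam0))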

  15. Current matrix element in HAL QCD's wavefunction-equivalent potential method

    Science.gov (United States)

    Watanabe, Kai; Ishii, Noriyoshi

    2018-04-01

    We give a formula to calculate a matrix element of a conserved current in the effective quantum mechanics defined by the wavefunction-equivalent potentials proposed by the HAL QCD collaboration. As a first step, a non-relativistic field theory with two-channel coupling is considered as the original theory, with which a wavefunction-equivalent HAL QCD potential is obtained in a closed analytic form. The external field method is used to derive the formula by demanding that the result should agree with the original theory. With this formula, the matrix element is obtained by sandwiching the effective current operator between the left and right eigenfunctions of the effective Hamiltonian associated with the HAL QCD potential. In addition to the naive one-body current, the effective current operator contains an additional two-body term emerging from the degrees of freedom which have been integrated out.

  16. Exact solution of some linear matrix equations using algebraic methods

    Science.gov (United States)

    Djaferis, T. E.; Mitter, S. K.

    1977-01-01

    A study is made of solution methods for linear matrix equations, including Lyapunov's equation, using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The action f_BA is introduced and a basic lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, and comment is given on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
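
    For the equation PA + BP = -C mentioned above, a finite algebraic solution can be written down via Kronecker vectorization; the sketch below does this for small invented matrices and cross-checks the result against SciPy's Schur-based Sylvester solver. This is a generic illustration, not the specific algebraic procedure of the paper.

        import numpy as np
        from scipy.linalg import solve_sylvester

        A = np.array([[-2.0, 1.0], [0.0, -3.0]])
        B = np.array([[-1.0, 0.5], [0.0, -4.0]])
        C = np.eye(2)

        # vec(PA + BP) = (A.T kron I + I kron B) vec(P), with column-major (Fortran) vec
        n = A.shape[0]
        K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), B)
        P = np.linalg.solve(K, -C.flatten(order="F")).reshape((n, n), order="F")

        P_ref = solve_sylvester(B, A, -C)   # solves B P + P A = -C
        print(np.allclose(P, P_ref), "residual:", np.linalg.norm(P @ A + B @ P + C))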

  17. General-purpose parallel algorithm based on CUDA for source pencils' deployment of large γ irradiator

    International Nuclear Information System (INIS)

    Yang Lei; Gong Xueyu; Wang Ling

    2013-01-01

    Combined with a standard mathematical model for evaluating the quality of deployment results, a new high-performance parallel algorithm for source pencil deployment was obtained by using a parallel plant growth simulation algorithm that was completely parallelized with the CUDA execution model, so the corresponding code can run on a GPU. Based on this work, several instances of various scales were used to test the new version of the algorithm. The results show that, building on the advantages of the old versions, the performance of the new one is improved by more than 500 times compared with the CPU version, and by 30 times compared with the CPU-plus-GPU hybrid version. The computation time of the new version is less than ten minutes for an irradiator whose activity is less than 111 PBq. For a single GTX275 GPU, the new version can handle activities of up to 167 PBq with a computation time of no more than 25 minutes, and with multiple GPUs this capacity can be extended further. Overall, the new version of the algorithm running on a GPU can satisfy the requirements of source pencil deployment for any domestic irradiator, and it is highly competitive. (authors)

  18. Newton's method for solving a quadratic matrix equation with special coefficient matrices

    International Nuclear Information System (INIS)

    Seo, Sang-Hyup; Seo, Jong Hyun; Kim, Hyun-Min

    2014-01-01

    We consider the iterative method for solving a quadratic matrix equation with special coefficient matrices which arises in the quasi-birth-death problem. In this paper, we show that the elementwise minimal positive solvents of quadratic matrix equations can be obtained using Newton's method. We also prove that the convergence rate of the Newton iteration is quadratic if the Fréchet derivative at the elementwise minimal positive solvent is nonsingular. However, if the Fréchet derivative is singular, the convergence rate is at least linear. Numerical experiments on the convergence rate are given. (This summarizes a paper which is to appear in the Honam Mathematical Journal.)
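
    The Newton iteration discussed above can be sketched for a generic quadratic matrix equation A X^2 + B X + C = 0: each step solves the Frechet-derivative equation A(HX + XH) + BH = -(A X^2 + B X + C) for the update H via Kronecker vectorization. The coefficient matrices, starting guess, and target solvent below are invented for illustration and are not the QBD-structured matrices of the paper.

        import numpy as np

        def newton_quadratic(A, B, C, X0, tol=1e-12, maxit=50):
            n = A.shape[0]
            I = np.eye(n)
            X = X0.copy()
            for _ in range(maxit):
                R = A @ X @ X + B @ X + C
                if np.linalg.norm(R, "fro") < tol:
                    break
                # Frechet derivative L(H) = A(HX + XH) + BH, vectorized column-major
                J = np.kron(X.T, A) + np.kron(I, A @ X + B)
                H = np.linalg.solve(J, -R.flatten(order="F")).reshape((n, n), order="F")
                X = X + H
            return X

        A = np.array([[1.0, 0.2], [0.0, 1.0]])
        B = np.array([[0.5, 0.0], [0.1, 0.5]])
        X_true = np.diag([1.0, 2.0])
        C = -(A @ X_true @ X_true + B @ X_true)      # constructed so X_true is a solvent

        X = newton_quadratic(A, B, C, X0=X_true + 0.2 * np.ones((2, 2)))
        print("residual norm:", np.linalg.norm(A @ X @ X + B @ X + C))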

  19. Inventory of Motive of Preference for Conventional Paper-and-Pencil Tests: A Study of Validity and Reliability

    Science.gov (United States)

    Eser, Mehmet Taha; Dogan, Nuri

    2017-01-01

    Purpose: The objective of this study is to develop the Inventory of Motive of Preference for Conventional Paper-And-Pencil Tests and to evaluate students' motives for preferring written tests, short-answer tests, true/false tests or multiple-choice tests. This will add a measurement tool to the literature with valid and reliable results to help…

  20. A matrix structured LED backlight system with 2D-DHT local dimming method

    Science.gov (United States)

    Liu, Jia; Li, Yang; Du, Sidan

    To reduce the number of drivers in the conventional local dimming method for LCDs, a novel LED backlight local dimming system is proposed in this paper. The backlight of this system is generated by a 2D discrete Hadamard transform and its matrix-structured LED modules. Compared with the conventional 2D local dimming method, the proposed method requires far fewer drivers with little degradation.
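
    As a small illustration of the 2D discrete Hadamard transform used above for the backlight signal, the sketch below transforms one block of backlight levels and inverts it; the block size and normalisation convention are assumptions, not the system's design values.

        import numpy as np
        from scipy.linalg import hadamard

        N = 8
        H = hadamard(N)                       # H @ H.T = N * I
        block = np.random.rand(N, N)          # stand-in for one block of LED dimming levels

        coeffs = H @ block @ H.T / N          # forward 2-D DHT
        recon  = H.T @ coeffs @ H / N         # inverse transform
        print("perfect reconstruction:", np.allclose(recon, block))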

  1. Description of elastic scattering in U-matrix method

    International Nuclear Information System (INIS)

    Edneral, V.F.; Troshin, S.M.; Tyurin, N.E.; Khrustalev, O.A.

    1975-01-01

    Elastic pp scattering has been analyzed using a generalized reaction matrix (the U-matrix). Good agreement has been reached with the experimental total cross sections for the (pp) reaction starting from an energy of 30 GeV, and with the differential cross sections dσ/dt for (pp) at four ISR energies [ru

  2. A Brief Research Review for Improvement Methods the Wettability between Ceramic Reinforcement Particulate and Aluminium Matrix Composites

    Science.gov (United States)

    Razzaq, Alaa Mohammed; Majid, Dayang Laila Abang Abdul; Ishak, M. R.; B, Uday M.

    2017-05-01

    The development of new methods for adding fine ceramic powders to aluminium alloy melts is reviewed; such methods would lead to a more uniform distribution and more effective incorporation of the reinforcement particles into the aluminium matrix alloy. Materials engineering research has recently moved from monolithic to composite materials, adapting to the global need for lightweight, low-cost, high-quality, and high-performance advanced materials. Among the different methods, stir casting is one of the simplest ways of making aluminium matrix composites. However, it suffers from poor distribution and combination of the reinforcement ceramic particles in the metal matrix. These problems become more significant as the reinforcement size is reduced, owing to greater agglomeration and the lower wettability of the ceramic particles in the melt. Many researchers have carried out studies on the wettability between the metal matrix and the dispersed phase, which include adding wettability agents and fluxes, preheating the reinforcement particles, coating the reinforcement particles, and using compocasting techniques. Enhancing the wettability of the ceramic particles by the molten matrix alloy and improving the distribution of the reinforcement particles in the solidified matrix are the main objectives of the studies discussed in this paper.

  3. On the generalized eigenvalue method for energies and matrix elements in lattice field theory

    Energy Technology Data Exchange (ETDEWEB)

    Blossier, Benoit [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Paris-XI Univ., 91 - Orsay (France). Lab. de Physique Theorique; Morte, Michele della [CERN, Geneva (Switzerland). Physics Dept.]|[Mainz Univ. (Germany). Inst. fuer Kernphysik; Hippel, Georg von; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Mendes, Tereza [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Sao Paulo Univ. (Brazil). IFSC

    2009-02-15

    We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as exp(-(E_{N+1}-E_n) t). The gap E_{N+1}-E_n can be made large by increasing the number N of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in static approximation and to order 1/m_b in HQET. (orig.)
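
    A small synthetic illustration of the generalized eigenvalue method described above: a correlator matrix C(t) is built from three invented energies and overlaps, the generalized eigenvalues of C(t) with respect to C(t0) are computed, and effective energies are read off from their ratio at neighbouring times. All numbers are assumptions chosen only to show that the effective energies approach the input spectrum.

        import numpy as np
        from scipy.linalg import eigh

        E = np.array([0.5, 0.9, 1.4])                      # invented energy levels
        V = np.array([[1.0, 0.4, 0.2],                     # invented overlaps of 3 interpolating fields
                      [0.3, 1.0, 0.5],
                      [0.1, 0.6, 1.0]])

        def C(t):
            # synthetic correlator matrix C_ij(t) = sum_n V_in V_jn exp(-E_n t)
            return sum(np.exp(-E[n] * t) * np.outer(V[:, n], V[:, n]) for n in range(3))

        t0 = 1
        for t in range(2, 6):
            lam  = np.sort(eigh(C(t),     C(t0), eigvals_only=True))[::-1]
            lam2 = np.sort(eigh(C(t + 1), C(t0), eigvals_only=True))[::-1]
            print("t =", t, "effective energies:", np.log(lam / lam2))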

  4. On the generalized eigenvalue method for energies and matrix elements in lattice field theory

    International Nuclear Information System (INIS)

    Blossier, Benoit; Mendes, Tereza; Sao Paulo Univ.

    2009-02-01

    We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as exp(-(E_{N+1}-E_n) t). The gap E_{N+1}-E_n can be made large by increasing the number N of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in static approximation and to order 1/m_b in HQET. (orig.)

  5. Upper bound for the span of pencil graph

    Science.gov (United States)

    Parvathi, N.; Vimala Rani, A.

    2018-04-01

    An L(2,1)-coloring (also called a radio coloring or λ-coloring) of a graph is a function f from the vertex set V(G) to the set of all nonnegative integers such that |f(x) - f(y)| ≥ 2 if d(x,y) = 1 and |f(x) - f(y)| ≥ 1 if d(x,y) = 2, where d(x,y) denotes the distance between x and y in G. The L(2,1)-coloring number or span number λ(G) of G is the smallest number k such that G has an L(2,1)-coloring with max{f(v) : v ∈ V(G)} = k [2]. The minimum number of colors used in an L(2,1)-coloring is called the radio number rn(G) of G (a positive integer). Griggs and Yeh conjectured that λ(G) ≤ Δ² for any simple graph with maximum degree Δ > 2. In this article, we consider some special graphs, namely the n-sunlet graph and pencil graph families, and derive upper bounds for λ(G) and rn(G).

  6. Determination of Dispersion Curves for Composite Materials with the Use of Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Barski Marek

    2017-06-01

    Full Text Available Elastic waves used in Structural Health Monitoring systems have a strongly dispersive character. It is therefore necessary to determine the appropriate dispersion curves in order to properly interpret the dynamic response received from an analyzed structure. The shape of the dispersion curves as well as the number of wave modes depends on the mechanical properties of the layers and the frequency of the excitation signal. In the current work a relatively new approach is utilized, namely the stiffness matrix method. In contrast to the transfer matrix method or the global matrix method, this algorithm is considered numerically unconditionally stable and as effective as the transfer matrix approach. However, it will be demonstrated that in the case of hybrid composites, where the mechanical properties of particular layers differ significantly, obtaining results can be difficult. The theoretical relationships are presented for a composite plate of arbitrary stacking sequence and an arbitrary direction of elastic wave propagation. As a numerical example, the dispersion curves are estimated for a lamina made of carbon fibers and epoxy resin. It is assumed that the elastic waves travel parallel, perpendicular, and at an arbitrary angle to the fibers in the lamina. Next, the dispersion curves are determined for the laminate [0°, 90°, 0°, 90°, 0°, 90°, 0°, 90°] and for the hybrid [Al, 90°, 0°, 90°, 0°, 90°, 0°], where Al is the aluminum alloy PA38 and the remaining layers are made of carbon fibers and epoxy resin.

  7. A novel method for morphological pleomorphism and heterogeneity quantitative measurement: Named cell feature level co-occurrence matrix.

    Science.gov (United States)

    Saito, Akira; Numata, Yasushi; Hamada, Takuya; Horisawa, Tomoyoshi; Cosatto, Eric; Graf, Hans-Peter; Kuroda, Masahiko; Yamamoto, Yoichiro

    2016-01-01

    Recent developments in molecular pathology and genetic/epigenetic analysis of cancer tissue have resulted in a marked increase in objective and measurable data. In comparison, the traditional morphological analysis approach to pathology diagnosis, which can connect these molecular data with the clinical diagnosis, is still mostly subjective. Even though the advent and popularization of digital pathology has provided a boost to computer-aided diagnosis, some important pathological concepts still remain largely non-quantitative and their associated data measurements depend on the pathologist's sense and experience. Such features include pleomorphism and heterogeneity. In this paper, we propose a method for the objective measurement of pleomorphism and heterogeneity, using a cell-level co-occurrence matrix. Our method is based on the widely used gray-level co-occurrence matrix (GLCM), where relations between neighboring pixel intensity levels are captured in a co-occurrence matrix, followed by the application of analysis functions such as the Haralick features. In a pathological tissue image, through image processing techniques, each nucleus can be measured, and each nucleus has its own measurable features such as nucleus size, roundness, contour length, and intra-nucleus texture data (GLCM is one of the methods). In our cell-level version of GLCM, each nucleus in the tissue image corresponds to one pixel. In this approach the most important point is how to define the neighborhood of each nucleus. We define three types of neighborhoods of a nucleus, then create the co-occurrence matrix and apply the Haralick feature functions. In each image, pleomorphism and heterogeneity are then determined quantitatively. In our method one pixel corresponds to one nucleus feature, and we therefore named the method Cell Feature Level Co-occurrence Matrix (CFLCM). We tested this method for several nucleus features. CFLCM is shown to be a useful quantitative method for pleomorphism and heterogeneity on histopathological image
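
    The co-occurrence idea above can be illustrated with a tiny stand-alone sketch that builds a normalized co-occurrence matrix from integer labels on a grid and computes two Haralick-style features; it uses a plain pixel grid and a single fixed offset, whereas CFLCM works on per-nucleus features and the three nucleus neighbourhood definitions of the paper.

        import numpy as np

        def cooccurrence(labels, offset=(0, 1), levels=None):
            # normalized co-occurrence matrix of integer labels for one neighbour offset
            levels = int(labels.max()) + 1 if levels is None else levels
            P = np.zeros((levels, levels))
            dr, dc = offset
            rows, cols = labels.shape
            for r in range(rows):
                for c in range(cols):
                    r2, c2 = r + dr, c + dc
                    if 0 <= r2 < rows and 0 <= c2 < cols:
                        P[labels[r, c], labels[r2, c2]] += 1
            return P / P.sum()

        def contrast(P):
            i, j = np.indices(P.shape)
            return np.sum(P * (i - j) ** 2)

        def homogeneity(P):
            i, j = np.indices(P.shape)
            return np.sum(P / (1.0 + (i - j) ** 2))

        grid = np.random.default_rng(0).integers(0, 4, size=(32, 32))   # quantized feature labels
        P = cooccurrence(grid)
        print("contrast:", contrast(P), "homogeneity:", homogeneity(P))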

  8. Analysis of a wavelength selectable cascaded DFB laser based on the transfer matrix method

    International Nuclear Information System (INIS)

    Xie Hongyun; Chen Liang; Shen Pei; Sun Botao; Wang Renqing; Xiao Ying; You Yunxia; Zhang Wanrong

    2010-01-01

    A novel cascaded DFB laser, which consists of two serial gratings to provide selectable wavelengths, is presented and analyzed by the transfer matrix method. In this method, an effective facet reflectivity is derived from the transfer matrix built for each serial section and is then used to simulate the performance of the novel cascaded DFB laser by self-consistently solving the gain equation, the coupled-wave equation and the current continuity equations. The simulations prove the feasibility of this kind of wavelength-selectable laser, and a correspondingly designed device with two selectable wavelengths of 1.51 μm and 1.53 μm is realized by experiments on an InP-based multiple quantum well structure. (semiconductor devices)

  9. Using GPU to calculate electron dose for hybrid pencil beam model

    International Nuclear Information System (INIS)

    Guo Chengjun; Li Xia; Hou Qing; Wu Zhangwen

    2011-01-01

    The hybrid pencil beam model (HPBM) offers an efficient approach to calculating the three-dimensional dose distribution from a clinical electron beam. Still, clinical radiation treatment practice demands a faster treatment planning process. Our work presents a fast implementation of HPBM-based electron dose calculation using a graphics processing unit (GPU). The HPBM algorithm was implemented in the Compute Unified Device Architecture (CUDA) running on the GPU and in C running on the CPU, respectively. Several tests with various sizes of field, beamlet and voxel were used to evaluate our implementation. On an NVIDIA GeForce GTX470 GPU card, we achieved speedup factors of 2.18-98.23 with acceptable accuracy, compared with the results from a Pentium E5500 2.80 GHz dual-core CPU. (authors)

  10. Alternating optimization method based on nonnegative matrix factorizations for deep neural networks

    OpenAIRE

    Sakurai, Tetsuya; Imakura, Akira; Inoue, Yuto; Futamura, Yasunori

    2016-01-01

    The backpropagation algorithm for calculating gradients has been widely used in computation of weights for deep neural networks (DNNs). This method requires derivatives of objective functions and has some difficulties finding appropriate parameters such as learning rate. In this paper, we propose a novel approach for computing weight matrices of fully-connected DNNs by using two types of semi-nonnegative matrix factorizations (semi-NMFs). In this method, optimization processes are performed b...

  11. Numerical Methods Application for Reinforced Concrete Elements-Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available A detailed theoretical and practical investigation of reinforced concrete elements is required by the recent techniques and methods implemented in the construction market. Moreover, a theoretical study is in demand for a better and faster approach nowadays, owing to the rapid development of computational techniques. This paper presents a study of implementing the direct stiffness matrix method in a static analysis, so as to address phenomena related to different stages of loading, rapid changes of cross-section area, and physical properties. The method is in demand because nowadays the finite element method (FEM) is the only alternative for such an analysis, and FEM is considered an expensive method in terms of time and computational resources. The main goal of such a method is to create the moment-curvature diagram for the cross section being analyzed. The paper also presents some of the most important techniques and new ideas for creating the moment-curvature diagram in the cross sections considered.

  12. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    International Nuclear Information System (INIS)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-01-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5–20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii 100 nm. We find good agreement (differences 100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10–30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies. - Highlights: • The GOS and T-matrix methods capture laboratory measurements of BC optical properties. • The GOS results are consistent with the T-matrix results for BC optical properties. • BC optical properties vary remarkably with coating structures and sizes during aging.

  13. Development of spectral history methods for pin-by-pin core analysis method using three-dimensional direct response matrix

    International Nuclear Information System (INIS)

    Mitsuyasu, T.; Ishii, K.; Hino, T.; Aoyama, M.

    2009-01-01

    Spectral history methods for a pin-by-pin core analysis method using the three-dimensional direct response matrix have been developed. The direct response matrix is formalized by four sub-response matrices in order to respond to a core eigenvalue k and thus can be recomposed at each outer iteration in the core analysis. For core analysis, it is necessary to take into account the burn-up effect related to spectral history. One of the methods is to evaluate the nodal burn-up spectrum obtained using the out-going neutron current. The other is to correct the fuel rod neutron production rates by means of a pin-by-pin correction. These spectral history methods were tested in a heterogeneous system. The test results show that the neutron multiplication factor error can be reduced by half during burn-up, and that the nodal neutron production rate errors can be reduced by 30% or more. The root-mean-square differences between the relative fuel rod neutron production rate distributions can be reduced to within 1.1% error. This means that these methods can accurately reflect the effects of intra- and inter-assembly heterogeneities during burn-up and can be used for core analysis. Core analysis with the DRM method was carried out for an ABWR quarter core, and it was found that both the thermal power and coolant-flow distributions converged smoothly. (authors)

  14. A Numerical Matrix-Based method in Harmonic Studies in Wind Power Plants

    DEFF Research Database (Denmark)

    Dowlatabadi, Mohammadkazem Bakhshizadeh; Hjerrild, Jesper; Kocewiak, Łukasz Hubert

    2016-01-01

    In the low-frequency range, there are couplings between the positive- and negative-sequence small-signal impedances of the power converter due to nonlinear and low-bandwidth control loops such as the synchronization loop. In this paper, a new numerical method which also considers these couplings is presented. Numerical data are preferable to parametric differential equations, because analysing high-order and complex transfer functions is very difficult and one ultimately resorts to numerical evaluation methods. This paper proposes a numerical matrix-based method, which...

  15. WE-E-BRB-00: Motion Management for Pencil Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and patient’s respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy including assessment of tumor motion and WET evaluation will be first presented. As thoracic tumors have very different motion patterns from liver tumors, examples would be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both treatment target and surrounding organs such as spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: Motion analysis for individual patients with respect to interplay effect Interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS Treatment planning margins for PBS The impact of proton dose calculation engines over heterogeneous treatment target and surrounding organs I have a current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have a current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  16. WE-E-BRB-00: Motion Management for Pencil Beam Scanning Proton Therapy

    International Nuclear Information System (INIS)

    2016-01-01

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and patient’s respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy including assessment of tumor motion and WET evaluation will be first presented. As thoracic tumors have very different motion patterns from liver tumors, examples would be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both treatment target and surrounding organs such as spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: Motion analysis for individual patients with respect to interplay effect Interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS Treatment planning margins for PBS The impact of proton dose calculation engines over heterogeneous treatment target and surrounding organs I have a current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have a current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  17. The augmented lagrange multipliers method for matrix completion from corrupted samplings with application to mixed Gaussian-impulse noise removal.

    Directory of Open Access Journals (Sweden)

    Fan Meng

    Full Text Available This paper studies the problem of restoring images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and is modeled in this paper as a convex optimization problem that minimizes a combination of the nuclear norm and the l1-norm. Meanwhile, we put forward a novel and effective algorithm, called augmented Lagrange multipliers, to solve the problem exactly. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is superior when images possess low-rank features such as geometrically regular textures and similarly structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform the traditional methods not only in the simultaneous removal of Gaussian and impulse noise and in the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image.
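
    The augmented Lagrange multiplier idea above is easiest to see in the fully observed special case, i.e. robust PCA with D = L + S, L low-rank and S sparse; the inexact-ALM sketch below solves that case with invented data, and the parameter choices (lambda, mu, the mu growth factor) follow common defaults rather than the paper, which additionally handles the observation mask of corrupted samplings.

        import numpy as np

        def svt(M, tau):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def shrink(M, tau):
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def rpca_alm(D, iters=100, rho=1.5, tol=1e-7):
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 1.25 / np.linalg.norm(D, 2)
            L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
            for _ in range(iters):
                L = svt(D - S + Y / mu, 1.0 / mu)          # low-rank update (nuclear-norm prox)
                S = shrink(D - L + Y / mu, lam / mu)       # sparse update (l1 prox)
                Z = D - L - S
                Y = Y + mu * Z                             # multiplier update
                mu = min(mu * rho, 1e7)
                if np.linalg.norm(Z, "fro") < tol * np.linalg.norm(D, "fro"):
                    break
            return L, S

        rng = np.random.default_rng(0)
        L0 = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))              # low-rank part
        S0 = np.where(rng.random((50, 50)) < 0.05, 10 * rng.standard_normal((50, 50)), 0.0)  # impulses
        L, S = rpca_alm(L0 + S0)
        print("relative low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))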

  18. Quantum mechanics in matrix form

    CERN Document Server

    Ludyk, Günter

    2018-01-01

    This book gives an introduction to quantum mechanics using the matrix method. Heisenberg's matrix mechanics is described in detail. The fundamental equations are derived by algebraic methods using matrix calculus. Only a brief description of Schrödinger's wave mechanics (treated exclusively in most books) is given, to show its equivalence to Heisenberg's matrix method. In the first part the historical development of quantum theory by Planck, Bohr and Sommerfeld is sketched, followed by the ideas and methods of Heisenberg, Born and Jordan. Then Pauli's spin and exclusion principles are treated; Pauli's exclusion principle leads to the structure of atoms. Finally, Dirac's relativistic quantum mechanics is briefly presented. Matrices and matrix equations are today easy to handle when implementing numerical algorithms using standard software such as MAPLE and Mathematica.

  19. Comparison of matrix exponential methods for fuel burnup calculations

    International Nuclear Information System (INIS)

    Oh, Hyung Suk; Yang, Won Sik

    1999-01-01

    Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Pade, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices with truncated series of each method combined with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering computational accuracy and efficiency, the Pade approximation appears to be better than the other methods. Its accuracy is better than that of the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than that of the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
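
    The scaling-and-squaring idea underlying the compared series methods can be sketched on a tiny two-nuclide depletion chain, comparing SciPy's Pade-based expm with a hand-rolled truncated Taylor series; the decay constants and time step are invented for illustration and are not data from the paper.

        import numpy as np
        from scipy.linalg import expm   # Pade approximation with scaling and squaring

        l1, l2 = 1.0e-4, 3.0e-5                 # removal rates of nuclide 1 and 2 [1/s] (assumed)
        A = np.array([[-l1, 0.0],
                      [ l1, -l2]])              # chain: nuclide 1 -> nuclide 2 -> removed
        t, N0 = 3.0e4, np.array([1.0, 0.0])

        def expm_taylor(M, order=12, squarings=10):
            # truncated Taylor series combined with the scaling and squaring algorithm
            B = M / 2.0 ** squarings
            E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
            for k in range(1, order + 1):
                term = term @ B / k
                E = E + term
            for _ in range(squarings):
                E = E @ E
            return E

        print("Pade  :", expm(A * t) @ N0)
        print("Taylor:", expm_taylor(A * t) @ N0)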

  20. An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2014-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method, breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM algorithm has to handle extra...... irregularity from three aspects: (1) the number of the nonzero entries in the result sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the result sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input....... Load balancing builds on the number of the necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art GPU SpGEMM methods in the CUSPARSE library and the CUSP library and the latest CPU SpGEMM method in the Intel Math Kernel Library, our...

  1. Calculation of the fast multiplication factor by the fission matrix method

    International Nuclear Information System (INIS)

    Naumov, V.A.; Rozin, S.G.; Ehl'perin, T.I.

    1976-01-01

    A variation of the Monte Carlo method to calculate the effective breeding factor of a nuclear reactor is described. The evaluation procedure of reactivity perturbations by the Monte Carlo method in first-order perturbation theory is considered. The method consists in reducing the integral neutron transport equation to a set of linear algebraic equations. The coefficients of this set are elements of a fission matrix. The fission matrix, being a Green function of the neutron transport equation, is evaluated by the Monte Carlo method. In the program realizing the suggested algorithm, the sampling of the initial neutron energy from a fission spectrum, and then of the region of neutron birth ΔV_f^(i), is performed in proportion to the product Σ_f^(i)ΔV_f^(i), where Σ_f^(i) is the macroscopic fission cross section of region i at the birth energy. Further iterations of the space distribution of neutrons in the system are performed by the generation method. In the adopted scheme of simulating neutron histories, the emission of secondary neutrons is controlled by weights; it occurs at every collision and not only at the end of the history. The breeding factor is calculated simultaneously with the space distribution of neutron worth in the system relative to the fission process and the neutron flux. The efficiency of the described procedure has been tested on the calculation of the breeding factor for the Godiva assembly, simulating a fast reactor with a hard spectrum. High accuracy of the calculations with a moderate number of zones in the core and reasonable statistics has been demonstrated
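
    The reduction described above turns the transport problem into an eigenvalue problem for the fission matrix: the breeding (multiplication) factor is its dominant eigenvalue and the fission source is the corresponding eigenvector. The sketch below illustrates this last step with a made-up 3-region fission matrix and a plain power (generation) iteration; it is not the Monte Carlo tallying procedure of the paper.

```python
import numpy as np

# Hypothetical fission matrix: F[i, j] = expected number of fission neutrons
# produced in region i per fission neutron born in region j (normally tallied
# by Monte Carlo; these numbers are invented for illustration).
F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.55, 0.20],
              [0.05, 0.20, 0.60]])

s = np.ones(F.shape[0]) / F.shape[0]   # initial guess for the fission source
for _ in range(200):                   # power (generation) iteration
    s_new = F @ s
    k = s_new.sum() / s.sum()          # breeding/multiplication factor estimate
    s = s_new / s_new.sum()            # renormalise the source each generation

print(k, s)                            # dominant eigenvalue and fission source
```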

  2. Teaching Improvement Model Designed with DEA Method and Management Matrix

    Science.gov (United States)

    Montoneri, Bernard

    2014-01-01

    This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…

  3. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, the conditions of a liquid system to dissolve stainless steel chips have been developed. Pure element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, are avoided. The result is a simple chemical operation which simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  4. Love waves in functionally graded piezoelectric materials by stiffness matrix method.

    Science.gov (United States)

    Ben Salah, Issam; Wali, Yassine; Ben Ghozlen, Mohamed Hédi

    2011-04-01

    A numerical matrix method for the propagation of ultrasonic guided waves in functionally graded piezoelectric heterostructures is given in order to make a comparative study with the respective performances of analytical methods proposed in the literature. The preliminary results obtained show good agreement; however, the numerical approach has the advantage of conceptual simplicity and flexibility brought about by the stiffness matrix method. The propagation behaviour of Love waves in a functionally graded piezoelectric material (FGPM) is investigated in this article. It involves a thin FGPM layer bonded perfectly to an elastic substrate. The inhomogeneous FGPM heterostructure has been stratified along the depth direction, so that each stratum can be considered homogeneous and the ordinary differential equation method applied. The obtained solutions are used to study the effect of an exponential gradient applied to the physical properties. Such a numerical approach allows applying different gradient variations to the mechanical and electrical properties. For this case, the obtained results reveal opposite effects. The dispersion curves and phase velocities of Love wave propagation in the layered piezoelectric film are obtained for the electrically open and short cases on the free surface, respectively. The effects of the gradient coefficients on the coupled electromechanical factor, the stress fields, the electrical potential and the mechanical displacement are discussed, respectively. Illustration is achieved on the well-known heterostructure PZT-5H/SiO2; the obtained results are especially useful in the design of high-performance surface acoustic devices and accurate prediction of Love wave propagation behaviour. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q_2p by the method of Tsai and Kuo. The treatment of Q_2p, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods

  6. The classical r-matrix method for nonlinear sigma-model

    OpenAIRE

    Sevostyanov, Alexey

    1995-01-01

    The canonical Poisson structure of nonlinear sigma-model is presented as a Lie-Poisson r-matrix bracket on coadjoint orbits. It is shown that the Poisson structure of this model is determined by some `hidden singularities' of the Lax matrix.

  7. New computational method for non-LTE, the linear response matrix

    International Nuclear Information System (INIS)

    Fournier, K.B.; Grasiani, F.R.; Harte, J.A.; Libby, S.B.; More, R.M.; Zimmerman, G.B.

    1998-01-01

    My coauthors have done extensive theoretical and computational calculations that lay the groundwork for a linear response matrix method to calculate non-LTE (local thermodynamic equilibrium) opacities. I will briefly review some of their work and list references. Then I will describe what has been done to utilize this theory to create a computational package to rapidly calculate mild non-LTE emission and absorption opacities suitable for use in hydrodynamic calculations. The opacities are obtained by performing table look-ups on data that have been generated with a non-LTE package. This scheme is currently under development. We can see that it offers a significant computational speed advantage. It is suitable for mild non-LTE, quasi-steady conditions, and it offers a new insertion path for high-quality non-LTE data. Currently, the linear response matrix data file is created using XSN. These data files could be generated by more detailed and rigorous calculations without changing any part of the implementation in the hydro code. The scheme is running in Lasnex and is being tested and developed

  8. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm; there the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated

  9. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    In response to the challenge of achieving parallel R-matrix computation, the primary objective was to develop parallel codes, targeted at multicomputers, that are capable of performing R-matrix calculations hitherto intractable using classic supercomputers. In particular, Fortran implementations of two internal-region methods (the R-matrix Floquet method and the two-dimensional R-matrix propagation method) and three external-region methods (the Light-Walker propagation method; the Baluja, Burke and Morgan propagation method; and the Variable Phase Method) from four widely utilised R-matrix packages were investigated to ascertain whether, in these cases, parallel R-matrix computation was practicable and, if so, to determine the most effective way to port such codes to contemporary multicomputers. When attempting to develop the parallel codes, a number of computer-aided automatic parallelization tools were investigated. These were found to be inadequate. Consequently, a parallelization approach was developed to provide simple guidelines for manual parallelization. This parallelization approach proved effective, and efficient parallel versions of the five R-matrix codes were successfully developed. (author)

  10. Consensus Guidelines for Implementing Pencil-Beam Scanning Proton Therapy for Thoracic Malignancies on Behalf of the PTCOG Thoracic and Lymphoma Subcommittee

    NARCIS (Netherlands)

    Chang, Joe Y.; Zhang, Xiaodong; Knopf, Antje; Li, Heng; Mori, Shinichiro; Dong, Lei; Lu, Hsiao-Ming; Liu, Wei; Badiyan, Shahed N.; Both, Stephen; Meijers, Arturs; Lin, Liyong; Flampouri, Stella; Li, Zuofeng; Umegaki, Kikuo; Simone, Charles B.; Zhu, Xiaorong R.

    2017-01-01

    Pencil-beam scanning (PBS) proton therapy (PT), particularly intensity modulated PT, represents the latest advanced PT technology for treating cancers, including thoracic malignancies. On the basis of virtual clinical studies, PBS-PT appears to have great potential in its ability to tightly tailor

  11. Radiation safety assessment of cobalt 60 external beam radiotherapy using the risk-matrix method

    International Nuclear Information System (INIS)

    Dumenigo, C; Vilaragut, J.J.; Ferro, R.; Guillen, A.; Ramirez, M.L.; Ortiz Lopez, P.; Rodriguez, M.; McDonnell, J.D.; Papadopulos, S.; Pereira, P.P.; Goncalvez, M.; Morales, J.; Larrinaga, E.; Lopez Morones, R.; Sanchez, R.; Delgado, J.M.; Sanchez, C.; Somoano, F.

    2008-01-01

    External beam radiotherapy is the only practice in which humans are placed directly in a radiation beam with the intention of delivering a very high dose. This is why safety in radiotherapy is critical and is a matter of interest to both radiotherapy departments and regulatory bodies. Accidental exposures have occurred throughout the world, showing the need for systematic safety assessments capable of identifying preventive measures and minimizing the consequences of accidental exposure. The risk matrix is a systematic approach which combines the relevant event features to assess the overall risk of each particular event. Once an event sequence is identified, questions such as how frequent the event is, how severe the potential consequences are, and how reliable the existing safety measures are, are answered in a risk-matrix table. The ultimate goal is to ensure that the overall risk for events with severe consequences is always low or very low. In the present study, the risk-matrix method has been applied to a hypothetical radiotherapy department, which could be equivalent to an upper-level hospital of the Ibero-American region in terms of safety checks and preventive measures. The application of the method identified 76 event sequences and revealed that the hypothetical radiotherapy department is sufficiently protected (low risk) against them, including 23 event sequences with severe consequences. The method revealed that the risk of these sequences could grow to a high level if certain specific preventive measures were degraded with time. This study has identified these preventive measures, thus facilitating a rational allocation of resources in regular controls to detect any loss of reliability. The method has proven to have an important practical value and is affordable at hospital level. The elaborated risk matrix can be easily adapted to local circumstances, in terms of existing controls and safety measures. This approach can help hospitals to identify
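
    As a toy illustration of how a risk-matrix table combines event frequency, severity of consequences and robustness of the existing safety measures into an overall risk level, the sketch below uses an invented scoring scheme; the categories, thresholds and resulting levels are assumptions for illustration and are not those of the cited study.

```python
# Illustrative risk-matrix lookup; categories and thresholds are assumptions.
FREQUENCY = ["very low", "low", "medium", "high"]
SEVERITY = ["minor", "serious", "severe", "catastrophic"]
BARRIERS = ["robust", "degraded"]          # reliability of existing safety measures

def risk_level(frequency, severity, barriers):
    score = (FREQUENCY.index(frequency)
             + SEVERITY.index(severity)
             + BARRIERS.index(barriers))
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# An event sequence with severe consequences stays at medium risk only while
# the safety barriers remain robust.
print(risk_level("low", "catastrophic", "robust"))    # medium
print(risk_level("low", "catastrophic", "degraded"))  # high
```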

  12. Optimized Projection Matrix for Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Jianping Xu

    2010-01-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have usually assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
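
    The quantity being minimized, the mutual coherence of the effective dictionary formed by the projection and sparsifying matrices, can be computed in a few lines of NumPy. The random Gaussian projection and orthonormal sparsifying basis below are generic placeholders, not the ETF-optimized matrices of the paper.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute inner product between distinct normalised columns
    of the effective dictionary D = Phi @ Psi."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(D.T @ D)                  # Gram matrix of the normalised columns
    np.fill_diagonal(G, 0.0)             # ignore the trivial self-products
    return G.max()

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 64))                    # random projection (20 samples)
Psi = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # orthonormal sparsifying basis
print(mutual_coherence(Phi, Psi))
```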

  13. Methods for converging correlation energies within the dielectric matrix formalism

    Science.gov (United States)

    Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario

    2018-03-01

    Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods, however, have a significantly higher computational cost and, similarly to correlated quantum-chemical methods, are characterized by a slow basis-set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete-basis-set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, on six points of the potential-energy surface of the methane-formaldehyde complex, and on reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
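
    The first of the two schemes, a complete-basis-set extrapolation, is often realized with a two-point inverse-power model E(X) = E_CBS + A/X^p solved for the extrapolated energy from two cardinal numbers. The sketch below only illustrates that model; the X^-3 exponent is a common choice for correlation energies and the input values are invented, not results from the paper.

```python
def cbs_extrapolate(e_x, e_y, x, y, p=3):
    """Two-point extrapolation of E(X) = E_CBS + A / X**p to the basis-set limit."""
    return (e_y * y**p - e_x * x**p) / (y**p - x**p)

# Hypothetical correlation energies (hartree) for cardinal numbers X = 3 and X = 4.
print(cbs_extrapolate(-0.3105, -0.3212, 3, 4))   # estimated basis-set-limit energy
```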

  14. Electroanalysis of cardioselective beta-adrenoreceptor blocking agent acebutolol by disposable graphite pencil electrodes with detailed redox mechanism

    OpenAIRE

    Atmanand M. Bagoji; Shreekant M. Patil; Sharanappa T. Nandibewoor

    2016-01-01

    A simple, economical graphite pencil electrode (GPE) was used for the analysis of the cardioselective, hydrophilic beta-adrenoreceptor blocking agent acebutolol (ACBT) using cyclic voltammetric, linear sweep voltammetric, differential pulse voltammetric (DPV), and square-wave voltammetric (SWV) techniques. The dependence of the current on pH, concentration, and scan rate was investigated to optimize the experimental conditions for the determination of ACBT. The electrochemical behavior of ACBT at the GPE was...

  15. Matrix elements and few-body calculations within the unitary correlation operator method

    International Nuclear Information System (INIS)

    Roth, R.; Hergert, H.; Papakonstantinou, P.

    2005-01-01

    We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)

  16. Matrix elements and few-body calculations within the unitary correlation operator method

    International Nuclear Information System (INIS)

    Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.

    2005-01-01

    We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges

  17. Hamiltonian formalism, quantization and S matrix for supergravity. [S matrix, canonical constraints]

    Energy Technology Data Exchange (ETDEWEB)

    Fradkin, E S; Vasiliev, M A [AN SSSR, Moscow. Fizicheskij Inst.]

    1977-12-05

    The canonical formalism for supergravity is constructed. The algebra of canonical constraints is found. The correct expression for the S matrix is obtained. Usual 'covariant methods' lead to an incorrect S matrix in supergravity, since a new four-particle interaction of ghost fields survives in the Lagrangian expression of the S matrix.

  18. A Simple DTC-SVM method for Matrix Converter Drives Using a Deadbeat Scheme

    DEFF Research Database (Denmark)

    Lee, Kyo-Beum; Blaabjerg, Frede; Lee, Kwang-Won

    2005-01-01

    In this paper, a simple direct torque control (DTC) method for sensorless matrix converter drives is proposed, which is characterized by a simple structure, minimal torque ripple and unity input power factor. Also a good sensorless speed-control performance in the low speed operation is obtained,...

  19. Robustness of the Voluntary Breath-Hold Approach for the Treatment of Peripheral Lung Tumors Using Hypofractionated Pencil Beam Scanning Proton Therapy

    DEFF Research Database (Denmark)

    Dueck, Jenny; Knopf, Antje-Christin; Lomax, Antony

    2016-01-01

    PURPOSE: The safe clinical implementation of pencil beam scanning (PBS) proton therapy for lung tumors is complicated by the delivery uncertainties caused by breathing motion. The purpose of this feasibility study was to investigate whether a voluntary breath-hold technique could limit the delive...

  20. Charge-constrained auxiliary-density-matrix methods for the Hartree–Fock exchange contribution

    DEFF Research Database (Denmark)

    Merlot, Patrick; Izsak, Robert; Borgoo, Alex

    2014-01-01

    Three new variants of the auxiliary-density-matrix method (ADMM) of Guidon, Hutter, and VandeVondele [J. Chem. Theory Comput. 6, 2348 (2010)] are presented with the common feature that they have a simplified constraint compared with the full orthonormality requirement of the earlier ADMM1 method. ..... All ADMM variants are tested for accuracy and performance in all-electron B3LYP calculations with several commonly used basis sets. The effect of the choice of the exchange functional for the ADMM exchange-correction term is also investigated....

  1. Computerized and Paper-and-Pencil Versions of the Rosenberg Self-Esteem Scale: A Comparison of Psychometric Features and Respondent Preferences.

    Science.gov (United States)

    Vispoel, Walter P.; Boo, Jaeyool; Bleiler, Timothy

    2001-01-01

    Evaluated the characteristics of computerized and paper-and-pencil versions of the Rosenberg Self-Esteem Scale (SES) using scores for 224 college students. Results show that mode of administration has little effect on the psychometric properties of the SES although the computerized version took longer and was preferred by examinees. (SLD)

  2. Paper-and-Pencil and Web-Based Testing: The Measurement Invariance of the Big Five Personality Tests in Applied Settings

    Science.gov (United States)

    Vecchione, Michele; Alessandri, Guido; Barbaranelli, Claudio

    2012-01-01

    This study investigates the measurement equivalence of a five-factor measure of personality across two groups applying for jobs, who completed the same questionnaire using either a paper-and-pencil (n = 429) or a web online answer format (n = 651). The data were collected using the Big Five Questionnaire-2 (BFQ-2; which is a measure of the Five…

  3. Dose distribution of secondary radiation in a water phantom for a proton pencil beam-EURADOS WG9 intercomparison exercise

    Czech Academy of Sciences Publication Activity Database

    Stolarczyk, L.; Trinkl, S.; Romero-Exposito, M.; Mojzeszek, N.; Ambrožová, Iva; Domingo, C.; Davídková, Marie; Farah, J.; Klodowska, M.; Kneževic, Z.; Liszka, M.; Majer, M.; Miljanic, S.; Ploc, Ondřej; Schwarz, M.; Harrison, R. M.; Olko, P.

    2018-01-01

    Roč. 63, č. 8 (2018), č. článku 085017. ISSN 0031-9155 Institutional support: RVO:61389005 Keywords : passive detectors * neutron dosimetry * gamma radiation dosimetry * water phantom measurements * secondary radiation measurements * pencil beam scanning proton radiotherapy Subject RIV: FP - Other Medical Disciplines OBOR OECD: Radiology, nuclear medicine and medical imaging Impact factor: 2.742, year: 2016

  4. Vibration analysis of pipes conveying fluid by transfer matrix method

    International Nuclear Information System (INIS)

    Li, Shuai-jun; Liu, Gong-min; Kong, Wei-tao

    2014-01-01

    Highlights: • A theoretical study on vibration analysis of pipes with FSI is presented. • Pipelines with high fluid pressure and velocity can be solved by the developed method. • Several pipeline schemes are discussed to illustrate the application of the method. • The proposed method is easier to apply compared to most existing procedures. • The influence of structural and fluid parameters on the FSI of pipes is analyzed. -- Abstract: Considering the effects of pipe wall thickness, fluid pressure and velocity, a developed 14-equation model is presented which describes the fluid–structure interaction behavior of pipelines. The transfer matrix method has been used for numerical modeling of both the hydraulic and structural equations. Based on these models and algorithms, several pipeline schemes are presented to illustrate the application of the proposed method. Furthermore, the influence of supports, structural properties and fluid parameters on the dynamic response and natural frequencies of the pipeline is analyzed, which shows that using optimal supports and structural properties is beneficial for reducing the vibration of pipelines

  5. A pseudospectral matrix method for time-dependent tensor fields on a spherical shell

    International Nuclear Information System (INIS)

    Brügmann, Bernd

    2013-01-01

    We construct a pseudospectral method for the solution of time-dependent, non-linear partial differential equations on a three-dimensional spherical shell. The problem we address is the treatment of tensor fields on the sphere. As a test case we consider the evolution of a single black hole in numerical general relativity. A natural strategy would be the expansion in tensor spherical harmonics in spherical coordinates. Instead, we consider the simpler and potentially more efficient possibility of a double Fourier expansion on the sphere for tensors in Cartesian coordinates. As usual for the double Fourier method, we employ a filter to address time-step limitations and certain stability issues. We find that a tensor filter based on spin-weighted spherical harmonics is successful, while two simplified, non-spin-weighted filters do not lead to stable evolutions. The derivatives and the filter are implemented by matrix multiplication for efficiency. A key technical point is the construction of a matrix multiplication method for the spin-weighted spherical harmonic filter. As an example of the efficient parallelization of the double Fourier, spin-weighted filter method, we discuss an implementation on a GPU, which achieves a speed-up of up to a factor of 20 compared to a single-core CPU implementation

  6. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    Science.gov (United States)

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has previously been shown to be a useful tool for approximately solving the Schrödinger equation in nuclear scattering problems. We use this technique, combined with Gauss quadrature for the Lagrange-mesh method, to efficiently solve for the wave functions of projectile nuclei in low-energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object-oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method for predicting the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.

  7. Comparison between two pencil-type ionization chambers with sensitive volume length of 30 cm

    International Nuclear Information System (INIS)

    Castro, Maysa C. de; Xavier, Marcos; Silva, Natalia F.; Caldas, Linda V.E.

    2016-01-01

    The use of computed tomography (CT) in imaging procedures has been growing due to advances in equipment technology. CT delivers a higher dose to the patient than other diagnostic radiology examinations, which is a cause of concern for patients. Dosimetry in CT is carried out with a pencil-type ionization chamber with a sensitive volume length of 10 cm, and studies have shown that this can underestimate the dose values. In this work two ionization chambers with a sensitive volume length of 30 cm were developed. They were submitted to the main characterization tests, and the results were within the internationally recommended limits. (author)

  8. Performance of three pencil-type ionization chambers (10 cm) in computed tomography standard beams

    International Nuclear Information System (INIS)

    Castro, Maysa C. de; Xavier, Marcos; Caldas, Linda V.E.

    2015-01-01

    The use of computed tomography (CT) has increased over the years, generating concern about the doses received by patients undergoing this procedure. It is therefore necessary to perform routine beam dosimetry with a pencil-type ionization chamber, the detector most commonly used in quality control tests of this kind of equipment. The objective of this work was to perform characterization tests in standard CT beams, namely the saturation curve, polarity effect, ion collection efficiency and linearity of response, using three ionization chambers: one commercial and two developed at IPEN. (author)

  9. On the Numerical Behavior of Matrix Splitting Iteration Methods for Solving Linear Systems

    Czech Academy of Sciences Publication Activity Database

    Bai, Z.-Z.; Rozložník, Miroslav

    2015-01-01

    Roč. 53, č. 4 (2015), s. 1716-1737 ISSN 0036-1429 R&D Projects: GA ČR GA13-06684S Institutional support: RVO:67985807 Keywords : matrix splitting * stationary iteration method * backward error * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 1.899, year: 2015
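
    For readers unfamiliar with the stationary iterations analyzed in this paper, the sketch below implements the classical Jacobi splitting A = M − N with M = diag(A) and iterates x_{k+1} = M^{-1}(N x_k + b). The small diagonally dominant system is an arbitrary example chosen so that the iteration converges; the backward and rounding-error analysis of the paper is not reproduced.

```python
import numpy as np

def jacobi_splitting_solve(A, b, iters=100):
    """Stationary iteration x_{k+1} = M^{-1} (N x_k + b) for the Jacobi
    splitting A = M - N with M = diag(A)."""
    M = np.diag(np.diag(A))
    N = M - A
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + b)
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(jacobi_splitting_solve(A, b), np.linalg.solve(A, b)))  # True
```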

  10. Distance descending ordering method: An O(n) algorithm for inverting the mass matrix in simulation of macromolecules with long branches

    Science.gov (United States)

    Xu, Xiankun; Li, Peiwen

    2017-11-01

    Fixman's work in 1974 and the follow-up studies developed a method that can factorize the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized using the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method can achieve O(n) time complexity. However, for molecules with long branches, the Cholesky decomposition of the corresponding positive definite matrix will introduce massive fill-in due to its nonzero structure. Although several methods can be used to reduce the amount of fill-in, none of them could strictly guarantee zero fill-in for all molecules according to our tests, and thus O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed on the basis of the correlations between the mass matrix and the geometrical structure of molecules. As a result, the inversion of the mass matrix retains O(n) time complexity whether or not the molecular structure has long branches.
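
    The fill-in problem described above can be seen in a tiny dense example: an "arrow" matrix, with one row and column coupled to all others as a crude stand-in for a branch point, gives a completely dense Cholesky factor when the branch point is eliminated first but no fill-in when it is ordered last. The matrix below is invented for illustration and is not a molecular mass matrix.

```python
import numpy as np

def factor_nonzeros(A):
    """Number of structural nonzeros in the (dense) Cholesky factor of A."""
    L = np.linalg.cholesky(A)
    return int(np.count_nonzero(np.abs(L) > 1e-12))

n = 6
A = 4.0 * np.eye(n)
A[0, 1:] = A[1:, 0] = 1.0            # "arrow" pointing at the first row/column

perm = np.r_[1:n, 0]                 # reorder so the branch point is eliminated last
B = A[np.ix_(perm, perm)]

# Eliminating the branch point first fills the factor completely (n(n+1)/2 = 21
# nonzeros); eliminating it last preserves the original sparsity (11 nonzeros).
print(factor_nonzeros(A), factor_nonzeros(B))
```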

  11. Matrix-operator method for calculation of dynamics of intense beams of charged particles

    International Nuclear Information System (INIS)

    Kapchinskij, M.I.; Korenev, I.L.; Rinskij, L.A.

    1989-01-01

    A calculation algorithm for particle dynamics in high-current cyclic and linear accelerators is suggested. The particle motion in six-dimensional phase space is divided into coherent and incoherent components. The incoherent motion is described by the envelope method; the particle cluster is considered to be a uniformly charged triaxial ellipsoid. The coherent motion is described in the paraxial approximation; each structural element of the accelerator transport channel is characterized by a six-dimensional matrix transforming the phase coordinates of the cluster centre and by a shift vector resulting from deviations of the focusing-element parameters from the calculated values. The effect of reflected space-charge forces is taken into account in the element matrix. The algorithm is implemented in software based on the well-known TRANSPORT program

  12. Hartree–Fock density matrix equation

    International Nuclear Information System (INIS)

    Cohen, L.; Frishberg, C.

    1976-01-01

    An equation for the Hartree–Fock density matrix is discussed, and the possibility of solving this equation directly for the density matrix, instead of solving the Hartree–Fock equation for orbitals, is considered. Toward that end the density matrix is expanded in a finite basis to obtain the matrix representative equation. The closed shell case is considered. Two numerical schemes are developed and applied to a number of examples. One example is given where the standard orbital method does not converge while the method presented here does

  13. Immersed multipurpose device for the inspection of PWR spent pencils in a loop reloaded in the Osiris pool

    International Nuclear Information System (INIS)

    Farny, Gerard.

    1982-09-01

    The equipment for the examination of the fuel cans of irradiated fuel pencils by eddy current testing in the pool is described. Despite a high residual power, there is no mechanical strain on the cladding and no interference with cooling. A high detection sensitivity is obtained. Friction is eliminated by water cushions, and the speed of operation is limited only by safety and data acquisition [fr

  14. Development and Clinical Implementation of a Universal Bolus to Maintain Spot Size During Delivery of Base of Skull Pencil Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Both, Stefan, E-mail: Stefan.Both@uphs.upenn.edu [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States); Shen, Jiajian [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States); Department of Radiation Oncology, Mayo Clinic, Phoenix, Arizona (United States); Kirk, Maura; Lin, Liyong; Tang, Shikui; Alonso-Basanta, Michelle; Lustig, Robert; Lin, Haibo; Deville, Curtiland; Hill-Kayser, Christine; Tochner, Zelig; McDonough, James [Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania (United States)

    2014-09-01

    Purpose: To report on a universal bolus (UB) designed to replace the range shifter (RS); the UB allows the treatment of shallow tumors while keeping the pencil beam scanning (PBS) spot size small. Methods and Materials: Ten patients with brain cancers treated from 2010 to 2011 were planned using the PBS technique with bolus and the RS. In-air spot sizes of the pencil beam were measured and compared for 4 conditions (open field, with RS, and with UB at 2- and 8-cm air gap) in isocentric geometry. The UB was applied in our clinic to treat brain tumors, and the plans with UB were compared with the plans with RS. Results: A UB of 5.5 cm water equivalent thickness was found to meet the needs of the majority of patients. By using the UB, the PBS spot sizes are similar to those of the open beam (P>.1). The heterogeneity index was found to be approximately 10% lower for the UB plans than for the RS plans. The coverage for plans with UB is more conformal than for plans with RS; the largest increase in sparing is usually for peripheral organs at risk. Conclusions: The integrity of the physical properties of the PBS beam can be maintained using a UB that allows for highly conformal PBS treatment design, even in a simple geometry of the fixed beam line when noncoplanar beams are used.

  15. A transfer matrix method for the analysis of fractal quantum potentials

    International Nuclear Information System (INIS)

    Monsoriu, Juan A; Villatoro, Francisco R; Marin, Maria J; UrchueguIa, Javier F; Cordoba, Pedro Fernandez de

    2005-01-01

    The scattering properties of quantum particles on a sequence of potentials converging towards a fractal one are obtained by means of the transfer matrix method. The reflection coefficients for both the fractal potential and finite periodic potential are calculated and compared. It is shown that the reflection coefficient for the fractal potential has a self-similar structure associated with the fractal distribution of the potential whose degree of self-similarity has been quantified by means of the correlation function

  16. A transfer matrix method for the analysis of fractal quantum potentials

    Energy Technology Data Exchange (ETDEWEB)

    Monsoriu, Juan A [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Villatoro, Francisco R [Departamento de Lenguajes y Ciencias de la Computacion, Universidad de Malaga, E-29071 Malaga (Spain); Marin, Maria J [Departamento de Termodinamica, Universitat de Valencia, E-46100 Burjassot (Spain); UrchueguIa, Javier F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, E-46022 Valencia (Spain); Cordoba, Pedro Fernandez de [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, E-46022 Valencia (Spain)

    2005-07-01

    The scattering properties of quantum particles on a sequence of potentials converging towards a fractal one are obtained by means of the transfer matrix method. The reflection coefficients for both the fractal potential and finite periodic potential are calculated and compared. It is shown that the reflection coefficient for the fractal potential has a self-similar structure associated with the fractal distribution of the potential whose degree of self-similarity has been quantified by means of the correlation function.
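
    The transfer matrix method used in the two records above can be sketched for any piecewise-constant potential: match ψ and ψ′ at each interface, chain the resulting 2×2 matrices, and read off the reflection coefficient from the total matrix. The natural units, the particle mass and the single rectangular barrier below are illustrative assumptions; a pre-fractal (Cantor-like) potential would simply supply longer lists of interface positions and potential values.

```python
import numpy as np

hbar = m = 1.0                          # natural units (assumption)

def reflection(E, edges, V):
    """Reflection coefficient |r|^2 for a piecewise-constant potential.
    edges: sorted interface positions; V: potential in the len(edges)+1 regions."""
    k = np.sqrt(2.0 * m * (E - np.asarray(V, dtype=complex))) / hbar
    def M(kj, x):                       # maps amplitudes (A, B) to (psi, psi') at x
        e = np.exp(1j * kj * x)
        return np.array([[e, 1.0 / e],
                         [1j * kj * e, -1j * kj / e]])
    T = np.eye(2, dtype=complex)
    for j, x in enumerate(edges):       # chain the interface matrices left to right
        T = np.linalg.inv(M(k[j + 1], x)) @ M(k[j], x) @ T
    r = -T[1, 0] / T[1, 1]              # left incidence: (1, r) maps to (t, 0)
    return abs(r) ** 2

# Single rectangular barrier of height 2 between x = 0 and x = 1, below-barrier energy.
print(reflection(E=1.0, edges=[0.0, 1.0], V=[0.0, 2.0, 0.0]))
```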

  17. Carbonate fuel cell matrix

    Science.gov (United States)

    Farooque, Mohammad; Yuh, Chao-Yi

    1996-01-01

    A carbonate fuel cell matrix comprising support particles and crack attenuator particles which are made platelet in shape to increase the resistance of the matrix to through cracking. Also disclosed is a matrix having porous crack attenuator particles and a matrix whose crack attenuator particles have a thermal coefficient of expansion which is significantly different from that of the support particles, and a method of making platelet-shaped crack attenuator particles.

  18. Lessons from half a century experience of Japanese solid rocketry since Pencil rocket

    Science.gov (United States)

    Matogawa, Yasunori

    2007-12-01

    50 years have passed since a tiny rocket, "Pencil", was launched horizontally at Kokubunji near Tokyo in 1955. Though a high level of rocket technology existed in Japan before the end of the Second World War, it was not carried forward by the country after the War. Pencil was therefore the real start of Japanese rocketry that opened the way to the present stage. In the meantime, a rocket group of the University of Tokyo contributed to the International Geophysical Year in 1957-1958 by developing bigger rockets, and in 1970 the group succeeded in injecting the first Japanese satellite, OHSUMI, into Earth orbit. It was just before the launch of OHSUMI that Japan built up its dual system of science and applications in space efforts. The former has been pursued by ISAS (the Institute of Space and Astronautical Science) of the University of Tokyo, and the latter by NASDA (National Space Development Agency). This unique system worked quite efficiently because space activities in the scientific and application areas could develop rather independently without affecting each other. Thus Japan's space science rose rapidly to the international stage with the support of solid propellant rocket technology, and, after a 20-year period of technological introduction from the US, a big liquid propellant launch vehicle, H-II, was at last developed on the basis of Japan's own technology in the early 1990s. On October 1, 2003, as part of a governmental reform, the three Japanese space agencies were consolidated into a single agency, JAXA (Japan Aerospace Exploration Agency), and Japan's space efforts began to move toward the future in a globally coordinated fashion, encompassing aeronautics, astronautics, space science, satellite technology, etc., at the same time. This paper briefly surveys the history of Japanese rocketry and draws out lessons from it to make the new history of Japan's space efforts more meaningful.

  19. Non-negative Matrix Factorization for Binary Data

    DEFF Research Database (Denmark)

    Larsen, Jacob Søgaard; Clemmensen, Line Katrine Harder

    We propose the Logistic Non-negative Matrix Factorization for decomposition of binary data. Binary data are frequently generated in e.g. text analysis, sensory data, market basket data etc. A common method for analysing non-negative data is the Non-negative Matrix Factorization, though...... this is in theory not appropriate for binary data, and thus we propose a novel Non-negative Matrix Factorization based on the logistic link function. Furthermore we generalize the method to handle missing data. The formulation of the method is compared to a previously proposed method (Tome et al., 2015). We compare...... the performance of the Logistic Non-negative Matrix Factorization to Least Squares Non-negative Matrix Factorization and Kullback-Leibler (KL) Non-negative Matrix Factorization on sets of binary data: a synthetic dataset, a set of student comments on their professors collected in a binary term-document matrix...
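
    For context on the baselines mentioned in this record, the sketch below implements plain least-squares NMF with the classic Lee-Seung multiplicative updates on a synthetic binary matrix; it is not the logistic variant proposed by the authors, and the data, rank and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def nmf_least_squares(V, rank, iters=500, eps=1e-9):
    """Least-squares NMF with Lee-Seung multiplicative updates (baseline method)."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic binary "term-document"-like matrix (30 documents x 20 terms).
rng = np.random.default_rng(1)
V = (rng.random((30, 20)) < 0.3).astype(float)

W, H = nmf_least_squares(V, rank=4)
print(np.linalg.norm(V - W @ H))   # reconstruction error of the rank-4 model
```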

  20. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    International Nuclear Information System (INIS)

    Martini, Till; Uwer, Peter

    2015-01-01

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method at NLO. We modify the recombination procedure used in jet algorithms to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method at NLO accuracy to the mass determination of top quarks produced in e+e− annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  1. Development of a Java Package for Matrix Programming

    OpenAIRE

    Lim, Ngee-Peng; Ling, Maurice HT; Lim, Shawn YC; Choi, Ji-Hee; Teo, Henry BK

    2003-01-01

    We have assembled a Java package, known as MatrixPak, of four classes for the purpose of numerical matrix computation. The classes are matrix, matrix_operations, StrToMatrix, and MatrixToStr, all of which are inherited from the java.lang.Object class. Class matrix defines a matrix as a two-dimensional array of float types and contains the following mathematical methods: transpose, adjoint, determinant, inverse, minor and cofactor. Class matrix_operations contains the following mathematical method...

  2. The R-matrix theory

    International Nuclear Information System (INIS)

    Descouvemont, P; Baye, D

    2010-01-01

    The different facets of the R-matrix method are presented pedagogically in a general framework. Two variants have been developed over the years: (i) The 'calculable' R-matrix method is a calculational tool to derive scattering properties from the Schrödinger equation in a large variety of physical problems. It was developed rather independently in atomic and nuclear physics with too little mutual influence. (ii) The 'phenomenological' R-matrix method is a technique to parametrize various types of cross sections. It was mainly (or even exclusively) used in nuclear physics. Both directions are explained by starting from the simple problem of scattering by a potential. They are illustrated by simple examples in nuclear and atomic physics. In addition to elastic scattering, the R-matrix formalism is applied to inelastic and radiative-capture reactions. We also present more recent and more ambitious applications of the theory in nuclear physics.

  3. Leakage localisation method in a water distribution system based on sensitivity matrix: methodology and real test

    OpenAIRE

    Pascual Pañach, Josep

    2010-01-01

    Leaks are present in all water distribution systems. In this paper a method for leakage detection and localisation is presented. It uses pressure measurements and simulation models. The leakage localisation methodology is based on the pressure sensitivity matrix. The sensitivity is normalised and binarised using a common threshold for all nodes, so that a signatures matrix is obtained. An optimal pressure-sensor distribution methodology is also developed, but it is not used in the real test. To validate this...
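
    A minimal sketch of the binarised-signature idea described above: threshold the pressure-sensitivity matrix column by column, threshold the measured residual vector with the same common value, and rank the candidate leak nodes by how well their signatures match. The sensor and node counts, the values and the threshold are all invented for illustration.

```python
import numpy as np

def rank_leak_candidates(S, residual, threshold):
    """Rank candidate leak nodes by agreement between their binarised
    sensitivity signature (columns of S) and the binarised residual."""
    signatures = (np.abs(S) > threshold).astype(int)        # sensors x nodes
    observed = (np.abs(residual) > threshold).astype(int)   # sensors
    agreement = (signatures == observed[:, None]).sum(axis=0)
    return np.argsort(agreement)[::-1]                      # best match first

# Toy network: 4 pressure sensors, 3 candidate leak nodes (values are assumptions).
S = np.array([[0.9, 0.1, 0.0],
              [0.7, 0.8, 0.1],
              [0.1, 0.9, 0.7],
              [0.0, 0.2, 0.8]])
residual = np.array([0.85, 0.75, 0.15, 0.05])   # measured pressure-drop pattern
print(rank_leak_candidates(S, residual, threshold=0.5))   # node 0 ranks first
```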

  4. Adaptation of chemical methods of analysis to the matrix of pyrite-acidified mining lakes

    International Nuclear Information System (INIS)

    Herzsprung, P.; Friese, K.

    2000-01-01

    Owing to the unusual matrix of pyrite-acidified mining lakes, the analysis of chemical parameters may be difficult. A number of methodological improvements have been developed so far, and a comprehensive validation of the methods is envisaged. The adaptation of the available methods to small-volume samples of sediment pore waters and the adaptation of the sensitivity to the expected concentration ranges are important elements of the methods applied in analyses of biogeochemical processes in mining lakes [de

  5. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  6. Polychoric/Tetrachoric Matrix or Pearson Matrix? A methodological study

    Directory of Open Access Journals (Sweden)

    Dominguez Lara, Sergio Alexis

    2014-04-01

    The use of Pearson's product-moment correlation is common in most factor-analysis studies in psychology, but it is known that this statistic is only applicable when the related variables are on an interval scale and normally distributed; when used with ordinal data it may produce a distorted correlation matrix. Thus, using polychoric/tetrachoric matrices is a suitable option in item-level factor analysis when the items are measured at a nominal or ordinal level. The aim of this study was to show the differences in the KMO measure, Bartlett's test, the determinant of the matrix, the percentage of variance explained and the factor loadings in the depression-trait scale of the Depression Inventory Trait-State and the Neuroticism dimension of the short form of the Eysenck Personality Questionnaire-Revised, regarding the use of polychoric/tetrachoric versus Pearson matrices. These instruments were analyzed with different extraction methods (Maximum Likelihood, Minimum Rank Factor Analysis, Unweighted Least Squares and Principal Components), keeping the rotation method (Promin) constant. Differences were observed in the sample adequacy measures, as well as in the explained variance and the factor loadings, for the solutions based on polychoric/tetrachoric matrices. It can thus be concluded that polychoric/tetrachoric matrices give better results than Pearson matrices in item-level factor analysis using different methods.

  7. Impact of dose engine algorithm in pencil beam scanning proton therapy for breast cancer.

    Science.gov (United States)

    Tommasino, Francesco; Fellin, Francesco; Lorentini, Stefano; Farace, Paolo

    2018-06-01

    Proton therapy for the treatment of breast cancer is attracting increasing interest due to the potential reduction of radiation-induced side effects such as cardiac and pulmonary toxicity. While several in silico studies have demonstrated the gain in plan quality offered by pencil beam scanning (PBS) compared to passive scattering techniques, the related dosimetric uncertainties have been poorly investigated so far. Five breast cancer patients were planned with the Raystation 6 analytical pencil beam (APB) and Monte Carlo (MC) dose calculation algorithms. Plans were optimized with APB and then MC was used to recalculate the dose distribution. Movable snout and beam splitting techniques (i.e. using two sub-fields for the same beam entrance, one with and the other without a range shifter) were considered. PTV dose statistics were recorded. The same planning configurations were adopted for the experimental benchmark. Dose distributions were measured with a 2D array of ionization chambers and compared to the APB- and MC-calculated ones by means of a γ analysis (agreement criteria 3%, 3 mm). Our results indicate that, when using proton PBS for breast cancer treatment, the Raystation 6 APB algorithm does not provide sufficient accuracy, especially with large air gaps. On the contrary, the MC algorithm resulted in much higher accuracy in all beam configurations tested and is to be recommended. Centers where an MC algorithm is not yet available should consider a careful use of APB, possibly combined with a movable snout system or in any case with strategies aimed at minimizing air gaps. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
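
    The γ analysis used for the comparison above can be illustrated in one dimension: for each reference point, gamma is the minimum over the evaluated profile of the combined, normalised dose difference and spatial distance, and a point passes the 3%/3 mm criterion when γ ≤ 1. The Gaussian profiles below are synthetic, not the measured chamber-array or calculated data of the study.

```python
import numpy as np

def gamma_index_1d(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
    """Global 1D gamma index (dose difference dd as a fraction of the reference
    maximum, distance-to-agreement dta in mm); gamma <= 1 means the point passes."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist_term = ((x - xi) / dta) ** 2
        dose_term = ((dose_eval - di) / (dd * d_max)) ** 2
        gamma[i] = np.sqrt(np.min(dist_term + dose_term))
    return gamma

x = np.linspace(0.0, 100.0, 201)                       # positions in mm
dose_ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # "measured" profile
dose_eval = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)   # "calculated", slightly off
g = gamma_index_1d(x, dose_eval, dose_ref)
print((g <= 1.0).mean())                               # fraction of passing points
```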

  8. Experimental determination and verification of the parameters used in a proton pencil beam algorithm

    International Nuclear Information System (INIS)

    Szymanowski, H.; Mazal, A.; Nauraye, C.; Biensan, S.; Ferrand, R.; Murillo, M.C.; Caneva, S.; Gaboriaud, G.; Rosenwald, J.C.

    2001-01-01

    We present an experimental procedure for the determination and the verification under practical conditions of the physical and computational parameters used in our proton pencil beam algorithm. The calculation of the dose delivered by a single pencil beam relies on a measured spread-out Bragg peak, and the description of its radial spread at depth features simple specific parameters accounting individually for the influence of the beam line as a whole, the beam energy modulation, the compensator, and the patient medium. For determining the experimental values of the physical parameters related to proton scattering, we utilized a simple relation between the Gaussian radial spreads and the width of the lateral penumbras. The contribution from the beam line has been extracted from lateral penumbra measurements in air: a linear variation with the collimator-to-point distance has been observed. Analytically predicted radial spreads within the patient were in good agreement with experimental values in water under various reference conditions. The results indicated no significant influence of the beam energy modulation. Using measurements in the presence of Plexiglas slabs, a simple assumption on the effective source of scattering due to the compensator has been made, leading to accurate radial spread calculations. Dose measurements in the presence of complex-shaped compensators have been used to assess the performance of the algorithm supplied with the adequate physical parameters. One of these compensators has also been used, together with a reference configuration, for investigating a set of computational parameters that decrease the calculation time while maintaining a high level of accuracy. Faster dose computations have been performed for algorithm evaluation in the presence of geometrical and patient compensators, and have shown good agreement with the measured dose distributions

  9. Extending the Matrix Element Method beyond the Born approximation: calculating event weights at next-to-leading order accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik,Newtonstraße 15, 12489 Berlin (Germany)

    2015-09-14

    In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method at NLO. We modify the recombination procedure used in jet algorithms to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method at NLO accuracy to the mass determination of top quarks produced in e+e− annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.

  10. Response effects due to bystander presence in CASI and paper-and-pencil surveys of drug use and alcohol use.

    Science.gov (United States)

    Aquilino, W S; Wright, D L; Supple, A J

    2000-01-01

    In this study we investigated the influence of bystanders on self-administered interviews asking about the use of alcohol and illicit drugs. Interview participants were adolescents and young adults living in urban and suburban areas of the United States. Participants were assigned randomly to either a computerized or a paper-and-pencil self-administered interview. Results show that the impact of bystanders during the interview varies according to the identity of the bystander, age of the person interviewed, and the mode of interview. When a parent was present during the interview, survey participants were less likely to report the use of alcohol and marijuana. The influence of parents was stronger for adolescents than for young adults. The use of computer-assisted self-administered interviewing, compared to interviews with paper-and-pencil forms, reduced the effects due to the presence of parents during the interview. The presence of siblings during the interview had a small, negative effect on reports of using alcohol or illicit drugs. Among married or cohabiting respondents, the presence of the husband, wife, or live-in partner had no influence on reports of alcohol use or drug use.

  11. Generalized matrix method for transmission of neutrons through multilayer magnetic system with non-colinear magnetization

    International Nuclear Information System (INIS)

    Radu, F.; Ignatovich, V.K.

    1999-01-01

    A generalized matrix method (GMM) for reflection and transmission of polarized and nonpolarized neutrons in multilayer systems with non-colinear magnetization of neighboring layers is developed. Several methods exist for calculating the reflection and transmission coefficients of multilayer systems (MS). We consider here only two of them. One is the recurrence method (RM), and the other is the matrix method. Previously these methods were used for scalar particles and for spinor particles. In the latter case a limitation was imposed on the directions of the magnetization of the different layers: they were required to lie in the plane parallel to the layers. In 1995 Fermon described a different approach for neutrons in MS, in which the behaviour of the wave inside the layers depends on the position within the plane. The RM, as shown by us earlier, permits treating multilayer systems with arbitrary directions of the magnetization. We show how to treat these systems with the updated matrix method, which we call the generalized matrix method. In the GMM the transmission and reflection of a layered system are obtained by finding a 4 x 4 matrix, which is a product of elementary 4 x 4 matrices related to the different layers, while in the RM the solution is found by recurrent application of the same procedure of finding the reflection and transmission matrices for a continuously increasing number of layers. The RM permits a simple algorithm for writing analytical formulas for the reflection and transmission. However, for more or less complicated systems these formulas become useless and one needs to do numerical calculations. The GMM does not give a simple analytical algorithm, but it gives a very simple numerical algorithm. We have developed two computer codes for computing the coefficients of reflection and transmission of a layered system using the GMM and RM methods. The calculated reflectivities R++ and R+- for a polarized beam which falls on
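
    The product-of-layer-matrices idea is easiest to see in the scalar-particle special case mentioned above, where each layer contributes a 2 x 2 matrix instead of a 4 x 4 spinor matrix. The following minimal numpy sketch is not the authors' code: it assumes vacuum on both sides of the stack, units in which ħ²/2m = 1, and purely illustrative layer potentials; it multiplies the elementary layer matrices and extracts a transmission coefficient.

        import numpy as np

        def layer_matrix(E, V, d):
            # 2 x 2 matrix propagating the state (psi, psi') across one layer of
            # thickness d and constant potential V (units with hbar^2/2m = 1)
            k = np.sqrt(complex(E - V))
            return np.array([[np.cos(k * d),        np.sin(k * d) / k],
                             [-k * np.sin(k * d),   np.cos(k * d)]])

        def transmission(E, potentials, thicknesses):
            # overall matrix = ordered product of the elementary layer matrices
            M = np.eye(2, dtype=complex)
            for V, d in zip(potentials, thicknesses):
                M = layer_matrix(E, V, d) @ M
            k0 = np.sqrt(complex(E))                     # wavenumber in the outer vacuum
            a = 1j * k0 * M[0, 0] - M[1, 0]
            b = 1j * k0 * M[1, 1] + k0 * k0 * M[0, 1]
            r = (b - a) / (a + b)                        # reflection amplitude
            t = M[0, 0] * (1 + r) + 1j * k0 * M[0, 1] * (1 - r)
            return abs(t) ** 2                           # |t|^2 + |r|^2 = 1 for real potentials

        # transmission through a two-layer barrier (arbitrary illustrative units)
        T = transmission(E=1.0, potentials=[1.5, 0.5], thicknesses=[2.0, 3.0])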

  12. A Concise Method for Storing and Communicating the Data Covariance Matrix

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Nancy M [ORNL

    2008-10-01

    The covariance matrix associated with experimental cross section or transmission data consists of several components. Statistical uncertainties on the measured quantity (counts) provide a diagonal contribution. Off-diagonal components arise from uncertainties on the parameters (such as normalization or background) that figure into the data reduction process; these are denoted systematic or common uncertainties, since they affect all data points. The full off-diagonal data covariance matrix (DCM) can be extremely large, since the size is the square of the number of data points. Fortunately, it is not necessary to explicitly calculate, store, or invert the DCM. Likewise, it is not necessary to explicitly calculate, store, or use the inverse of the DCM. Instead, it is more efficient to accomplish the same results using only the various component matrices that appear in the definition of the DCM. Those component matrices are either diagonal or small (the number of data points times the number of data-reduction parameters); hence, this implicit data covariance method requires far less array storage and far fewer computations while producing more accurate results.
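
    As a sketch of that implicit approach (not the code referenced in the report, and assuming the decomposition the abstract describes, C = D + G M Gᵀ with diagonal statistical variances D, a tall matrix G of sensitivities to the data-reduction parameters, and a small parameter covariance M), the Woodbury identity lets a quantity such as chi-squared be evaluated without ever forming or inverting the full data covariance matrix.

        import numpy as np

        def chi2_implicit(r, d, G, M):
            # chi^2 = r^T C^{-1} r for C = D + G M G^T, using only the small components
            Dinv_r = r / d                                   # D^{-1} r  (D diagonal, stored as a vector)
            Dinv_G = G / d[:, None]                          # D^{-1} G
            S = np.linalg.inv(M) + G.T @ Dinv_G              # small (p x p) matrix
            correction = Dinv_G @ np.linalg.solve(S, G.T @ Dinv_r)
            Cinv_r = Dinv_r - correction                     # Woodbury identity for C^{-1} r
            return float(r @ Cinv_r)

        # illustrative dimensions only: 200 data points, 3 shared reduction parameters
        rng = np.random.default_rng(0)
        r = rng.normal(size=200)                             # residuals (data - theory)
        d = np.full(200, 0.04)                               # statistical variances
        G = rng.normal(size=(200, 3))
        M = np.diag([0.01, 0.02, 0.005])
        print(chi2_implicit(r, d, G, M))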

  13. ASTM and VAMAS activities in titanium matrix composites test methods development

    Science.gov (United States)

    Johnson, W. S.; Harmon, D. M.; Bartolotta, P. A.; Russ, S. M.

    1994-01-01

    Titanium matrix composites (TMC's) are being considered for a number of aerospace applications ranging from high performance engine components to airframe structures in areas that require high stiffness to weight ratios at temperatures up to 400 C. TMC's exhibit unique mechanical behavior due to fiber-matrix interface failures, matrix cracks bridged by fibers, thermo-viscoplastic behavior of the matrix at elevated temperatures, and the development of significant thermal residual stresses in the composite due to fabrication. Standard testing methodology must be developed to reflect the uniqueness of this type of material systems. The purpose of this paper is to review the current activities in ASTM and Versailles Project on Advanced Materials and Standards (VAMAS) that are directed toward the development of standard test methodology for titanium matrix composites.

  14. Analysis of Off Gas From Disintegration Process of Graphite Matrix by Electrochemical Method

    International Nuclear Information System (INIS)

    Tian Lifang; Wen Mingfen; Chen Jing

    2010-01-01

    Using an electrochemical method with salt solutions as the electrolyte, some gaseous substances (off gas) are generated during the disintegration of graphite from high-temperature gas-cooled reactor fuel elements. The off gas is determined by gas chromatography to be composed of H2, O2, N2, CO2 and NOx. Only about 1.5% of the graphite matrix is oxidized to CO2. Compared to the direct burning-graphite method, less off gas, especially CO2, is generated in the disintegration process of graphite by the electrochemical method and the treatment of the off gas becomes much easier. (authors)

  15. New Multi-HAzard and MulTi-RIsk Assessment MethodS for Europe (MATRIX): A research program towards mitigating multiple hazards and risks in Europe

    Science.gov (United States)

    Fleming, K. M.; Zschau, J.; Gasparini, P.; Modaressi, H.; Matrix Consortium

    2011-12-01

    Scientists, engineers, civil protection and disaster managers typically treat natural hazards and risks individually. This leads to the situation where the frequent causal relationships between the different hazards and risks, e.g., earthquakes and volcanos, or floods and landslides, are ignored. Such an oversight may potentially lead to inefficient mitigation planning. As part of their efforts to confront this issue, the European Union, under its FP7 program, is supporting the New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe or MATRIX project. The focus of MATRIX is on natural hazards, in particular earthquakes, landslides, volcanos, wild fires, storms and fluvial and coastal flooding. MATRIX will endeavour to develop methods and tools to tackle multi-type natural hazards and risks within a common framework, focusing on methodologies that are suited to the European context. The work will involve an assessment of current single-type hazard and risk assessment methodologies, including a comparison and quantification of uncertainties and harmonization of single-type methods, examining the consequence of cascade effects within a multi-hazard environment, time-dependent vulnerability, decision making and support for multi-hazard mitigation and adaption, and a series of test cases. Three test sites are being used to assess the methods developed within the project (Naples, Cologne, and the French West Indies), as well as a "virtual city" based on a comprehensive IT platform that will allow scenarios not represented by the test cases to be examined. In addition, a comprehensive dissemination program that will involve national platforms for disaster management, as well as various outreach activities, will be undertaken. The MATRIX consortium consists of ten research institutions (nine European and one Canadian), an end-user (i.e., one of the European national platforms for disaster reduction) and a partner from industry.

  16. Using computer-assisted survey instruments instead of paper and pencil increased completeness of self-administered sexual behavior questionnaires.

    Science.gov (United States)

    Spark, Simone; Lewis, Dyani; Vaisey, Alaina; Smyth, Eris; Wood, Anna; Temple-Smith, Meredith; Lorch, Rebecca; Guy, Rebecca; Hocking, Jane

    2015-01-01

    To compare the data quality, logistics, and cost of a self-administered sexual behavior questionnaire administered either using a computer-assisted survey instrument (CASI) or by paper and pencil in a primary care clinic. A self-administered sexual behavior questionnaire was administered to 16-29 year olds attending general practice. Questionnaires were administered by either paper and pencil (paper) or CASI. A personal digital assistant was used to self-administer the CASI. A total of 4,491 people completed the questionnaire, with 46.9% responses via CASI and 53.2% by paper. Completion of questions was greater for CASI than for paper for sexual behavior questions: number of sexual partners [odds ratio (OR), 6.85; 95% confidence interval (CI): 3.32, 14.11] and ever having had sex with a person of the same gender (OR, 2.89; 95% CI: 1.52, 5.49). The median number of questions answered was higher for CASI than for paper (17.6 vs. 17.2; P questionnaire compared with $11.83 for paper. Electronic devices using CASI are a tool that can increase participants' questionnaire responses and deliver more complete data for a sexual behavior questionnaire in primary care clinics. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Aspects of fabrication aluminium matrix heterophase composites by suspension method

    Science.gov (United States)

    Dolata, A. J.; Dyzia, M.

    2012-05-01

    Composites with an aluminium alloy matrix (AlMMC) exhibit several advantageous properties such as good strength, stiffness, low density, resistance and dimensional stability to elevated temperatures, good thermal expansion coefficient and particularly high resistance to friction wear. Therefore such composites are more and more used in modern engineering constructions. Composites reinforced with hard ceramic particles (Al2O3, SiC) are gradually being implemented into production in automotive or aircraft industries. Another application of AlMMC is in the electronics industry, where the dimensional stability and capacity to absorb and remove heat is used in radiators. However the main problems are still: a reduction of production costs, developing methods of composite material tests and final product quality assessment, standardisation, development of recycling and mechanical processing methods. AlMMC production technologies, based on liquid-phase methods, and the shaping of products by casting methods, belong to the cheapest production methods. Application of a suspension method for the production of composites with heterophase reinforcement may turn out to be a new material and technological solution. The article presents the material and technological aspects of the transfer procedures for the production of composite suspensions from laboratory scale to a semi-industrial scale.

  18. Aspects of fabrication aluminium matrix heterophase composites by suspension method

    International Nuclear Information System (INIS)

    Dolata, A J; Dyzia, M

    2012-01-01

    Composites with an aluminium alloy matrix (AlMMC) exhibit several advantageous properties such as good strength, stiffness, low density, resistance and dimensional stability to elevated temperatures, good thermal expansion coefficient and particularly high resistance to friction wear. Therefore such composites are more and more used in modern engineering constructions. Composites reinforced with hard ceramic particles (Al2O3, SiC) are gradually being implemented into production in automotive or aircraft industries. Another application of AlMMC is in the electronics industry, where the dimensional stability and capacity to absorb and remove heat is used in radiators. However the main problems are still: a reduction of production costs, developing methods of composite material tests and final product quality assessment, standardisation, development of recycling and mechanical processing methods. AlMMC production technologies, based on liquid-phase methods, and the shaping of products by casting methods, belong to the cheapest production methods. Application of a suspension method for the production of composites with heterophase reinforcement may turn out to be a new material and technological solution. The article presents the material and technological aspects of the transfer procedures for the production of composite suspensions from laboratory scale to a semi-industrial scale.

  19. Efficient improvement of virtual crack extension method by a derivative of the finite element stiffness matrix

    International Nuclear Information System (INIS)

    Ishikawa, H.; Nakano, S.; Yuuki, R.; Chung, N.Y.

    1991-01-01

    In the virtual crack extension method, the stress intensity factor, K, is obtained from the converged value of the energy release rate computed from the difference of the finite element stiffness matrix when small crack extensions are taken. Instead of the numerical difference of the finite element stiffness, a new method that uses a direct derivative of the finite element stiffness matrix with respect to crack length is proposed. By the present method, the results of some example problems, such as uniform tension of a square plate with a center crack and of a rectangular plate with an internal slant crack, are obtained with high accuracy and good efficiency. Compared with analytical results, the present values of the stress intensity factors for these problems are obtained with an error of less than 0.6%. This demonstrates numerically the usefulness of the present method. A personal computer program for the analysis has been developed.

  20. Simplified LCA and matrix methods in identifying the environmental aspects of a product system.

    Science.gov (United States)

    Hur, Tak; Lee, Jiyong; Ryu, Jiyeon; Kwon, Eunsun

    2005-05-01

    In order to effectively integrate environmental attributes into the product design and development processes, it is crucial to identify the significant environmental aspects related to a product system within a relatively short period of time. In this study, the usefulness of life cycle assessment (LCA) and a matrix method as tools for identifying the key environmental issues of a product system were examined. For this, a simplified LCA (SLCA) method that can be applied to Electrical and Electronic Equipment (EEE) was developed to efficiently identify their significant environmental aspects for eco-design, since a full scale LCA study is usually very detailed, expensive and time-consuming. The environmentally responsible product assessment (ERPA) method, which is one of the matrix methods, was also analyzed. Then, the usefulness of each method in eco-design processes was evaluated and compared using the case studies of the cellular phone and vacuum cleaner systems. It was found that the SLCA and the ERPA methods provided different information but they complemented each other to some extent. The SLCA method generated more information on the inherent environmental characteristics of a product system so that it might be useful for new design/eco-innovation when developing a completely new product or method where environmental considerations play a major role from the beginning. On the other hand, the ERPA method gave more information on the potential for improving a product so that it could be effectively used in eco-redesign which intends to alleviate environmental impacts of an existing product or process.

  1. A Matrix Method Based on the Fibonacci Polynomials to the Generalized Pantograph Equations with Functional Arguments

    Directory of Open Access Journals (Sweden)

    Ayşe Betül Koç

    2014-01-01

    Full Text Available A pseudospectral method based on the Fibonacci operational matrix is proposed to solve generalized pantograph equations with linear functional arguments. By using this method, approximate solutions of the problems are easily obtained in form of the truncated Fibonacci series. Some illustrative examples are given to verify the efficiency and effectiveness of the proposed method. Then, the numerical results are compared with other methods.
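
    For reference, here is a small sketch (not the paper's code) of the Fibonacci polynomial basis from which the truncated series is built, using the standard recurrence F_1(x) = 1, F_2(x) = x, F_{n+1}(x) = x·F_n(x) + F_{n-1}(x); an ordinary least-squares fit stands in for the paper's collocation and operational-matrix machinery.

        import numpy as np

        def fibonacci_basis(x, N):
            # array of shape (len(x), N) holding F_1 ... F_N evaluated at the points x
            x = np.asarray(x, dtype=float)
            F = np.zeros((x.size, N))
            F[:, 0] = 1.0
            if N > 1:
                F[:, 1] = x
            for n in range(2, N):
                F[:, n] = x * F[:, n - 1] + F[:, n - 2]   # F_{n+1} = x*F_n + F_{n-1}
            return F

        # least-squares truncated Fibonacci-series fit of a smooth function on [0, 1]
        x = np.linspace(0.0, 1.0, 50)
        c, *_ = np.linalg.lstsq(fibonacci_basis(x, 6), np.exp(x), rcond=None)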

  2. On matrix diffusion: formulations, solution methods and qualitative effects

    Science.gov (United States)

    Carrera, Jesús; Sánchez-Vila, Xavier; Benet, Inmaculada; Medina, Agustín; Galarza, Germán; Guimerà, Jordi

    Matrix diffusion has become widely recognized as an important transport mechanism. Unfortunately, accounting for matrix diffusion complicates solute-transport simulations. This problem has led to simplified formulations, partly motivated by the solution method. As a result, some confusion has been generated about how to properly pose the problem. One of the objectives of this work is to find some unity among existing formulations and solution methods. In doing so, some asymptotic properties of matrix diffusion are derived. Specifically, early-time behavior (short tests) depends only on φm²RmDm/Lm², whereas late-time behavior (long tracer tests) depends only on φmRm, and not on the matrix diffusion coefficient or on block size and shape. The latter is always true for the mean arrival time. These properties help in: (a) analyzing the qualitative behavior of matrix diffusion; (b) explaining one paradox of solute transport through fractured rocks (the apparent dependence of porosity on travel time); (c) discriminating between matrix diffusion and other problems (such as kinetic sorption or heterogeneity); and (d) describing identifiability problems and ways to overcome them.

  3. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  4. The fitness for purpose of analytical methods applied to fluorimetric uranium determination in water matrix

    International Nuclear Information System (INIS)

    Grinman, Ana; Giustina, Daniel; Mondini, Julia; Diodat, Jorge

    2008-01-01

    Full text: This paper describes the steps that should be followed by a laboratory in order to validate the fluorimetric method for natural uranium in a water matrix. The validation of an analytical method is a necessary requirement prior to accreditation, under the ISO/IEC 17025 standard, of a non-standardized method. Different analytical techniques differ in the set of variables to be validated. Depending on the chemical process, measurement technique, matrix type, data fitting and measurement efficiency, a laboratory must set up experiments to verify the reliability of the data, through the application of several statistical tests and by participating in Quality Programs (QP) organized by reference laboratories such as the National Institute of Standards and Technology (NIST), the National Physical Laboratory (NPL), or the Environmental Measurements Laboratory (EML). However, participation in QPs involves not only international reference laboratories but also the national ones that are able to prove proficiency to the Argentinean Accreditation Board. The parameters that the ARN laboratory had to validate in the fluorimetric method, in accordance with the Eurachem guide and IUPAC definitions, are: Detection Limit, Quantification Limit, Precision, Intra-laboratory Precision, Reproducibility Limit, Repeatability Limit, Linear Range and Robustness. Assays to establish the above parameters were designed on the basis of statistical requirements, and a detailed data treatment is presented together with the respective tests in order to show the parameters validated. As a final conclusion, uranium determination by fluorimetry is a reliable method for direct measurement to meet radioprotection requirements in a water matrix, within its linear range, which is fixed every time a calibration is carried out at the beginning of the analysis. The detection limit (depending on the blank standard deviation and slope) varies between 3 ug U and 5 ug U, which yields minimum detectable concentrations (MDC) of

  5. Validity of Cognitive ability tests – comparison of computerized adaptive testing with paper and pencil and computer-based forms of administrations

    Czech Academy of Sciences Publication Activity Database

    Žitný, P.; Halama, P.; Jelínek, Martin; Květon, Petr

    2012-01-01

    Vol. 54, No. 3 (2012), pp. 181-194, ISSN 0039-3320 R&D Projects: GA ČR GP406/09/P284 Institutional support: RVO:68081740 Keywords: item response theory * computerized adaptive testing * paper and pencil * computer-based * criterion and construct validity * efficiency Subject RIV: AN - Psychology Impact factor: 0.215, year: 2012

  6. Thermal and mechanical behavior of metal matrix and ceramic matrix composites

    Science.gov (United States)

    Kennedy, John M. (Editor); Moeller, Helen H. (Editor); Johnson, W. S. (Editor)

    1990-01-01

    The present conference discusses local stresses in metal-matrix composites (MMCs) subjected to thermal and mechanical loads, the computational simulation of high-temperature MMCs' cyclic behavior, an analysis of a ceramic-matrix composite (CMC) flexure specimen, and a plasticity analysis of fibrous composite laminates under thermomechanical loads. Also discussed are a comparison of methods for determining the fiber-matrix interface frictional stresses of CMCs, the monotonic and cyclic behavior of an SiC/calcium aluminosilicate CMC, the mechanical and thermal properties of an SiC particle-reinforced Al alloy MMC, the temperature-dependent tensile and shear response of a graphite-reinforced 6061 Al-alloy MMC, the fiber/matrix interface bonding strength of MMCs, and fatigue crack growth in an Al2O3 short fiber-reinforced Al-2Mg matrix MMC.

  7. Applying the response matrix method for solving coupled neutron diffusion and transport problems

    International Nuclear Information System (INIS)

    Sibiya, G.S.

    1980-01-01

    The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore it is shown that sufficient accuracy of the method is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de]

  8. Improved parallel solution techniques for the integral transport matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)

    2011-07-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
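
    The red-black idea can be illustrated on a much simpler problem than the ITMM. The sketch below is an assumption-level analogy, not the solver described above: it applies a red-black Gauss-Seidel sweep to a 2-D Poisson problem, where points of one colour depend only on points of the other colour and could therefore be updated in parallel, just as the coloured sub-domains are in the PGS algorithm.

        import numpy as np

        def red_black_gauss_seidel(u, f, h, sweeps=100):
            # Red-black Gauss-Seidel for -laplacian(u) = f on a uniform grid with
            # fixed boundary values; each colour half-sweep is embarrassingly parallel.
            for _ in range(sweeps):
                for color in (0, 1):                      # red points, then black points
                    for i in range(1, u.shape[0] - 1):
                        for j in range(1, u.shape[1] - 1):
                            if (i + j) % 2 == color:
                                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                                  u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
            return u

        n = 33
        u = np.zeros((n, n))                              # zero boundary values
        f = np.ones((n, n))                               # uniform source term
        u = red_black_gauss_seidel(u, f, h=1.0 / (n - 1))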

  9. Improved parallel solution techniques for the integral transport matrix method

    International Nuclear Information System (INIS)

    Zerr, R. Joseph; Azmy, Yousry Y.

    2011-01-01

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner. (author)

  10. A new version of transfer matrix method for multibody systems

    Energy Technology Data Exchange (ETDEWEB)

    Rui, Xiaoting, E-mail: ruixt@163.net [Nanjing University of Science and Technology, Institute of Launch Dynamics (China); Bestle, Dieter, E-mail: bestle@b-tu.de [Brandenburg University of Technology, Engineering Mechanics and Vehicle Dynamics (Germany); Zhang, Jianshu, E-mail: zhangdracpa@sina.com; Zhou, Qinbo, E-mail: zqb912-new@163.com [Nanjing University of Science and Technology, Institute of Launch Dynamics (China)

    2016-10-15

    In order to avoid the global dynamics equations and increase the computational efficiency for multibody system dynamics (MSD), the transfer matrix method of multibody system (MSTMM) has been developed and applied very widely in research and engineering in recent 20 years. It differs from ordinary methods in multibody system dynamics with respect to the feature that there is no need for a global dynamics equation, and it uses low-order matrices for high computational efficiency. For linear systems, MSTMM is exact even if continuous elements like beams are involved. The discrete time MSTMM, however, has to use local linearization. In order to release the method from such approximations, a new version of MSTMM is presented in this paper where translational and angular accelerations, on the one hand, and internal forces and moments, on the other hand, are used as state variables. Already linear relationships among these quantities are utilized, which results in new element transfer matrices and algorithms making the study of multibody systems as simple as the study of single bodies. The proposed approach also allows combining MSTMM with any general numerical integration procedure. Some numerical examples of MSD are given to demonstrate the proposed method.
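
    The flavour of transfer-matrix bookkeeping that MSTMM builds on can be shown with the classical 2 x 2 case. The sketch below uses a hypothetical fixed-free chain of springs and point masses (the acceleration-based state variables of the new version are not reproduced here): the overall transfer matrix is the ordered product of low-order element matrices, and natural frequencies follow from a scalar boundary condition rather than from global dynamics equations.

        import numpy as np

        # state vector z = [displacement, internal force] at harmonic frequency w
        def spring(k):
            return np.array([[1.0, 1.0 / k], [0.0, 1.0]])

        def mass(m, w):
            return np.array([[1.0, 0.0], [-w * w * m, 1.0]])

        def boundary_residual(w, k=1.0, m=1.0, n=2):
            U = np.eye(2)
            for _ in range(n):                     # n identical spring-mass cells
                U = mass(m, w) @ spring(k) @ U     # overall transfer matrix, wall -> free end
            return U[1, 1]                         # force at the free end must vanish

        # natural frequencies are the roots of the residual; scan for sign changes
        ws = np.linspace(0.01, 3.0, 3000)
        res = np.array([boundary_residual(w) for w in ws])
        roots = ws[1:][np.sign(res[1:]) != np.sign(res[:-1])]
        print(roots)   # close to sqrt((3 -/+ sqrt(5))/2) ~ 0.618 and 1.618 for two unit cells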

  11. Method and apparatus for evaluating structural weakness in polymer matrix composites

    Science.gov (United States)

    Wachter, Eric A.; Fisher, Walter G.

    1996-01-01

    A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image.

  12. Analytic matrix elements with shifted correlated Gaussians

    DEFF Research Database (Denmark)

    Fedorov, D. V.

    2017-01-01

    Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.

  13. Immobilization of cellulase using porous polymer matrix

    International Nuclear Information System (INIS)

    Kumakura, M.; Kaetsu, I.

    1984-01-01

    A new method is discussed for the immobilization of cellulase using porous polymer matrices, which were obtained by radiation polymerization of hydrophilic monomers. In this method, the immobilized enzyme matrix was prepared by enzyme absorption into the porous polymer matrix followed by a drying treatment. The enzyme activity of the immobilized enzyme matrix varied with the monomer concentration, the cooling rate of the monomer solution, and the hydrophilicity of the polymer matrix, reflecting the change in the nature of the porous structure of the polymer matrix. No leakage of the enzymes from the polymer matrix was observed in repeated batch enzyme reactions.

  14. WE-E-BRB-01: Personalized Motion Management Strategies for Pencil Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, X. [UT MD Anderson Cancer Center (United States)

    2016-06-15

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy. Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and the patient's respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect the interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy, including assessment of tumor motion and WET evaluation, will be presented first. As thoracic tumors have very different motion patterns from liver tumors, examples will be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both the treatment target and surrounding organs such as the spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: (1) motion analysis for individual patients with respect to the interplay effect; (2) interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS; (3) treatment planning margins for PBS; (4) the impact of proton dose calculation engines over heterogeneous treatment targets and surrounding organs. Disclosure: I have current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  15. WE-E-BRB-01: Personalized Motion Management Strategies for Pencil Beam Scanning Proton Therapy

    International Nuclear Information System (INIS)

    Zhu, X.

    2016-01-01

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy. Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and the patient's respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect the interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy, including assessment of tumor motion and WET evaluation, will be presented first. As thoracic tumors have very different motion patterns from liver tumors, examples will be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both the treatment target and surrounding organs such as the spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: (1) motion analysis for individual patients with respect to the interplay effect; (2) interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS; (3) treatment planning margins for PBS; (4) the impact of proton dose calculation engines over heterogeneous treatment targets and surrounding organs. Disclosure: I have current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  16. The Psoriatic Arthritis Impact of Disease 12-item questionnaire: equivalence, reliability, validity, and feasibility of the touch-screen administration versus the paper-and-pencil version

    Science.gov (United States)

    Salaffi, Fausto; Di Carlo, Marco; Carotti, Marina; Farah, Sonia; Gutierrez, Marwin

    2016-01-01

    Background: Over the last few years, there has been a shift toward a more patient-centered perspective of the disease by adopting patient-reported outcomes. Touch-screen formats are increasingly being used for data collection in routine care and research. Objectives: The aim of this study is to examine the equivalence, reliability, validity and respondent preference for a computerized touch-screen version of the Psoriatic Arthritis Impact of Disease 12-item (PsAID-12) questionnaire in comparison with the original paper-and-pencil version, in a cohort of patients with psoriatic arthritis (PsA). Methods: One hundred and fifty-nine patients with PsA completed both the touch screen- and the conventional paper-and-pencil administered PsAID-12 questionnaire. Agreement between formats was assessed by intraclass correlation coefficients. Spearman’s rho correlation coefficient was used to test convergent validity of the touch screen format of PsAID-12, while receiver operating characteristic curve analysis was performed to test discriminant validity. In order to assess the patient’s preference, the participants filled in an additional questionnaire. The time taken to complete both formats was measured. Results: A high concordance between the responses to the two modes of the PsAID-12 tested was found, with no significant mean differences. Intraclass correlation coefficients between data obtained for touch-screen and paper versions ranged from 0.801 to 0.962. There was a very high degree of correlation between the touch-screen format of PsAID-12 and composite disease activity indices (all at a P level touch-screen format of PsAID-12, assessed using the minimal disease activity – Outcome Measurements in Rheumatology Clinical Trials criteria, was very good, with an area under the receiver operating characteristic curve of 0.937 and a resulting cutoff value of 2.5. The touch-screen questionnaire was readily accepted and preferred. The mean time spent for completing the

  17. Study on the Seismic Response of a Portal Frame Structure Based on the Transfer Matrix Method of Multibody System

    Directory of Open Access Journals (Sweden)

    Jianguo Ding

    2014-11-01

    Full Text Available Portal frame structures are widely used in industrial building design but unfortunately are often damaged during an earthquake. As a result, a study on the seismic response of this type of structure is important to both human safety and future building designs. Traditionally, finite element methods such as the ANSYS and MIDAS have been used as the primary methods of computing the response of such a structure during an earthquake; however, these methods yield low calculation efficiencies. In this paper, the mechanical model of a single-story portal frame structure with two spans is constructed based on the transfer matrix method of multibody system (MS-TMM; both the transfer matrix of the components in the model and the total transfer matrix equation of the structure are derived, and the corresponding MATLAB program is compiled to determine the natural period and seismic response of the structure. The results show that the results based on the MS-TMM are similar to those obtained by ANSYS, but the calculation time of the MS-TMM method is only 1/20 of that of the ANSYS method. Additionally, it is shown that the MS-TMM method greatly increases the calculation efficiency while maintaining accuracy.

  18. A spot-matching method using cumulative frequency matrix in 2D gel images

    Science.gov (United States)

    Han, Chan-Myeong; Park, Joon-Ho; Chang, Chu-Seok; Ryoo, Myung-Chun

    2014-01-01

    A new method for spot matching in two-dimensional gel electrophoresis images using a cumulative frequency matrix is proposed. The method improves on the weak points of the previous method called ‘spot matching by topological patterns of neighbour spots’. It accumulates the frequencies of neighbour spot pairs produced through the entire matching process and determines spot pairs one by one in order of higher frequency. Spot matching by frequencies of neighbour spot pairs shows considerably better performance. In addition, it can give researchers an indication of whether the matching results can be trusted, which can save a great deal of effort in verifying the results. PMID:26019609
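
    A minimal sketch of the selection stage described above (hypothetical data structures; the neighbour-pattern comparison that generates the candidate pairs is not reproduced): candidate pairs are counted across the whole matching process and accepted greedily in order of decreasing cumulative frequency, never reusing a spot from either gel.

        from collections import Counter

        def select_pairs(candidate_pairs):
            # cumulative frequency "matrix" kept as a counter of (spot_in_gel_A, spot_in_gel_B) pairs
            freq = Counter(candidate_pairs)
            used_a, used_b, matches = set(), set(), []
            for (a, b), n in freq.most_common():          # highest frequency first
                if a not in used_a and b not in used_b:
                    matches.append((a, b, n))
                    used_a.add(a)
                    used_b.add(b)
            return matches

        pairs = [("a1", "b1"), ("a1", "b1"), ("a2", "b3"), ("a1", "b2"), ("a2", "b3")]
        print(select_pairs(pairs))                        # both frequent pairs are accepted once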

  19. A self-consistent nodal method in response matrix formalism for the multigroup diffusion equations

    International Nuclear Information System (INIS)

    Malambu, E.M.; Mund, E.H.

    1996-01-01

    We develop a nodal method for the multigroup diffusion equations, based on the transverse integration procedure (TIP). The efficiency of the method rests upon the convergence properties of a high-order multidimensional nodal expansion and upon numerical implementation aspects. The discrete 1D equations are cast in response matrix formalism. The derivation of the transverse leakage moments is self-consistent i.e. does not require additional assumptions. An outstanding feature of the method lies in the linear spatial shape of the local transverse leakage for the first-order scheme. The method is described in the two-dimensional case. The method is validated on some classical benchmark problems. (author)

  20. Efficient propagation of the hierarchical equations of motion using the matrix product state method

    Science.gov (United States)

    Shi, Qiang; Xu, Yang; Yan, Yaming; Xu, Meng

    2018-05-01

    We apply the matrix product state (MPS) method to propagate the hierarchical equations of motion (HEOM). It is shown that the MPS approximation works well for different types of problems, including boson and fermion baths. The MPS method based on the time-dependent variational principle is also found to be applicable to HEOM with over one thousand effective modes. Combining the flexibility of the HEOM in defining the effective modes and the efficiency of the MPS method may thus provide a promising tool for simulating quantum dynamics in condensed phases.

  1. The body and the pencil. Ten questions to Claudio Patané

    Directory of Open Access Journals (Sweden)

    Sebastiano Nucifora

    2014-05-01

    Full Text Available The relevance of matter in drawing. The body weight of the designer and his/her tools, the color stains in his/her fingers after s/he uses them. The sound of footsteps all over the place. The need to know, the desire to remember and cherish, but also to forget. The freedom to get lost while wandering around, notebook in hand and no street sign to rely on. Life drawing not meant as an ancient ritual, but as a necessary practice connecting people with the third dimension.  Ten questions to urban sketcher Claudio Patane share with the reader the experiences and point of view of a true wandering inquirer - among other things - of the urban scene. An artist working with paper and pencil in the digital age, in search of a different approach to digital technology: critical but not hostile, open to the social values of sharing and yet aware of the dangers concealed in its possible self-referential drift.

  2. Evaluation of a new pencil-type ionization chamber for dosimetry in computerized tomography beams

    International Nuclear Information System (INIS)

    Castro, Maysa C. de; Neves, Lucio P.; Silva, Natalia F. da; Santos, William de S.; Caldas, Linda V.E.

    2014-01-01

    For performing dosimetry in computed tomography (CT) beams, a pencil-type ionization chamber is used, since it has a uniform response to this type of beam. The common commercial chambers in Brazil have a sensitive volume length of 10 cm. Several studies of prototypes of this type of ionization chamber, using different materials and geometric configurations, have been conducted at the Instrument Calibration Laboratory (LCI) of the Nuclear and Energy Research Institute, and these showed results within internationally acceptable limits. These 10 cm ionization chambers are widely used nowadays; however, studies have revealed that they underestimate the dose values. In order to solve this problem, we developed a chamber with a sensitive volume length of 30 cm. As these are not yet very common and no study of their behavior has yet been performed under LCI conditions, it is important that the characteristics of these dosemeters, and the influence of their various components, are known. For this analysis, we will use the Monte Carlo code PENELOPE, freely distributed by the IAEA. This method has given results consistent with other codes. The results for this new prototype can be used in CT dosimetry at hospitals and at calibration laboratories such as the LCI.

  3. Elementary matrix theory

    CERN Document Server

    Eves, Howard

    1980-01-01

    The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri

  4. Hyperfine electron-nuclear interactions in the frame of the Density Functional and of the Density Matrix Methods

    International Nuclear Information System (INIS)

    Pavlov, R.L.; Pavlov, L.I.; Raychev, P.P.; Garistov, V.P.; Dimitrova-Ivanovich, M.

    2002-01-01

    The matrix elements and expectation values of the hyperfine interaction operators are presented in a form suitable for numerical implementation in density matrix methods. The electron-nuclear spin-spin (dipolar and contact) interactions are considered, as well as the interaction between nuclear spin and electron-orbital motions. These interactions from the effective Breit-Pauli Hamiltonian determine the hyperfine structure in ESR spectra and contribute to chemical shifts in NMR. Applying the Wigner-Eckart theorem in the irreducible tensor-operator technique and the spin-space separation scheme, the matrix elements and expectation values of these relativistic corrections are expressed in analytical form. The final results are presented as products, or sums of products, of factors determined by the spin and (or) angular momentum symmetry and a spatial part determined by the action of the symmetrized tensor-operators on the normalized matrix or function of the spin or charge distribution.

  5. Differential expression of matrix metalloproteinase-13 in mucinous and nonmucinous colorectal carcinomas.

    Science.gov (United States)

    Foda, Abd Al-Rahman Mohammad; El-Hawary, Amira K; Abdel-Aziz, Azza

    2013-08-01

    Colorectal carcinoma (CRC) is a major health problem all over the world. Mucinous CRCs are known to have a peculiar behavior and genetic derangements. This study aimed to investigate matrix metalloproteinase (MMP)-13 expression in mucinous and nonmucinous CRCs. We studied tumor tissue specimens from 150 patients with mucinous and nonmucinous CRC who underwent radical surgery from January 2007 to January 2012. High-density manual tissue microarrays were constructed using a modified mechanical pencil tip technique, and paraffin sections were submitted for immunohistochemistry using MMP-13. Statistical analysis was performed for clinical and pathological data of all studied cases together with MMP-13 expression in mucinous and nonmucinous groups. Mucinous carcinoma was significantly associated with young age, more depth of invasion, lymph node metastasis, and less peritumoral and intratumoral neutrophils. Nonmucinous carcinomas showed higher MMP-13 expression compared with mucinous carcinomas. Despite the negative or low expression of MMP-13, mucinous carcinomas had more depth of invasion and more frequency of lymph node metastasis than did nonmucinous carcinomas. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. PREDICTION OF RESERVOIR FLOW RATE OF DEZ DAM BY THE PROBABILITY MATRIX METHOD

    Directory of Open Access Journals (Sweden)

    Mohammad Hashem Kanani

    2012-12-01

    Full Text Available The data collected from the operation of existing storage reservoirs can offer valuable information for better allocation and management of fresh water rates for future use, in order to mitigate the effects of droughts. In this paper the long-term prediction of the water rate of the Dez reservoir (IRAN) is presented using the probability matrix method. Data are analyzed to find the probability matrix of water rates in the Dez reservoir based on the history of annual water entrance during the past and present years (40 years). The algorithm developed covers both the overflow and non-overflow conditions in the reservoir. Results of this study show that in non-overflow conditions the most critical case is equal to 75%. This means that, if the reservoir is empty (the stored water is less than 100 MCM) this year, it would also be empty with 75% probability next year. The stored water in the reservoir would be less than 300 MCM with 85% probability next year if the reservoir is empty this year. This percentage decreases to 70% next year if the water in the reservoir is less than 300 MCM this year. The percentage also decreases to 5% next year if the reservoir is full this year. In overflow conditions the most critical case is again equal to 75%. The reservoir volume would be less than 150 MCM with 90% probability next year if it is empty this year. This percentage decreases to 70% if its water volume is less than 300 MCM and to 55% if the water volume is less than 500 MCM this year. The results also show that if the probability matrix of water rates to a reservoir is multiplied by itself repeatedly, it converges to a constant probability matrix, which can be used to predict the long-term water rate of the reservoir. In other words, the probability matrix of the series of water rates approaches a steady probability matrix in the course of time, which reflects the hydrological behavior of the watershed and can easily be used for the long-term prediction of water storage in the downstream reservoirs.
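
    The long-term prediction step can be sketched in a few lines. The transition probabilities below are illustrative only, not the Dez reservoir data: repeatedly multiplying the annual transition matrix by itself drives every row toward the same stationary distribution, which is the constant probability matrix referred to above.

        import numpy as np

        # hypothetical 4-state annual transition matrix (rows: this year's storage
        # class, columns: next year's class); each row sums to one
        P = np.array([[0.75, 0.10, 0.10, 0.05],
                      [0.40, 0.30, 0.20, 0.10],
                      [0.20, 0.30, 0.30, 0.20],
                      [0.05, 0.20, 0.35, 0.40]])

        Pk = P.copy()
        for _ in range(100):          # repeated self-multiplication
            Pk = Pk @ P

        # all rows converge to the same stationary distribution, i.e. the
        # long-term probability of finding the reservoir in each storage class
        print(Pk[0])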

  7. Non destructive testing of PWR type fuel pencils by Foucault currents and metrology in the OSIRIS reactor pool

    International Nuclear Information System (INIS)

    Marchand, L.

    1984-09-01

    The device presented in this paper has been developed primarily to satisfy the requirements of nondestructive examination of irradiated fuel pencils. The techniques used allow the acquisition of reliable and detailed information on the state of the cans. Computerized data processing makes it possible to integrate any improvement quickly, while allowing research on the automatic diagnosis of Foucault (eddy) current signals to be carried out. At present, the equipment of the bench allows simultaneous visual, metrological and Foucault current examinations [fr]

  8. Hybrid matrix method for stable numerical analysis of the propagation of Dirac electrons in gapless bilayer graphene superlattices

    Science.gov (United States)

    Briones-Torres, J. A.; Pernas-Salomón, R.; Pérez-Álvarez, R.; Rodríguez-Vargas, I.

    2016-05-01

    Gapless bilayer graphene (GBG), like monolayer graphene, is a material system with unique properties, such as anti-Klein tunneling and intrinsic Fano resonances. These properties rely on the gapless parabolic dispersion relation and the chiral nature of bilayer graphene electrons. In addition, propagating and evanescent electron states coexist inherently in this material, giving rise to these exotic properties. In this sense, bilayer graphene is unique, since in most material systems in which Fano resonance phenomena are manifested an external source that provides extended states is required. However, from a numerical standpoint, the presence of evanescent-divergent states in the linear superposition of eigenfunctions representing the Dirac spinors leads to a numerical degradation (the so-called Ωd problem) in the practical applications of the standard Coefficient Transfer Matrix (K) method used to study charge transport properties in bilayer graphene based multi-barrier systems. We present here a straightforward procedure based on the hybrid compliance-stiffness matrix method (H) that can overcome this numerical degradation. Our results show that, in contrast to the standard matrix method, the proposed H method is suitable for studying the transmission and transport properties of electrons in GBG superlattices, since it remains numerically stable regardless of the size of the superlattice and the range of values taken by the input parameters: the energy and angle of the incident electrons, the barrier height and the thickness and number of barriers. We show that the matrix determinant can be used as a test of the numerical accuracy in real calculations.

  9. Parallel Programming Application to Matrix Algebra in the Spectral Method for Control Systems Analysis, Synthesis and Identification

    Directory of Open Access Journals (Sweden)

    V. Yu. Kleshnin

    2016-01-01

    Full Text Available The article describes the matrix algebra libraries based on modern parallel programming technologies for the Spectrum software, which can use a spectral method (in the spectral form of mathematical description) to analyse, synthesise and identify deterministic and stochastic dynamical systems. The developed matrix algebra libraries use the following technologies: OmniThreadLibrary, OpenMP, Intel Threading Building Blocks and Intel Cilk Plus for CPUs, and nVidia CUDA, OpenCL and Microsoft Accelerated Massive Parallelism for GPUs. The developed libraries support matrices with real elements (single and double precision). The matrix dimensions are limited only by the 32-bit or 64-bit memory model and the computer configuration. These libraries are general-purpose and can be used not only for the Spectrum software; they can also find application in other projects where there is a need to perform operations with large matrices. The article provides a comparative analysis of the developed libraries for various matrix operations (addition, subtraction, scalar multiplication, multiplication, powers of matrices, tensor multiplication, transpose, inverse matrix, finding a solution of a system of linear equations) through numerical experiments using different CPUs and GPUs. The article contains sample programs and performance test results for matrix multiplication, which requires the most computational resources compared with the other operations.

  10. Functional Techniques for Data Analysis

    Science.gov (United States)

    Tomlinson, John R.

    1997-01-01

    This dissertation develops a new general method for solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals yielding new methods that can be used to solve Prony's problem.
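
    As a reminder of what the Matrix Pencil special case looks like in practice, here is a short numpy sketch (not taken from the dissertation): with uniformly spaced samples y[n] = Σ_k a_k z_k^n, the poles z_k are recovered as eigenvalues of the pencil built from two shifted Hankel matrices, and the amplitudes follow from a linear least-squares fit. The pencil parameter choice and the largest-magnitude truncation used below are common simplifying assumptions.

        import numpy as np

        def matrix_pencil(y, M, L=None):
            # Estimate the M complex poles z_k and amplitudes a_k of y[n] = sum_k a_k * z_k**n.
            y = np.asarray(y, dtype=complex)
            N = len(y)
            L = L or N // 2                                  # pencil parameter; N/3 <= L <= N/2 is typical
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
            Y0, Y1 = Y[:, :-1], Y[:, 1:]                     # shifted sub-matrices forming the pencil Y1 - z*Y0
            lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
            z = lam[np.argsort(-np.abs(lam))][:M]            # keep the M dominant eigenvalues as poles
            V = np.vander(z, N, increasing=True).T           # Vandermonde system for the amplitudes
            a, *_ = np.linalg.lstsq(V, y, rcond=None)
            return z, a

        # quick check with two damped complex exponentials
        n = np.arange(50)
        y = 1.5 * (0.9 * np.exp(1j * 0.4)) ** n + 0.7 * (0.95 * np.exp(-1j * 1.1)) ** n
        z, a = matrix_pencil(y, M=2)                         # recovers the poles and amplitudes (up to ordering)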

  11. Sintering by infiltration of loose mixture of powders, a method for metal matrix composite elaboration

    International Nuclear Information System (INIS)

    Constantinescu, V.; Orban, R.; Colan, H.

    1993-01-01

    Starting from the observation that Sintering by Infiltration of Loose Mixture of Powders offers large possibilities for the elaboration of both complex-shaped and large-dimension Particulate Reinforced Metal Matrix Composite components, its mechanism was investigated in comparison with that of classical melt infiltration. Appropriate measures were established as necessary in order to prevent an excessive hydrostatic flow of the melt and, consequently, dispersion of the reinforcement particles, as well as to promote wetting in both the infiltration and the liquid-phase sintering stages of the process. Some experimental results from the application of the method to the elaboration of fusion tungsten carbide and diamond reinforced metal matrix composites are also presented. (orig.)

  12. The Matrix Method of Representation, Analysis and Classification of Long Genetic Sequences

    Directory of Open Access Journals (Sweden)

    Ivan V. Stepanyan

    2017-01-01

    Full Text Available The article is devoted to a matrix method of comparative analysis of long nucleotide sequences by means of presenting each sequence in the form of three digital binary sequences. This method uses a set of symmetries of biochemical attributes of nucleotides. It also uses the possibility of presentation of every whole set of N-mers as one of the members of a Kronecker family of genetic matrices. With this method, a long nucleotide sequence can be visually represented as an individual fractal-like mosaic or another regular mosaic of binary type. In contrast to natural nucleotide sequences, artificial random sequences give non-regular patterns. Examples of binary mosaics of long nucleotide sequences are shown, including cases of human chromosomes and penicillins. The obtained results are then discussed.
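
    One plausible reading of the three binary sub-alphabets can be sketched as follows; purine/pyrimidine, amino/keto and strong/weak hydrogen bonding are the usual biochemical attribute pairs, but the exact attribute-to-bit assignment used by the author is an assumption here, and the Kronecker-matrix mosaic construction itself is not reproduced.

        # three binary biochemical attributes of the four nucleotides (assumed mapping)
        PURINE = {'A': 1, 'G': 1, 'C': 0, 'T': 0}   # purines vs pyrimidines
        AMINO  = {'A': 1, 'C': 1, 'G': 0, 'T': 0}   # amino vs keto group
        STRONG = {'C': 1, 'G': 1, 'A': 0, 'T': 0}   # 3 vs 2 hydrogen bonds

        def to_binary_tracks(seq):
            # map a nucleotide string to three parallel 0/1 sequences
            seq = seq.upper()
            return ([PURINE[b] for b in seq],
                    [AMINO[b] for b in seq],
                    [STRONG[b] for b in seq])

        tracks = to_binary_tracks("ATGCGTAC")
        print(tracks)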

  13. Sensitive and selective determination of Cu2+ at D-penicillamine functionalized nano-cellulose modified pencil graphite electrode

    Science.gov (United States)

    Taheri, M.; Ahour, F.; Keshipour, S.

    2018-06-01

    A novel electrochemical sensor based on a D-penicillamine anchored nano-cellulose (DPA-NC) modified pencil graphite electrode was fabricated and used for the highly selective and sensitive determination of copper (II) ions at picomolar concentrations by the square-wave adsorptive stripping voltammetric (SWV) method. The modified electrode showed an increased SWV response compared to the bare and NC-modified electrodes, which may be related to the porous structure of the modifier along with the formation of a complex between Cu2+ ions and the nitrogen- or oxygen-containing groups in DPA-NC. The various experimental parameters that influence the performance of the sensor were investigated and optimized. Under the optimized conditions, the DPA-NC modified electrode was used for the analysis of Cu2+ in the concentration range from 0.2 to 50 pM, with a detection limit of 0.048 pM and good stability, repeatability, and selectivity. Finally, the practical applicability of DPA-NC-PGE was confirmed by measuring trace amounts of Cu (II) in tap and river water samples.

  14. Bulk metallic glass matrix composites

    International Nuclear Information System (INIS)

    Choi-Yim, H.; Johnson, W.L.

    1997-01-01

    Composites with a bulk metallic glass matrix were synthesized and characterized. This was made possible by the recent development of bulk metallic glasses that exhibit high resistance to crystallization in the undercooled liquid state. In this letter, experimental methods for processing metallic glass composites are introduced. Three different bulk metallic glass forming alloys were used as the matrix materials. Both ceramics and metals were introduced as reinforcement into the metallic glass. The metallic glass matrix remained amorphous after adding up to a 30 vol% fraction of particles or short wires. X-ray diffraction patterns of the composites show only peaks from the second phase particles superimposed on the broad diffuse maxima from the amorphous phase. Optical micrographs reveal uniformly distributed particles in the matrix. The glass transition of the amorphous matrix and the crystallization behavior of the composites were studied by calorimetric methods. copyright 1997 American Institute of Physics

  15. Measurement of the top quark mass in the dilepton final state using the matrix element method

    Energy Technology Data Exchange (ETDEWEB)

    Grohsjean, Alexander [Ludwig Maximilian Univ., Munich (Germany)

    2008-12-15

    The top quark, discovered in 1995 by the CDF and D0 experiments at the Fermilab Tevatron Collider, is the heaviest known fundamental particle. The precise knowledge of its mass yields important constraints on the mass of the yet-unobserved Higgs boson and allows one to probe for physics beyond the Standard Model. The first measurement of the top quark mass in the dilepton channel with the Matrix Element method at the D0 experiment is presented. After a short description of the experimental environment and the reconstruction chain from hits in the detector to physical objects, a detailed review of the Matrix Element method is given. The Matrix Element method is based on the likelihood to observe a given event under the assumption of the quantity to be measured, e.g. the mass of the top quark. The method has undergone significant modifications and improvements compared to previous measurements in the lepton+jets channel: the two undetected neutrinos require a new reconstruction scheme for the four-momenta of the final state particles, the small event sample demands the modeling of additional jets in the signal likelihood, and a new likelihood is designed to account for the main source of background containing tauonic Z decay. The Matrix Element method is validated on Monte Carlo simulated events at the generator level. For the measurement, calibration curves are derived from events that are run through the full D0 detector simulation. The analysis makes use of the Run II data set recorded between April 2002 and May 2008, corresponding to an integrated luminosity of 2.8 fb⁻¹. A total of 107 t$\bar{t}$ candidate events with one electron and one muon in the final state are selected. Applying the Matrix Element method to this data set, the top quark mass is measured to be m_top(Run IIa) = 170.6 ± 6.1 (stat.) +2.1/−1.5 (syst.) GeV; m_top(Run IIb) = 174.1 ± 4.4 (stat.) +2.5/−1.8 (syst.) GeV; m

  16. No-cost manual method for preparation of tissue microarrays having high quality comparable to semiautomated methods.

    Science.gov (United States)

    Foda, Abd Al-Rahman Mohammad

    2013-05-01

    Manual tissue microarray (TMA) construction had been introduced to avoid the high cost of automated and semiautomated techniques. The cheapest and simplest technique for constructing manual TMA was that of using mechanical pencil tips. This study was carried out to modify this method, aiming to raise its quality to reach that of expensive ones. Some modifications were introduced to Shebl's technique. Two conventional mechanical pencil tips of different diameters were used to construct the recipient blocks. A source of mild heat was used, and blocks were incubated at 38°C overnight. With our modifications, 3 high-density TMA blocks were constructed. We successfully performed immunostaining without substantial tissue loss. Our modifications increased the number of cores per block and improved the stability of the cores within the paraffin block. This new, modified technique is a good alternative for expensive machines in many laboratories.

  17. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

    A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. Utilizing optimized kernels whenever possible, the resulting performance of such framework can be shown to be competitive with optimized Fortran programs.
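    The central design point — one version of an iterative algorithm written against an abstract matrix interface rather than per-format code — can be illustrated outside C++ as well. The Python sketch below is an analogy, not the library described: a conjugate gradient that only requires its operand to support a matrix-vector product, so dense arrays, sparse matrices and user-defined operators all reuse the same routine.

```python
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """CG for symmetric positive definite A; A only needs to support A @ x,
    so the same code works for any matrix layout or format."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# The same solver applied to two different "matrix formats"
n = 50
dense = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
sparse = sp.csr_matrix(dense)
b = np.ones(n)
print(np.allclose(conjugate_gradient(dense, b), conjugate_gradient(sparse, b)))
```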

  18. Direct method of solving finite difference nonlinear equations for multicomponent diffusion in a gas centrifuge

    International Nuclear Information System (INIS)

    Potemki, Valeri G.; Borisevich, Valentine D.; Yupatov, Sergei V.

    1996-01-01

    This paper describes the next evolution step in the development of the direct method for solving systems of Nonlinear Algebraic Equations (SNAE). These equations arise from the finite difference approximation of the original nonlinear partial differential equations (PDE). The method has been extended to SNAE with three variables. The solution of the SNAE is based on the Reiterating General Singular Value Decomposition of rectangular matrix pencils (RGSVD algorithm). In contrast to the computer algebra algorithm in integer arithmetic based on reduction to a Groebner basis, this algorithm works in floating-point arithmetic and realizes the reduction to the Kronecker form. The possibilities of the method are illustrated by solving the one-dimensional diffusion equation for a 3-component model isotope mixture in a gas centrifuge. The implicit scheme for the finite difference equations is realized without simplifying the nonlinear properties of the original equations. The technique offered provides convergence to the solution in a single run. The Toolbox SNAE is developed in the framework of the high performance numeric computation and visualization software MATLAB. It includes more than 30 modules in the MATLAB language for solving SNAE with two and three variables. (author)

  19. Adaptive beamlet-based finite-size pencil beam dose calculation for independent verification of IMRT and VMAT.

    Science.gov (United States)

    Park, Justin C; Li, Jonathan G; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray

    2015-04-01

    The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of the clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with a combination of different-sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes were 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphics processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (∼12 segments) and a volumetric modulated arc

  20. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    Full Text Available In this paper, three kinds of coefficient matrices (the channel matrix, the measurement matrix, and the collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among the channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of the unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  1. Matrix of transmission in structural dynamics

    International Nuclear Information System (INIS)

    Mukherjee, S.

    1975-01-01

    The problem of close-coupled systems and cantilever-type buildings can be treated efficiently by means of the very general and versatile method of the transmission matrix. The expression 'matrix of transmission' is used to point out the fact that the method to be described differs fundamentally from another method related to matrix calculus that is also successfully used in vibration problems. In this method, forces and displacements are introduced as the 'unknowns' of the problem. The 'matrix of transmission' relates these quantities at one point of the structure to those at the neighbouring point. The natural frequencies of a freely vibrating elastic system can be found by applying the proper end conditions. The end conditions require the frequency determinant to vanish. Using a suitable numerical method, the natural frequencies and mode shapes are determined by making a frequency sweep within the range of interest. Results of the analysis of a typical nuclear building by this method show very close agreement with the results obtained by using the ASKA and SAP IV programs
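    A minimal sketch of the transfer-matrix idea for a close-coupled chain (not the nuclear-building model analysed in the paper): the state vector (displacement, force) is carried across each spring and mass, the fixed-end and free-end conditions are imposed, and a frequency sweep locates the zeros of the resulting frequency determinant. The two-mass chain and its stiffness values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def free_end_force(omega, masses, stiffnesses):
    """Carry the state (displacement, force) from the fixed end of a spring-mass
    chain to the free end and return the residual end force."""
    state = np.array([0.0, 1.0])                     # fixed end: x = 0, unit force
    for k, m in zip(stiffnesses, masses):
        state = np.array([[1.0, 1.0 / k], [0.0, 1.0]]) @ state        # across spring
        state = np.array([[1.0, 0.0], [-omega**2 * m, 1.0]]) @ state   # across mass
    return state[1]                                  # must vanish at the free end

masses = [1.0, 1.0]                                  # two equal masses and springs,
stiffnesses = [1.0, 1.0]                             # fixed-free chain

# Frequency sweep: bracket sign changes of the end force, then refine each root
omegas = np.linspace(0.01, 3.0, 3000)
vals = [free_end_force(w, masses, stiffnesses) for w in omegas]
roots = [brentq(free_end_force, omegas[i], omegas[i + 1], args=(masses, stiffnesses))
         for i in range(len(omegas) - 1) if vals[i] * vals[i + 1] < 0]
print(roots)   # analytic natural frequencies: sqrt((3 -/+ sqrt(5))/2) ~ 0.618, 1.618
```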

  2. Design of a QA method to characterize submillimeter-sized PBS beam properties using a 2D ionization chamber array

    Science.gov (United States)

    Lin, Yuting; Bentefour, Hassan; Flanz, Jacob; Kooy, Hanne; Clasie, Benjamin

    2018-05-01

    Pencil beam scanning (PBS) periodic quality assurance (QA) programs ensure the beam delivered to patients is within technical specifications. Two critical specifications for PBS delivery are the beam width and position. The aim of this study is to investigate whether a 2D ionization chamber array, such as the MatriXX detector (IBA Dosimetry, Schwarzenbruck, Germany), can be used to characterize submillimeter-sized PBS beam properties. The motivation is to use standard equipment, which may have pixel spacing coarser than the pencil beam size, and simplify the QA workflow. The MatriXX pixels are cylindrical in shape with 4.5 mm diameter and are spaced 7.62 mm from center to center. Two major effects limit the ability to use the MatriXX to measure the spot position and width accurately. The first effect is that too few pixels sample the Gaussian-shaped pencil beam profile, and the second effect is volume averaging of the Gaussian profile over the pixel sensitive volumes. We designed a method that overcomes both limitations and hence enables the use of the MatriXX to characterize submillimeter-sized PBS beam properties. This method uses a cross-like irradiation pattern that is designed to increase the number of sampling data points and a modified Gaussian fitting technique to correct for volume averaging effects. Detector signals were calculated in this study, and random noise and setup errors were added to simulate measured data. With the techniques developed in this work, the MatriXX detector can be used to characterize the position and width of submillimeter, σ = 0.7 mm, sized pencil beams with uncertainty better than 3% relative to σ. With the irradiation only covering 60% of the MatriXX, the position and width of σ = 0.9 mm sized pencil beams can be determined with uncertainty better than 3% relative to σ. If one were not to use a cross-like irradiation pattern, then the position and width of σ = 3.6 mm sized pencil beams
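    A one-dimensional sketch of the volume-averaging correction idea, using the pixel pitch and width quoted above: each pixel reports the Gaussian profile integrated over its sensitive width, so the fitting model integrates the Gaussian over each pixel instead of sampling it at the pixel centre. The beam parameters are illustrative, the cross-like irradiation pattern needed for sub-millimetre beams is not reproduced here, and this is not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

PITCH = 7.62     # pixel centre-to-centre spacing (mm), from the abstract
WIDTH = 4.5      # pixel sensitive width (mm), from the abstract

def averaged_gaussian(x, amp, mu, sigma):
    """Mean signal of a Gaussian profile averaged over a pixel centred at x."""
    a = (x - WIDTH / 2 - mu) / (np.sqrt(2) * sigma)
    b = (x + WIDTH / 2 - mu) / (np.sqrt(2) * sigma)
    return amp * (erf(b) - erf(a)) / 2

# Simulated pixel readings for a sigma = 3.6 mm beam offset by 1.3 mm
centres = np.arange(-5, 6) * PITCH
true = dict(amp=1.0, mu=1.3, sigma=3.6)
signal = averaged_gaussian(centres, **true)

popt, _ = curve_fit(averaged_gaussian, centres, signal, p0=(1.0, 0.0, 2.0))
print(dict(zip(("amp", "mu", "sigma"), popt)))   # recovers the true parameters
```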

  3. Neutrinoless double-β decay matrix elements in large shell-model spaces with the generator-coordinate method

    Science.gov (United States)

    Jiao, C. F.; Engel, J.; Holt, J. D.

    2017-11-01

    We use the generator-coordinate method (GCM) with realistic shell-model interactions to closely approximate full shell-model calculations of the matrix elements for the neutrinoless double-β decay of 48Ca, 76Ge, and 82Se. We work in one major shell for the first isotope, in the f5/2 p g9/2 space for the second and third, and finally in two major shells for all three. Our coordinates include not only the usual axial deformation parameter β, but also the triaxiality angle γ and neutron-proton pairing amplitudes. In the smaller model spaces our matrix elements agree well with those of full shell-model diagonalization, suggesting that our Hamiltonian-based GCM captures most of the important valence-space correlations. In two major shells, where exact diagonalization is not currently possible, our matrix elements are only slightly different from those in a single shell.

  4. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Full Text Available Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to the rotary inertia and the shear and axial deformation influences. The natural frequencies of the springs are determined by the use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the scaling and squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been compared with FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
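    The element transfer matrix above is evaluated with the scaling and squaring method and Padé approximations; the sketch below shows that combination for a generic matrix exponential (a textbook version, not the authors' implementation) and checks it against scipy.linalg.expm. The test matrix is illustrative.

```python
import numpy as np
from scipy.linalg import expm

def expm_pade_scaling_squaring(A, q=6):
    """exp(A) via scaling and squaring with a diagonal [q/q] Pade approximant."""
    A = np.asarray(A, dtype=float)
    # Scale A by 2**-s so the Pade approximant is accurate
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, np.inf), 1e-16)))) + 1)
    B = A / 2.0**s
    n = A.shape[0]
    N = np.eye(n)
    D = np.eye(n)
    term = np.eye(n)
    c = 1.0
    for j in range(1, q + 1):                 # N(B) = sum c_j B^j, D(B) = N(-B)
        c *= (q - j + 1) / ((2 * q - j + 1) * j)
        term = term @ B
        N += c * term
        D += (-1) ** j * c * term
    F = np.linalg.solve(D, N)                 # Pade approximant of exp(B)
    for _ in range(s):                        # undo the scaling by repeated squaring
        F = F @ F
    return F

A = np.array([[0.0, 1.0], [-4.0, -0.5]])      # small illustrative state matrix
print(np.allclose(expm_pade_scaling_squaring(A), expm(A)))
```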

  5. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    Science.gov (United States)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The single-scattering properties of ice clouds can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed using the parallelized IITM and compared to the counterparts using the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over the complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation reaches the geometric optics regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice clouds over a wide spectral range.

  6. The Virasoro algebra in integrable hierarchies and the method of matrix models

    International Nuclear Information System (INIS)

    Semikhatov, A.M.

    1992-01-01

    The action of the Virasoro algebra on hierarchies of nonlinear integrable equations, and also the structure and consequences of Virasoro constraints on these hierarchies, are studied. It is proposed that a broad class of hierarchies, restricted by Virasoro constraints, can be defined in terms of dressing operators hidden in the structure of integrable systems. The Virasoro-algebra representation constructed on the dressing operators displays a number of analogies with structures in conformal field theory. The formulation of the Virasoro constraints that stems from this representation makes it possible to translate into the language of integrable systems a number of concepts from the method of the 'matrix models' that describe nonperturbative quantum gravity, and, in particular, to realize a 'hierarchical' version of the double scaling limit. From the Virasoro constraints written in terms of the dressing operators generalized loop equations are derived, and this makes it possible to do calculations on a reconstruction of the field-theoretical description. The reduction of the Kadomtsev-Petviashvili (KP) hierarchy, subject to Virasoro constraints, to generalized Korteweg-deVries (KdV) hierarchies is implemented, and the corresponding representation of the Virasoro algebra on these hierarchies is found both in the language of scalar differential operators and in the matrix formalism of Drinfel'd and Sokolov. The string equation in the matrix formalism does not replicate the structure of the scalar string equation. The symmetry algebras of the KP and N-KdV hierarchies restricted by Virasoro constraints are calculated: a relationship is established with algebras from the family W_∞(J) of infinite W-algebras

  7. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(lambda) as a function of the parameter lambda where we recover the Fock Hamiltonian at lambda=0 and we recover the fully correlated Hamiltonian at lambda=1. We explore using the accuracy of perturbation theory at small lambda to correct the 2-RDM variational energies at lambda=1 where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for lambda in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. (c) 2004 American Institute of Physics.

  8. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
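    The sandwich construction itself is generic: the "bread" is the inverse of an information matrix and the "meat" is the cross-product of score contributions. The sketch below illustrates the algebra for ordinary least squares with heteroscedastic errors — a far simpler model than a CDM, used only to show the construction — where it reduces to the familiar heteroscedasticity-robust covariance estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])
sigma = 0.5 + np.abs(X[:, 1])                  # heteroscedastic noise level
y = X @ beta + rng.normal(scale=sigma)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

bread = np.linalg.inv(X.T @ X)                 # inverse "information"
meat = X.T @ (resid[:, None] ** 2 * X)         # cross-product of score contributions
sandwich = bread @ meat @ bread                # robust covariance of beta_hat
naive = bread * (resid @ resid) / (n - p)      # model-based covariance, for contrast

print(np.sqrt(np.diag(sandwich)))              # robust standard errors
print(np.sqrt(np.diag(naive)))                 # naive standard errors
```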

  9. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    Science.gov (United States)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track the time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
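    For orientation, the sketch below is standard recursive least squares with plain exponential forgetting — the baseline behind the family of methods being compared, not the REFACM or directional-forgetting variants studied in the paper, which modify how the covariance matrix is updated. The simulated model, forgetting factor and parameter jump are illustrative.

```python
import numpy as np

def rls_exponential_forgetting(phis, ys, lam=0.98, p0=1e3):
    """Track time-varying parameters theta in y_t = phi_t' theta_t + noise."""
    n = phis.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)                          # covariance matrix of the estimate
    history = []
    for phi, y in zip(phis, ys):
        K = P @ phi / (lam + phi @ P @ phi)     # gain
        theta = theta + K * (y - phi @ theta)   # estimate update
        P = (P - np.outer(K, phi) @ P) / lam    # forgetting keeps P from collapsing
        history.append(theta.copy())
    return np.array(history)

# Parameters jump halfway through the data set
rng = np.random.default_rng(1)
T = 400
phis = rng.normal(size=(T, 2))
theta_true = np.where(np.arange(T)[:, None] < T // 2, [1.0, -2.0], [3.0, 0.5])
ys = np.einsum("ij,ij->i", phis, theta_true) + 0.05 * rng.normal(size=T)

est = rls_exponential_forgetting(phis, ys)
print(est[-1])   # should end up close to the post-jump values [3.0, 0.5]
```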

  10. De novo Assembly and Analysis of the Chilean Pencil Catfish Trichomycterus areolatus Transcriptome

    Science.gov (United States)

    Schulze, Thomas T.; Ali, Jonathan M.; Bartlett, Maggie L.; McFarland, Madalyn M.; Clement, Emalie J.; Won, Harim I.; Sanford, Austin G.; Monzingo, Elyssa B.; Martens, Matthew C.; Hemsley, Ryan M.; Kumar, Sidharta; Gouin, Nicolas; Kolok, Alan S.; Davis, Paul H.

    2016-01-01

    Trichomycterus areolatus is an endemic species of pencil catfish that inhabits the riffles and rapids of many freshwater ecosystems of Chile. Despite its unique adaptation to Chile's high gradient watersheds and therefore potential application in the investigation of ecosystem integrity and environmental contamination, relatively little is known regarding the molecular biology of this environmental sentinel. Here, we detail the assembly of the Trichomycterus areolatus transcriptome, a molecular resource for the study of this organism and its molecular response to the environment. RNA-Seq reads were obtained by next-generation sequencing with an Illumina® platform and processed using PRINSEQ. The transcriptome assembly was performed using TRINITY assembler. Transcriptome validation was performed by functional characterization with KOG, KEGG, and GO analyses. Additionally, differential expression analysis highlights sex-specific expression patterns, and a list of endocrine and oxidative stress related transcripts are included. PMID:27672404

  11. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
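    A small sketch of Frobenius-norm NMF by multiplicative updates in which the basis matrix is renormalized at every step — here with the maximum norm that performed best in the study — and the coefficient matrix is rescaled so the product is unchanged; cluster labels are then read off as the dominant component of each sample. The toy data, rank and iteration count are illustrative, and this is not the authors' evaluation code.

```python
import numpy as np

def nmf_max_norm(V, k, iters=500, seed=0):
    """Frobenius NMF V ~ W @ H with multiplicative updates and max-norm
    normalization of the basis columns (H rescaled to keep W @ H fixed)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        scale = W.max(axis=0) + eps             # max norm of each basis column
        W /= scale                              # normalize the basis ...
        H *= scale[:, None]                     # ... and compensate in H
    return W, H

# Toy "expression matrix": two sample groups with different dominant patterns
rng = np.random.default_rng(2)
block = np.block([[np.full((20, 15), 5.0), np.full((20, 15), 0.5)],
                  [np.full((20, 15), 0.5), np.full((20, 15), 5.0)]])
V = block + rng.random(block.shape)

W, H = nmf_max_norm(V, k=2)
print(H.argmax(axis=0))   # cluster assignment of each sample (column)
```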

  12. Simplified microstrip discontinuity modeling using the transmission line matrix method interfaced to microwave CAD

    Science.gov (United States)

    Thompson, James H.; Apel, Thomas R.

    1990-07-01

    A technique for modeling microstrip discontinuities is presented which is derived from the transmission line matrix method of solving three-dimensional electromagnetic problems. In this technique the microstrip patch under investigation is divided into an integer number of square and half-square (triangle) subsections. An equivalent lumped-element model is calculated for each subsection. These individual models are then interconnected as dictated by the geometry of the patch. The matrix of lumped elements is then solved using either of two microwave CAD software interfaces with each port properly defined. Closed-form expressions for the lumped-element representation of the individual subsections are presented and experimentally verified through the X-band frequency range. A model demonstrating the use of symmetry and block construction of a circuit element is discussed, along with computer program development and the CAD software interface.

  13. Photonic band structures solved by a plane-wave-based transfer-matrix method.

    Science.gov (United States)

    Li, Zhi-Yuan; Lin, Lan-Lan

    2003-04-01

    Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes to solve the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion over the dielectric function and electromagnetic fields with discontinuities occurring at the boundary of different material domains have been employed to accelerate the convergence of numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergency of this different approach over the conventional plane-wave expansion method.

  14. Photonic band structures solved by a plane-wave-based transfer-matrix method

    International Nuclear Information System (INIS)

    Li Zhiyuan; Lin Lanlan

    2003-01-01

    Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes to solve the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion over the dielectric function and electromagnetic fields with discontinuities occurring at the boundary of different material domains have been employed to accelerate the convergence of numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergency of this different approach over the conventional plane-wave expansion method

  15. The Matrix Pencil and its Applications to Speech Processing

    Science.gov (United States)

    2007-03-01

    example in ascending order, to form the ordered frequency list vector F_v. The n×1 vector F_v is then input to the Column Duplicator, which forms the...already is. With regard to the pre-processing that has been described, one could also pre-condition the input frequency list based on phase and decay

  16. Disintegration of graphite matrix from the simulative high temperature gas-cooled reactor fuel element by electrochemical method

    International Nuclear Information System (INIS)

    Tian Lifang; Wen Mingfen; Li Linyan; Chen Jing

    2009-01-01

    An electrochemical method with salt as the electrolyte has been studied to disintegrate the graphite matrix of simulative high temperature gas-cooled reactor fuel elements. Ammonium nitrate was experimentally chosen as the appropriate electrolyte. The volume average diameter of the disintegrated graphite fragments is about 100 μm and the maximal value is less than 900 μm. After disintegration, the weight of the graphite is found to increase by about 20% without the release of a large amount of CO2, probably owing to partial oxidation of the graphite in the electrochemical process. The present work indicates that the improved electrochemical method has the potential to reduce the secondary nuclear waste and is a promising option for disintegrating the graphite matrix of high temperature gas-cooled reactor spent fuel elements at the head-end of reprocessing.

  17. H∞ Filtering for Dynamic Compensation of Self-Powered Neutron Detectors - A Linear Matrix Inequality Based Method -

    Energy Technology Data Exchange (ETDEWEB)

    Park, M.G.; Kim, Y.H.; Cha, K.H.; Kim, M.K. [Korea Electric Power Research Institute, Taejon (Korea)

    1999-07-01

    A method is described to develop an H∞ filtering method for the dynamic compensation of self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for both continuous- and discrete-time models. The filter gains are optimized in the sense of the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via the convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in view of the filter response time and the filter design efficiency. (author). 15 refs., 4 figs., 3 tabs.

  18. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    Science.gov (United States)

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to evaluate empirically confusion matrices in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
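    A minimal sketch of the confusion matrix calculation itself, on synthetic labels rather than the cattle data: paired device/reference classifications are tallied into the matrix, from which sensitivity, specificity and accuracy follow directly. The simulated error rates are illustrative.

```python
import numpy as np

def confusion_matrix(reference, device, labels):
    """Rows: reference (video) class; columns: class reported by the device."""
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for r, d in zip(reference, device):
        cm[idx[r], idx[d]] += 1
    return cm

labels = ["feeding", "other"]
rng = np.random.default_rng(3)
reference = rng.choice(labels, size=1000, p=[0.4, 0.6])
# Simulated device with 90% sensitivity and 95% specificity for "feeding"
flip = rng.random(1000)
device = [("other" if f < 0.10 else "feeding") if r == "feeding"
          else ("feeding" if f < 0.05 else "other")
          for r, f in zip(reference, flip)]

cm = confusion_matrix(reference, device, labels)
tp, fn, fp, tn = cm[0, 0], cm[0, 1], cm[1, 0], cm[1, 1]
print(cm)
print("sensitivity", tp / (tp + fn),
      "specificity", tn / (tn + fp),
      "accuracy", (tp + tn) / cm.sum())
```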

  19. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.

  20. A Monte Carlo pencil beam scanning model for proton treatment plan simulation using GATE/GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Grevillot, L; Freud, N; Sarrut, D [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, Lyon (France); Bertrand, D; Dessy, F, E-mail: loic.grevillot@creatis.insa-lyon.fr [IBA, B-1348, Louvain-la Neuve (Belgium)

    2011-08-21

    This work proposes a generic method for modeling scanned ion beam delivery systems, without simulation of the treatment nozzle and based exclusively on beam data library (BDL) measurements required for treatment planning systems (TPS). To this aim, new tools dedicated to treatment plan simulation were implemented in the Gate Monte Carlo platform. The method was applied to a dedicated nozzle from IBA for proton pencil beam scanning delivery. Optical and energy parameters of the system were modeled using a set of proton depth-dose profiles and spot sizes measured at 27 therapeutic energies. For further validation of the beam model, specific 2D and 3D plans were produced and then measured with appropriate dosimetric tools. Dose contributions from secondary particles produced by nuclear interactions were also investigated using field size factor experiments. Pristine Bragg peaks were reproduced with 0.7 mm range and 0.2 mm spot size accuracy. A 32 cm range spread-out Bragg peak with 10 cm modulation was reproduced with 0.8 mm range accuracy and a maximum point-to-point dose difference of less than 2%. A 2D test pattern consisting of a combination of homogeneous and high-gradient dose regions passed a 2%/2 mm gamma index comparison for 97% of the points. In conclusion, the generic modeling method proposed for scanned ion beam delivery systems was applicable to an IBA proton therapy system. The key advantage of the method is that it only requires BDL measurements of the system. The validation tests performed so far demonstrated that the beam model achieves clinical performance, paving the way for further studies toward TPS benchmarking. The method involves new sources that are available in the new Gate release V6.1 and could be further applied to other particle therapy systems delivering protons or other types of ions like carbon.

  1. Experimental characterization and physical modelling of the dose distribution of scanned proton pencil beams

    International Nuclear Information System (INIS)

    Pedroni, E; Scheib, S; Boehringer, T; Coray, A; Grossmann, M; Lin, S; Lomax, A

    2005-01-01

    In this paper we present the pencil beam dose model used for treatment planning at the PSI proton gantry, the only system presently applying proton therapy with a beam scanning technique. The scope of the paper is to give a general overview on the various components of the dose model, on the related measurements and on the practical parametrization of the results. The physical model estimates from first physical principles absolute dose normalized to the number of incident protons. The proton beam flux is measured in practice by plane-parallel ionization chambers (ICs) normalized to protons via Faraday-cup measurements. It is therefore possible to predict and deliver absolute dose directly from this model without other means. The dose predicted in this way agrees very well with the results obtained with ICs calibrated in a cobalt beam. Emphasis is given in this paper to the characterization of nuclear interaction effects, which play a significant role in the model and are the major source of uncertainty in the direct estimation of the absolute dose. Nuclear interactions attenuate the primary proton flux, they modify the shape of the depth-dose curve and produce a faint beam halo of secondary dose around the primary proton pencil beam in water. A very simple beam halo model has been developed and used at PSI to eliminate the systematic dependences of the dose observed as a function of the size of the target volume. We show typical results for the relative (using a CCD system) and absolute (using calibrated ICs) dosimetry, routinely applied for the verification of patient plans. With the dose model including the nuclear beam halo we can predict quite precisely the dose directly from treatment planning without renormalization measurements, independently of the dose, shape and size of the dose fields. This applies also to the complex non-homogeneous dose distributions required for the delivery of range-intensity-modulated proton therapy, a novel therapy technique

  2. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    Science.gov (United States)

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome this weakness. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both the traditional tensor-based methods and the vector-based regression methods, our proposed solution achieves better performance for matrix data classification.

  3. Employing the Matrix Method as a tool for the analysis of qualitative research data in the business domain

    NARCIS (Netherlands)

    Groenland, E.A.G.

    2014-01-01

    This article addresses three issues: 1. It explains the characteristics and the process of the analysis of empirical, qualitative data. 2. It introduces a method for qualitative analysis, as relevant to business research, i.e., the Matrix Method. 3. It presents a coherent approach about structuring

  4. Representing Matrix Cracks Through Decomposition of the Deformation Gradient Tensor in Continuum Damage Mechanics Methods

    Science.gov (United States)

    Leone, Frank A., Jr.

    2015-01-01

    A method is presented to represent the large-deformation kinematics of intraply matrix cracks and delaminations in continuum damage mechanics (CDM) constitutive material models. The method involves the additive decomposition of the deformation gradient tensor into 'crack' and 'bulk material' components. The response of the intact bulk material is represented by a reduced deformation gradient tensor, and the opening of an embedded cohesive interface is represented by a normalized cohesive displacement-jump vector. The rotation of the embedded interface is tracked as the material deforms and as the crack opens. The distribution of the total local deformation between the bulk material and the cohesive interface components is determined by minimizing the difference between the cohesive stress and the bulk material stress projected onto the cohesive interface. The improvements to the accuracy of CDM models that incorporate the presented method over existing approaches are demonstrated for a single element subjected to simple shear deformation and for a finite element model of a unidirectional open-hole tension specimen. The material model is implemented as a VUMAT user subroutine for the Abaqus/Explicit finite element software. The presented deformation gradient decomposition method reduces the artificial load transfer across matrix cracks subjected to large shearing deformations, and avoids the spurious secondary failure modes that often occur in analyses based on conventional progressive damage models.

  5. Concerning an application of the method of least squares with a variable weight matrix

    Science.gov (United States)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
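    A small numerical sketch of the iterative procedure in fixed-point form, with a hypothetical residual-dependent weight function: the weighted least-squares estimate is recomputed with the weight matrix evaluated at the previous iterate until the estimate stops changing. The weight function, data and tolerances are illustrative, not those of the paper.

```python
import numpy as np

def state_dependent_wls(A, b, weight_fn, x0=None, tol=1e-10, maxiter=100):
    """Fixed-point iteration for least squares whose weight matrix W(x)
    depends on the estimate x itself."""
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for _ in range(maxiter):
        W = weight_fn(x)                        # weight matrix at the current estimate
        AtW = A.T @ W
        x_new = np.linalg.solve(AtW @ A, AtW @ b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: diagonal weights that down-weight large residuals
rng = np.random.default_rng(4)
A = rng.normal(size=(60, 2))
x_true = np.array([1.0, -3.0])
b = A @ x_true + 0.1 * rng.normal(size=60)

def weights(x):
    r = b - A @ x
    return np.diag(1.0 / (1.0 + r ** 2))

print(state_dependent_wls(A, b, weights))   # close to x_true
```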

  6. Unified approach to numerical transfer matrix methods for disordered systems: applications to mixed crystals and to elasticity percolation

    International Nuclear Information System (INIS)

    Lemieux, M.A.; Breton, P.; Tremblay, A.M.S.

    1985-01-01

    It is shown that the Negative Eigenvalue Theorem and transfer matrix methods may be considered within a unified framework and generalized to compute projected densities of states or, more generally, any linear combination of matrix elements of the inverse of large symmetric random matrices. As examples of applications, extensive simulations for one- and two-mode behaviour in the Raman spectrum of one-dimensional mixed crystals and a finite-size analysis of critical exponents for the central force percolation universality class are presented

  7. Matrix methods applied to engineering rigid body mechanics

    Science.gov (United States)

    Crouch, T.

    The purpose of this book is to present the solution of a range of rigid body mechanics problems using a matrix formulation of vector algebra. Essential theory concerning kinematics and dynamics is formulated in terms of matrix algebra. The solution of kinematics and dynamics problems is discussed, taking into account the velocity and acceleration of a point moving in a circular path, the velocity and acceleration determination for a linkage, the angular velocity and angular acceleration of a roller in a taper-roller thrust race, Euler's theorem on the motion of rigid bodies, an automotive differential, a rotating epicyclic, the motion of a high speed rotor mounted in gimbals, and the vibration of a spinning projectile. Attention is given to the activity of a force, the work done by a conservative force, the work and potential in a conservative system, the equilibrium of a mechanism, bearing forces due to rotor misalignment, and the frequency of vibrations of a constrained rod.
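    In the matrix formulation of vector algebra used in the book, the cross product ω × r becomes multiplication by the skew-symmetric matrix of ω, so the velocity and acceleration of a point moving in a circular path follow from two matrix products. A brief sketch with illustrative values (not an example taken from the book):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ r equals np.cross(w, r)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

omega = np.array([0.0, 0.0, 2.0])      # angular velocity about the z axis (rad/s)
alpha = np.array([0.0, 0.0, 0.5])      # angular acceleration (rad/s^2)
r = np.array([1.0, 0.0, 0.0])          # position of the point on the body

W, Aa = skew(omega), skew(alpha)
v = W @ r                              # velocity:      omega x r
a = Aa @ r + W @ (W @ r)               # acceleration:  alpha x r + omega x (omega x r)
print(v, a)                            # tangential plus centripetal components
```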

  8. Method and apparatus for producing tomographic images

    International Nuclear Information System (INIS)

    Annis, M.

    1989-01-01

    A device useful in producing a tomographic image of a selected slice of an object to be examined is described, comprising: a source of penetrating radiation; sweep means for forming energy from the source into a pencil beam and repeatedly sweeping the pencil beam over a line in space to define a sweep plane; first means for supporting an object to be examined so that the pencil beam intersects the object along a path passing through the object and the selected slice; line collimating means for filtering radiation scattered by the object, the line collimating means having a field of view which intersects the sweep plane in a bounded line so that the line collimating means passes only radiation scattered by elementary volumes of the object lying along the bounded line, the line collimating means including a plurality of channels each substantially planar in form to collectively define the field of view, the channels oriented so that the pencil beam sweeps along the bounded line as a function of time; and radiation detector means responsive to radiation passed by the line collimating means

  9. On matrix fractional differential equations

    Directory of Open Access Journals (Sweden)

    Adem Kılıçman

    2017-01-01

    Full Text Available The aim of this article is to study matrix fractional differential equations and to find the exact solution of systems of matrix fractional differential equations in terms of the Riemann–Liouville derivative, using the Laplace transform method and the convolution product for the Riemann–Liouville fractional derivative of matrices. We also state a theorem for the non-homogeneous matrix fractional partial differential equation, with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objective of this article is to discuss the Laplace transform method based on operational matrices of fractional derivatives for solving several kinds of linear fractional differential equations. Moreover, we present the operational matrices of fractional derivatives with the Laplace transform for applications in various engineering systems, such as control systems. We present an analytical technique for solving fractional-order, multi-term fractional differential equations. In other words, we propose an efficient algorithm for solving fractional matrix equations.

  10. Cross-Mode Comparability of Computer-Based Testing (CBT) versus Paper-Pencil Based Testing (PPT): An Investigation of Testing Administration Mode among Iranian Intermediate EFL Learners

    Science.gov (United States)

    Khoshsima, Hooshang; Hosseini, Monirosadat; Toroujeni, Seyyed Morteza Hashemi

    2017-01-01

    The advent of technology has caused growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education during the last decades. This constant promulgation of computers to reshape conventional tests into a computerized format has permeated the…

  11. Finding all real roots of a polynomial by matrix algebra and the Adomian decomposition method

    Directory of Open Access Journals (Sweden)

    Hooman Fatoorehchi

    2014-10-01

    Full Text Available In this paper, we put forth a combined method for calculation of all real zeroes of a polynomial equation through the Adomian decomposition method equipped with a number of developed theorems from matrix algebra. These auxiliary theorems are associated with eigenvalues of matrices and enable convergence of the Adomian decomposition method toward different real roots of the target polynomial equation. To further improve the computational speed of our technique, a nonlinear convergence accelerator known as the Shanks transform has optionally been employed. For the sake of illustration, a number of numerical examples are given.

  12. Virtual design software for mechanical system dynamics using transfer matrix method of multibody system and its application

    Directory of Open Access Journals (Sweden)

    Hai-gen Yang

    2015-09-01

    Full Text Available Complex mechanical systems such as high-speed trains, multiple launch rocket systems, self-propelled artillery, and industrial robots are becoming increasingly larger in scale and more complicated in structure. Designing these products often requires complex model design, multibody system dynamics calculation, and repeated analysis of large amounts of data. In the recent 20 years, the transfer matrix method of multibody system has been widely applied in engineering fields and welcomed at home and abroad for the following features: no need for global dynamic equations of the system, low orders of the involved system matrices, high computational efficiency, and high programmability. In order to realize rapid and visual simulation for complex mechanical system virtual design using the transfer matrix method of multibody system, a virtual design software named MSTMMSim is designed and implemented. In MSTMMSim, the transfer matrix method of multibody system is used as the solver for dynamic modeling and calculation, and Open CASCADE is used for solid geometry modeling. Various auxiliary analytical tools such as curve plots and animation display are provided in the post-processor to analyze and process the simulation results. Two numerical examples are given to verify the validity and accuracy of the software, and a multiple launch rocket system engineering example is given at the end of this article to show that the software provides a powerful platform for complex mechanical system simulation and virtual design.

  13. On matrix fractional differential equations

    OpenAIRE

    Adem Kılıçman; Wasan Ajeel Ahmood

    2017-01-01

    The aim of this article is to study the matrix fractional differential equations and to find the exact solution for system of matrix fractional differential equations in terms of Riemann–Liouville using Laplace transform method and convolution product to the Riemann–Liouville fractional of matrices. Also, we show the theorem of non-homogeneous matrix fractional partial differential equation with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objec...

  14. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on a graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  15. Electroanalysis of cardioselective beta-adrenoreceptor blocking agent acebutolol by disposable graphite pencil electrodes with detailed redox mechanism

    Directory of Open Access Journals (Sweden)

    Atmanand M. Bagoji

    2016-12-01

    Full Text Available A simple, economic graphite pencil electrode (GPE) was used for the analysis of the cardioselective, hydrophilic beta-adrenoreceptor blocking agent acebutolol (ACBT) using cyclic voltammetric, linear sweep voltammetric, differential pulse voltammetric (DPV), and square-wave voltammetric (SWV) techniques. The dependence of the current on pH, concentration, and scan rate was investigated to optimize the experimental conditions for the determination of ACBT. The electrochemical behavior of ACBT at the GPE was a diffusion-controlled process. A probable electro-redox mechanism is proposed. Under the optimal conditions, the anodic peak current was linearly proportional to the concentration of ACBT in the range from 1.00 to 15.0 μM, with a limit of detection of 1.26 × 10⁻⁸ M for DPV and 1.28 × 10⁻⁸ M for SWV. The method was applied for the quantitative determination of ACBT levels in urine as real samples. The obtained recoveries for ACBT in urine ranged from 95.4 to 101% as found by the standard addition technique. A further interference study was also carried out with some common interfering substances.

  16. Convergence of Transition Probability Matrix in CLV-Markov Models

    Science.gov (United States)

    Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.

    2018-04-01

    A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of the MCM is its behavior far into the future. This behavior is derived from a property of the n-step transition probability matrix, called the convergence of the n-step transition matrix as n goes to infinity. Mathematically, the convergence of the transition probability matrix means finding the limit of the transition matrix raised to the power n as n goes to infinity. The convergence form of the transition probability matrix is very interesting as it brings the matrix to its stationary form. This form is useful for predicting the probability of transitions between states in the future. The method usually used to find the convergence of the transition probability matrix is the limiting distribution. In this paper, the convergence of the transition probability matrix is obtained using a simple concept of linear algebra, namely by diagonalizing the matrix. This method has a higher level of complexity because it requires diagonalization of the matrix, but it has the advantage of yielding a general form for the n-th power of the transition probability matrix. This form is useful to see the transition matrix before it becomes stationary. Example cases are taken from a CLV model using the MCM, called the CLV-Markov model. The transition probability matrices of several such models are taken to find their convergence form. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained by the commonly used limiting-distribution method.
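
    As an illustration of the diagonalization idea described above, the following minimal Python sketch computes the n-step matrix P^n directly from the eigendecomposition of P and shows its rows converging to the stationary distribution. The 3×3 transition matrix is an invented example, not the CLV-Markov data from the paper.

    ```python
    # Hedged sketch: P is assumed diagonalizable; the matrix below is illustrative.
    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])

    # Diagonalize: P = V diag(w) V^{-1}, so P^n = V diag(w^n) V^{-1}.
    w, V = np.linalg.eig(P)
    V_inv = np.linalg.inv(V)

    def P_power(n):
        """Closed-form n-th power of P from its eigendecomposition."""
        return (V * w**n) @ V_inv   # same as V @ np.diag(w**n) @ V_inv

    # As n grows, every row approaches the stationary distribution.
    print(np.real_if_close(P_power(100)))
    ```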

  17. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with respect to the basis matrix, the coefficient matrix, and the slack variables alternately, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experimental results show that it outperforms the state-of-the-art supervised NMF methods.
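
    As background for the factorization step that the proposed supervised variant builds on, here is a minimal sketch of plain (unsupervised) NMF using the classical Lee-Seung multiplicative updates. The random data, matrix sizes, and rank are purely illustrative, and this is not the max-min distance algorithm itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((20, 50))     # nonnegative data matrix, X ~ W @ H
    k = 5                        # factorization rank

    W = rng.random((20, k))      # per-sample low-dimensional representations
    H = rng.random((k, 50))      # nonnegative basis vectors (rows)

    eps = 1e-9
    for _ in range(200):
        # Lee-Seung multiplicative updates for the Frobenius-norm objective
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)

    print("reconstruction error:", np.linalg.norm(X - W @ H))
    ```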

  18. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-10-26

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with respect to the basis matrix, the coefficient matrix, and the slack variables alternately, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experimental results show that it outperforms the state-of-the-art supervised NMF methods.

  19. Generating matrix elements of the hamiltonian of the algebraic version of resonating group method on intrinsic wave functions with various oscillator lengths

    International Nuclear Information System (INIS)

    Badalov, S.A.; Filippov, G.F.

    1986-01-01

    Recipes to calculate the generating matrix elements of the algebraic version of the resonating group method (RGM) are given for two- and three-cluster nucleon systems, with the center-of-mass motion separated exactly. For a Hamiltonian with a Gaussian nucleon-nucleon potential dependence, the generating matrix elements of the algebraic version of the RGM can be written down explicitly if the matrix elements of the corresponding system on the wave functions of the Brink cluster model are known

  20. A spectral method to detect community structure based on distance modularity matrix

    Science.gov (United States)

    Yang, Jin-Xuan; Zhang, Xiao-Dong

    2017-08-01

    There are many community organizations in social and biological networks. How to identify this community structure in complex networks has become a hot issue. In this paper, an algorithm to detect the community structure of networks is proposed by using the spectra of a distance modularity matrix. The proposed algorithm focuses on the distances between vertices within communities, rather than on the most weakly connected vertex pairs or the number of edges between communities. The experimental results show that our method is more effective at identifying community structure for a variety of real-world and computer-generated networks, at the cost of slightly more computation time.
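
    For orientation, the sketch below performs spectral bipartitioning with the standard Newman modularity matrix B = A - k kᵀ/(2m); the distance modularity matrix proposed in the paper is not reproduced here, and the toy graph (two loosely connected triangles) is invented.

    ```python
    import numpy as np

    # Adjacency matrix of two triangles joined by a single edge.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)

    k = A.sum(axis=1)                      # vertex degrees
    m = A.sum() / 2.0                      # number of edges
    B = A - np.outer(k, k) / (2.0 * m)     # standard modularity matrix

    # Signs of the leading eigenvector give a two-way community split.
    w, V = np.linalg.eigh(B)
    labels = (V[:, np.argmax(w)] >= 0).astype(int)
    print("community labels:", labels)
    ```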

  1. Fast sparse matrix-vector multiplication by partitioning and reordering

    NARCIS (Netherlands)

    Yzelman, A.N.

    2011-01-01

    The thesis introduces a cache-oblivious method for the sparse matrix-vector (SpMV) multiplication, which is an important computational kernel in many applications. The method works by permuting rows and columns of the input matrix so that the resulting reordered matrix induces cache-friendly

  2. Modern Nondestructive Test Methods for Army Ceramic Matrix Composites

    National Research Council Canada - National Science Library

    Strand, Douglas J

    2008-01-01

    .... Ceramic matrix composites (CMC) are potentially good high-temperature structural materials because of their low density, high elastic moduli, high strength, and for those with weak interfaces, surprisingly good damage tolerance...

  3. Control of Pan-tilt Mechanism Angle using Position Matrix Method

    Directory of Open Access Journals (Sweden)

    Hendri Maja Saputra

    2013-12-01

    Full Text Available Control of a Pan-Tilt Mechanism (PTM) angle for the bomb disposal robot Morolipi-V2 using an inertial measurement unit sensor, the x-IMU, has been carried out. The PTM has to be actively controllable, both manually and automatically, in order to correct the orientation of the moving Morolipi-V2 platform. The x-IMU detects the platform orientation and sends the result in order to automatically control the PTM. The orientation is calculated using quaternions combined with the Madgwick and Mahony filter methods. The orientation data, which consist of the roll (α), pitch (β), and yaw (γ) angles from the x-IMU, are then sent to the camera for controlling the PTM motion (pan and tilt angles) after calculating the reverse angle using the position matrix method. Experimental results using the Madgwick and Mahony methods show that the x-IMU can be used to find the robot platform orientation. Acceleration data from the accelerometer and flux data from the magnetometer produce noise with standard deviations of 0.015 g and 0.006 G, respectively. The maximum absolute errors caused by the Madgwick and Mahony methods with respect to the X-axis are 48.45° and 33.91°, respectively. The implementation of the x-IMU as an inertial sensor to control the Pan-Tilt Mechanism shows a good result, in which the pan angle tends to follow the yaw angle and the tilt angle the pitch angle, except for a very small angle shift due to the influence of the roll angle.

  4. WE-E-BRB-02: Implementation of Pencil Beam Scanning (PBS) Proton Therapy Treatment for Liver Patient

    Energy Technology Data Exchange (ETDEWEB)

    Lin, L. [University of Pennsylvania (United States)

    2016-06-15

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and patient’s respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy including assessment of tumor motion and WET evaluation will be first presented. As thoracic tumors have very different motion patterns from liver tumors, examples would be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both treatment target and surrounding organs such as spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: Motion analysis for individual patients with respect to interplay effect Interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS Treatment planning margins for PBS The impact of proton dose calculation engines over heterogeneous treatment target and surrounding organs I have a current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have a current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  5. WE-E-BRB-02: Implementation of Pencil Beam Scanning (PBS) Proton Therapy Treatment for Liver Patient

    International Nuclear Information System (INIS)

    Lin, L.

    2016-01-01

    Strategies for treating thoracic and liver tumors using pencil beam scanning proton therapy Thoracic and liver tumors have not been treated with pencil beam scanning (PBS) proton therapy until recently. This is because of concerns about the significant interplay effects between proton spot scanning and patient’s respiratory motion. However, not all tumors have unacceptable magnitude of motion for PBS proton therapy. Therefore it is important to analyze the motion and understand the significance of the interplay effect for each patient. The factors that affect interplay effect and its washout include magnitude of motion, spot size, spot scanning sequence and speed. Selection of beam angle, scanning direction, repainting and fractionation can all reduce the interplay effect. An overview of respiratory motion management in PBS proton therapy including assessment of tumor motion and WET evaluation will be first presented. As thoracic tumors have very different motion patterns from liver tumors, examples would be provided for both anatomic sites. As thoracic tumors are typically located within highly heterogeneous environments, dose calculation accuracy is a concern for both treatment target and surrounding organs such as spinal cord or esophagus. Strategies for mitigating the interplay effect in PBS will be presented and the pros and cons of various motion mitigation strategies will be discussed. Learning Objectives: Motion analysis for individual patients with respect to interplay effect Interplay effect and mitigation strategies for treating thoracic/liver tumors with PBS Treatment planning margins for PBS The impact of proton dose calculation engines over heterogeneous treatment target and surrounding organs I have a current research funding from Varian Medical System under the master agreement between University of Pennsylvania and Varian; L. Lin, I have a current funding from Varian Medical System under the master agreement between University of Pennsylvania and

  6. Matrix of transmission in structural dynamics

    International Nuclear Information System (INIS)

    Mukherjee, S.

    1975-01-01

    Within the last few years numerous papers have been published on the subject of matrix methods in elasto-mechanics. The 'Matrix of Transmission' is one of the methods in this field which has gained considerable attention in recent years. The basic philosophy adopted in this method is based on the idea of breaking up a complicated system into component parts with simple elastic and dynamic properties which can be readily expressed in matrix form. These component matrices are considered as building blocks, which are fitted together according to a set of predetermined rules which then provide the static and dynamic properties of the entire system. A common type of system occurring in engineering practice consists of a number of elements linked together end to end in the form of a chain. The 'Transfer Matrix' is ideally suited for such a system, because only successive multiplication is necessary to connect these elements together. The number of degrees of freedom and intermediate conditions present no difficulty. Although the 'Transfer Matrix' method is suitable for the treatment of branched and coupled systems, its application to systems which do not have a predominant chain topology is not effective. Apart from the requirement that the system be linearly elastic, no other restrictions are made. In this paper, it is intended to give a general outline and theoretical formulation of the 'Transfer Matrix' and then its application to actual problems in structural dynamics related to seismic analysis. The natural frequencies of a freely vibrating elastic system can be found by applying the proper end conditions. The end conditions set the frequency determinant to zero. By using a suitable numerical method, the natural frequencies and mode shapes are determined by making a frequency sweep within the range of interest. Results of an analysis of a typical nuclear building by this method show very close agreement with the results obtained by using the ASKA and SAP IV programs. Therefore
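
    A small Python sketch of the chain-topology idea is given below: a fixed-free chain of N equal masses and springs is assembled by successive multiplication of element matrices, and the natural frequencies are located where the appropriate entry of the overall transfer matrix vanishes. The values of m, k, and N are illustrative and the state vector is simply [displacement, force]; this is not the nuclear-building model analyzed in the paper.

    ```python
    import numpy as np

    m, k, N = 1.0, 1.0, 4          # mass, spring stiffness, number of segments

    def chain_matrix(omega):
        """Overall transfer matrix of N identical (spring + mass) segments."""
        field = np.array([[1.0, 1.0 / k], [0.0, 1.0]])          # spring element
        point = np.array([[1.0, 0.0], [-m * omega**2, 1.0]])    # lumped mass
        U = np.eye(2)
        for _ in range(N):
            U = point @ field @ U
        return U

    # Fixed end: displacement = 0; free end: force = 0, so natural frequencies
    # are the roots of U[1, 1](omega).  Locate them with a simple sign sweep.
    omegas = np.linspace(0.01, 2.2, 20000)
    residual = np.array([chain_matrix(w)[1, 1] for w in omegas])
    roots = omegas[:-1][np.sign(residual[:-1]) != np.sign(residual[1:])]
    print("approximate natural frequencies (rad/s):", np.round(roots, 3))
    ```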

  7. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  8. The impact of MCS models and EFAC values on the dose simulation for a proton pencil beam

    International Nuclear Information System (INIS)

    Chen, Shih-Kuan; Chiang, Bing-Hao; Lee, Chung-Chi; Tung, Chuan-Jong; Hong, Ji-Hong; Chao, Tsi-Chian

    2017-01-01

    The Multiple Coulomb Scattering (MCS) model plays an important role in accurate MC simulation, especially for small-field applications. The Rossi model is used in MCNPX 2.7.0, and the Lewis model in Geant4.9.6.p02. These two models may generate very different angular and spatial distributions in small-field proton dosimetry. Besides the angular and spatial distributions, step size is also an important issue that causes path length effects. The Energy Fraction (EFAC) value can be used in MCNPX 2.7.0 to control the step sizes of MCS. In this study, we use MCNPX 2.7.0, Geant4.9.6.p02, and a pencil beam algorithm to evaluate the effect of different MCS models and different EFAC values on dose deposition in proton disequilibrium situations. The different MCS models agree well with each other under a proton equilibrium situation. Under proton disequilibrium situations, however, the MCNPX and Geant4 results show a significant deviation (up to 43%). In addition, the path length effects are more significant when EFAC is equal to 0.917 or 0.94 in small-field proton dosimetry, and an EFAC value of 0.97 is the best for both accuracy and efficiency - Highlights: • MCS and EFAC are important for accurate MC simulation of proton pencil beams. • Bragg curves of MCNPX and Geant4 have a dose deviation of up to 43%. • Lateral profiles from MCNPX are wider than those from Geant4. • A large EFAC value caused path length effects, but no effects on lateral profiles. • An EFAC value of 0.97 is the best for both accuracy and efficiency.

  9. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
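
    The block-matrix view mentioned in the abstract can be checked numerically in a few lines; the 4×4 matrices and the 2×2 blocking below are arbitrary examples, not taken from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.integers(0, 5, (4, 4)), rng.integers(0, 5, (4, 4))

    def blocks(M):
        """Split a 4x4 matrix into a 2x2 grid of 2x2 blocks."""
        return [[M[2*i:2*i+2, 2*j:2*j+2] for j in range(2)] for i in range(2)]

    Ab, Bb = blocks(A), blocks(B)
    # Each block of the product is a sum of products of corresponding blocks.
    Cb = [[Ab[i][0] @ Bb[0][j] + Ab[i][1] @ Bb[1][j] for j in range(2)]
          for i in range(2)]

    assert np.array_equal(np.block(Cb), A @ B)   # blockwise result equals A @ B
    print(np.block(Cb))
    ```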

  10. Registration of pencil beam proton radiography data with X-ray CT.

    Science.gov (United States)

    Deffet, Sylvain; Macq, Benoît; Righetto, Roberto; Vander Stappen, François; Farace, Paolo

    2017-10-01

    Proton radiography seems to be a promising tool for assessing the quality of the stopping power computation in proton therapy. However, range error maps obtained on the basis of proton radiographs are very sensitive to small misalignment between the planning CT and the proton radiography acquisitions. In order to be able to mitigate misalignment in postprocessing, the authors implemented a fast method for registration between pencil beam proton radiography data obtained with a multilayer ionization chamber (MLIC) and an X-ray CT acquired on a head phantom. The registration was performed by optimizing a cost function which performs a comparison between the acquired data and simulated integral depth-dose curves. Two methodologies were considered, one based on dual orthogonal projections and the other one on a single projection. For each methodology, the robustness of the registration algorithm with respect to three confounding factors (measurement noise, CT calibration errors, and spot spacing) was investigated by testing the accuracy of the method through simulations based on a CT scan of a head phantom. The present registration method showed robust convergence towards the optimal solution. For the level of measurement noise and the uncertainty in the stopping power computation expected in proton radiography using an MLIC, the accuracy appeared to be better than 0.3° for angles and 0.3 mm for translations by use of the appropriate cost function. The spot spacing analysis showed that a spacing larger than the 5 mm used by other authors for the investigation of an MLIC for proton radiography led to results with absolute accuracy better than 0.3° for angles and 1 mm for translations when orthogonal proton radiographs were fed into the algorithm. In the case of a single projection, 6 mm was the largest spot spacing presenting an acceptable registration accuracy. For registration of proton radiography data with X-ray CT, the use of a direct ray-tracing algorithm to compute

  11. Global calculation of PWR reactor core using the two group energy solution by the response matrix method

    International Nuclear Information System (INIS)

    Conti, C.F.S.; Watson, F.V.

    1991-01-01

    A computational code to solve the two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. This method solves the global problem of a PWR core without using the cross-section homogenization process; thus it is equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by the first-order perturbative method and considers expansions in Fourier series of arbitrary order for the boundary and interior fluxes. (author)

  12. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Quantitative methods for compensation of matrix effects and self-absorption in Laser Induced Breakdown Spectroscopy signals of solids

    Science.gov (United States)

    Takahashi, Tomoko; Thornton, Blair

    2017-12-01

    This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
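
    The normalised root-mean-square error mentioned at the end of the abstract can be computed as in the short sketch below; normalising the RMSE by the range of the reference values is only one common convention, and the concentrations shown are made-up numbers.

    ```python
    import numpy as np

    def nrmse(reference, predicted):
        """RMSE normalised by the range of the reference values."""
        reference = np.asarray(reference, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        rmse = np.sqrt(np.mean((predicted - reference) ** 2))
        return rmse / (reference.max() - reference.min())

    ref = [1.2, 3.4, 5.0, 7.8, 10.1]    # e.g. wt% from a certified method
    pred = [1.0, 3.6, 4.7, 8.1, 9.8]    # e.g. wt% predicted from LIBS spectra
    print(f"NRMSE = {nrmse(ref, pred):.3f}")
    ```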

  14. Subthreshold resonances and resonances in the R-matrix method for binary reactions and in the Trojan horse method

    Science.gov (United States)

    Mukhamedzhanov, A. M.; Shubhchintak, Bertulani, C. A.

    2017-08-01

    In this paper we discuss the R-matrix approach to treat the subthreshold resonances for the single-level and one-channel and for the single-level and two-channel cases. In particular, the expression relating the asymptotic normalization coefficient (ANC) with the observable reduced width, when the subthreshold bound state is the only channel or coupled with an open channel, which is a resonance, is formulated. Since the ANC plays a very important role in nuclear astrophysics, these relations significantly enhance the power of the derived equations. We present the relationship between the resonance width and the ANC for the general case and consider two limiting cases: wide and narrow resonances. Different equations for the astrophysical S factors in the R-matrix approach are presented. After that we discuss the Trojan horse method (THM) formalism. The developed equations are obtained using the surface-integral formalism and the generalized R-matrix approach for the three-body resonant reactions. It is shown how the Trojan horse (TH) double-differential cross section can be expressed in terms of the on-the-energy-shell astrophysical S factor for the binary subreaction. Finally, we demonstrate how the THM can be used to calculate the astrophysical S factor for the neutron generator ¹³C(α,n)¹⁶O in low-mass AGB stars. At astrophysically relevant energies this astrophysical S factor is controlled by the threshold level 1/2⁺, Ex = 6356 keV. Here, we reanalyzed recent TH data taking into account more accurately the three-body effects and using both assumptions that the threshold level is a subthreshold bound state or it is a resonance state.

  15. Improved method for eliminating center-of-mass coordinates from matrix elements in oscillator basis

    International Nuclear Information System (INIS)

    Richardson, R.H.; Shapiro, J.Y.

    1986-01-01

    This paper presents a concise, efficient method of reducing potential energy matrix elements to relative coordinates, when one is using an oscillator basis. It is especially suited to computer calculations. One nice feature of the method is its modular form, which allows a wide range of calculations. Separate FORTRAN subroutines have been written which calculate and store tables of the one-dimensional brackets of an equation that is presented and the single particle brackets from the isotropic to the axially symmetric oscillator equations. The tables are used by other subroutines which calculate the modified brackets and the brackets with spin. The methods developed here are a substantial improvement over what has been done heretofore, and open up new possibilities for performing nuclear structure calculations

  16. Study of electron-molecule collision via finite-element method and r-matrix propagation technique: Exact exchange

    International Nuclear Information System (INIS)

    Abdolsalami, F.; Abdolsalami, M.; Perez, L.; Gomez, P.

    1995-01-01

    The authors have applied the finite-element method to electron-molecule collision with the exchange effect implemented rigorously. All the calculations are done in the body-frame within the fixed-nuclei approximation, where the exact treatment of exchange as a nonlocal effect results in a set of coupled integro-differential equations. The method is applied to e-H₂ and e-N₂ scatterings and the cross sections obtained are in very good agreement with the corresponding results the authors have generated from the linear-algebraic approach. This confirms the significant difference observed between their results generated by linear-algebraic method and the previously published e-N₂ cross sections. Their studies show that the finite-element method is clearly superior to the linear-algebraic approach in both memory usage and CPU time especially for large systems such as e-N₂. The system coefficient matrix obtained from the finite-element method is often sparse and smaller in size by a factor of 12 to 16, compared to the linear-algebraic technique. Moreover, the CPU time required to obtain stable results with the finite-element method is significantly smaller than the linear-algebraic approach for one incident electron energy. The usage of computer resources in the finite-element method can even be reduced much further when (1) scattering calculations involving multiple electron energies are performed in one computer run and (2) exchange, which is a short range effect, is approximated by a sparse matrix. 17 refs., 7 figs., 5 tabs

  17. Measurement of angle-correlated differential (n,2n) reaction cross section with pencil-beam DT neutron source

    International Nuclear Information System (INIS)

    Takaki, S.; Kondo, K.; Shido, S.; Miyamaru, H.; Murata, I.; Ochiai, Kentaro; Nishitani, Takeo

    2006-01-01

    Angle-correlated differential cross-section for ⁹Be(n,2n) reaction has been measured with the coincidence detection technique and a pencil-beam DT neutron source at FNS, JAEA. Energy spectra of two emitted neutrons were obtained for azimuthal and polar direction independently. It was made clear from the experiment that there are noise signals caused by inter-detector scattering. The ratio of the inter-detector scattering components in the detected signals was estimated by MCNP calculation to correct the measured result. By considering the inter-detector scattering components, the total ⁹Be(n,2n) reaction cross-section agreed with the evaluated nuclear data within the experimental error. (author)

  18. Effects of SiO2 nano-particles on tribological and mechanical properties of aluminum matrix composites by different dispersion methods

    Science.gov (United States)

    Azadi, Mahboobeh; Zolfaghari, Mehrdad; Rezanezhad, Saeid; Azadi, Mohammad

    2018-05-01

    This study presents the mechanical properties of aluminum matrix composites reinforced by SiO2 nano-particles. The stir casting method was employed to produce the various aluminum matrix composites. Different composites were made by varying the SiO2 nano-particle content (0.5 and 1 weight percent) and using two dispersion methods (ball-milling and pre-heating). The density, the hardness, the compression strength, the wear resistance and the microstructure of the nano-composites have then been studied in this research. In addition, the distribution of nano-particles in the aluminum matrix has been evaluated for all composites by field emission scanning electron microscopy (FESEM). The results obtained showed that the density, the elongation and the ultimate compressive strength of the various nano-composites decreased in the presence of SiO2 nano-particles; however, the hardness, the wear resistance, the yield strength and the elastic modulus of the composites increased with the addition of nano-particles to the aluminum alloy. FESEM images indicated better wetting of the SiO2 reinforcement in the aluminum matrix prepared by the pre-heating dispersion method, compared to ball-milling. When SiO2 nano-particles were added to the aluminum alloy, the morphology of the Si phase and the intermetallic phases changed, which enhanced the mechanical properties. In addition, the wear mechanism and the friction coefficient changed for the various nano-composites with respect to the aluminum alloy.

  19. Basic matrix algebra and transistor circuits

    CERN Document Server

    Zelinger, G

    1963-01-01

    Basic Matrix Algebra and Transistor Circuits deals with mastering the techniques of matrix algebra for application in transistors. This book attempts to unify fundamental subjects, such as matrix algebra, four-terminal network theory, transistor equivalent circuits, and pertinent design matters. Part I of this book focuses on basic matrix algebra of four-terminal networks, with descriptions of the different systems of matrices. This part also discusses both simple and complex network configurations and their associated transmission. This discussion is followed by the alternative methods of de

  20. SRTC criticality safety technical review: Nuclear criticality safety evaluation 94-02, uranium solidification facility pencil tank module spacing

    International Nuclear Information System (INIS)

    Rathbun, R.

    1994-01-01

    Review of NMP-NCS-94-0087, ''Nuclear Criticality Safety Evaluation 94-02: Uranium Solidification Facility Pencil Tank Module Spacing (U), April 18, 1994,'' was requested of the SRTC Applied Physics Group. The NCSE is a criticality assessment to show that the USF process module spacing, as given in Non-Conformance Report SHM-0045, remains safe for operation. The NCSE under review concludes that the module spacing as given in Non-Conformance Report SHM-0045 remains in a critically safe configuration for all normal and single credible abnormal conditions. After a thorough review of the NCSE, this reviewer agrees with that conclusion

  1. Transfer matrix representation for periodic planar media

    Science.gov (United States)

    Parrinello, A.; Ghiringhelli, G. L.

    2016-06-01

    Sound transmission through infinite planar media characterized by in-plane periodicity is addressed by exploiting the free wave propagation in the related unit cells. An appropriate through-thickness transfer matrix, relating a proper set of variables describing the acoustic field at the two external surfaces of the medium, is derived by manipulating the dynamic stiffness matrix related to a finite element model of the unit cell. The adoption of finite element models avoids analytical modeling or simplifications of the geometry or materials. The obtained matrix is then used in a transfer matrix method context, making it possible to combine the periodic medium with layers of different nature and to treat both hard-wall and semi-infinite fluid termination conditions. A finite sequence of identical sub-layers through the thickness of the medium can be handled within the transfer matrix method, significantly decreasing the computational burden. Transfer matrices obtained by means of the proposed method are compared with analytical or equivalent models, in terms of sound transmission through barriers of different nature.
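
    The transfer matrix method context referred to above can be illustrated with a purely analytical toy case: normal-incidence transmission through a stack of homogeneous fluid layers between two semi-infinite fluids, where each layer contributes a 2×2 matrix and the layers are chained by multiplication. The layer data are invented; in the paper the finite-element-derived unit-cell matrix would simply take the place of one of the analytical layer matrices.

    ```python
    import numpy as np

    def fluid_layer(rho, c, d, omega):
        """2x2 transfer matrix [p, v] of a fluid layer (density, sound speed, thickness)."""
        kz, Z = omega / c, rho * c
        return np.array([[np.cos(kz * d), 1j * Z * np.sin(kz * d)],
                         [1j * np.sin(kz * d) / Z, np.cos(kz * d)]])

    omega = 2 * np.pi * 1000.0                               # 1 kHz
    layers = [(1.2, 340.0, 0.01), (50.0, 300.0, 0.02), (1.2, 340.0, 0.01)]

    T = np.eye(2, dtype=complex)
    for rho, c, d in layers:                                 # chain the layers
        T = T @ fluid_layer(rho, c, d, omega)

    Z0 = 1.2 * 340.0                                         # surrounding air impedance
    t = 2.0 / (T[0, 0] + T[0, 1] / Z0 + T[1, 0] * Z0 + T[1, 1])
    print("transmission loss (dB):", -20 * np.log10(abs(t)))
    ```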

  2. Dynamic shear-lag model for understanding the role of matrix in energy dissipation in fiber-reinforced composites.

    Science.gov (United States)

    Liu, Junjie; Zhu, Wenqing; Yu, Zhongliang; Wei, Xiaoding

    2018-07-01

    Lightweight, high-impact-performance composite design is a big challenge for scientists and engineers. Inspired by well-known biological materials, e.g., bone, spider silk, and the claws of the mantis shrimp, artificial composites have been synthesized for engineering applications. Presently, the design of ballistic-resistant composites mainly emphasizes the utilization of light and high-strength fibers, whereas the contribution from matrix materials receives less attention. However, recent ballistic experiments on fiber-reinforced composites challenge our common sense. The use of a matrix with "low-grade" properties effectively enhances the impact performance. In this study, we establish a dynamic shear-lag model to explore the energy dissipation through viscous matrix materials in fiber-reinforced composites and the associations of the energy dissipation characteristics with the properties and geometries of the constituents. The model suggests that an enhancement in energy dissipation before the material integrity is lost can be achieved by tuning the shear modulus and viscosity of the matrix. Furthermore, our model implies that an appropriately designed staggered microstructure, adopted by many natural composites, can repeatedly activate the energy dissipation process and thus dramatically improve the impact performance. This model demonstrates the role of the matrix in energy dissipation, and stimulates new advanced material design concepts for ballistic applications. Biological composites found in nature often possess exceptional mechanical properties that man-made materials have not been able to achieve. For example, it is predicted that a pencil-thick spider silk thread can stop a flying Boeing airplane. Here, by proposing a dynamic shear-lag model, we investigate the relationships between the impact performance of a composite and the dimensions and properties of its constituents. Our analysis suggests that the impact performance of fiber-reinforced composites could improve

  3. Nonenzymatic glucose sensor based on disposable pencil graphite electrode modified by copper nanoparticles

    Directory of Open Access Journals (Sweden)

    Sima Pourbeyram

    2016-10-01

    Full Text Available A nonenzymatic glucose sensor based on a disposable pencil graphite electrode (PGE) modified by copper nanoparticles [Cu(NP)] was prepared for the first time. The prepared Cu(NP) exhibited an absorption peak centered at ∼562 nm in UV-visible spectrophotometry and an almost homogeneous spherical shape under scanning electron microscopy. Cyclic voltammetry of the Cu(NP)-PGE showed an adsorption-controlled charge transfer process up to 90.0 mV s−1. The sensor was applied to the determination of glucose using an amperometric technique, with a detection limit of 0.44 (±0.01) μM and a concentration sensitivity of 1467.5 (±1.3) μA/mM cm−2. The preparation of the Cu(NP)-PGE sensor was reproducible (relative standard deviation = 2.10%, n = 10), very simple, fast, and inexpensive, and the Cu(NP)-PGE is suitable for use as a disposable glucose sensor.

  4. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
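
    NTPoly itself is a compiled library with its own interface; the toy sketch below only illustrates the underlying idea of evaluating a matrix function through a polynomial expansion on sparse input, here a truncated Taylor series for exp(A) built from sparse-sparse products. The matrix size, density, and number of terms are arbitrary choices, not NTPoly defaults.

    ```python
    import scipy.sparse as sp

    A = sp.random(200, 200, density=0.02, random_state=1)
    A = 0.05 * (A + A.T)                     # small, symmetric, sparse test matrix

    def expm_taylor(A, terms=20):
        """Truncated Taylor expansion exp(A) ~ sum_k A^k / k! via sparse products."""
        result = sp.identity(A.shape[0], format="csr")
        term = sp.identity(A.shape[0], format="csr")
        for k in range(1, terms):
            term = (term @ A) / k            # next Taylor term
            result = result + term
        return result

    F = expm_taylor(sp.csr_matrix(A))
    print("nonzeros in the approximated exp(A):", F.nnz)
    ```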

  5. Electrochemical monitoring of biointeraction by graphene-based material modified pencil graphite electrode.

    Science.gov (United States)

    Eksin, Ece; Zor, Erhan; Erdem, Arzum; Bingol, Haluk

    2017-06-15

    Recently, low-cost, effective biosensing systems based on advanced nanomaterials have received key attention for the development of novel assays for rapid and sequence-specific nucleic acid detection. An electrochemical biosensor based on reduced graphene oxide (rGO)-modified disposable pencil graphite electrodes (PGEs) was developed herein for electrochemical monitoring of DNA, and also for monitoring of the biointeraction occurring between the anticancer drug daunorubicin (DNR) and DNA. First, rGO was synthesized chemically and characterized using UV-Vis, TGA, FT-IR, Raman spectroscopy and SEM techniques. Then, the quantity of rGO assembled onto the surface of the PGE by passive adsorption was optimized. The electrochemical behavior of the rGO-PGEs was examined by cyclic voltammetry (CV). The rGO-PGEs were then utilized for electrochemical monitoring of the surface-confined interaction between DNR and DNA using the differential pulse voltammetry (DPV) technique. Additionally, the voltammetric results were complemented with the electrochemical impedance spectroscopy (EIS) technique. Electrochemical monitoring of DNR and DNA resulted in satisfactory detection limits of 0.55 µM and 2.71 µg/mL, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Reaction and nucleation mechanisms of copper electrodeposition on disposable pencil graphite electrode

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, M.R. [Department of Analytical Chemistry, Faculty of Chemistry, University of Tabriz, 29th Bahman Bolvard, Tabriz 51664 (Iran, Islamic Republic of)], E-mail: sr.majidi@gmail.com; Asadpour-Zeynali, K.; Hafezi, B. [Department of Analytical Chemistry, Faculty of Chemistry, University of Tabriz, 29th Bahman Bolvard, Tabriz 51664 (Iran, Islamic Republic of)

    2009-01-01

    The reaction and nucleation mechanism of copper electrodeposition on disposable pencil graphite electrode (PGE) in acidic sulphate solution were investigated using cyclic voltammetry (CV) and chronoamperometry (CA) techniques, respectively. Electrochemical experiments were followed by morphological studies with scanning electron microscopy (SEM). The effect of some experimental parameters, namely copper concentration, pH, scan rate, background electrolyte, deposition potential, and conditioning surface of the electrode were described. At the surface of PGE, Cu²⁺ ions were reduced at -250 mV vs. SCE. It was found that electrodeposition of copper is affected by rough surface of PGE. The nucleation mechanisms were examined by fitting the experimental CA data into Scharifker-Hills nucleation models. The nuclei population densities were also determined by means of two common fitting models developed for three-dimensional nucleation and growth (Scharifker-Mostany and Mirkin-Nilov-Herrman-Tarallo). It was found that deposition potential and background electrolyte affect the distribution of the deposited copper. The morphology of the deposited copper is affected by background electrolyte.

  7. General beam position controlling method for 3D optical systems based on the method of solving ray matrix equations

    Science.gov (United States)

    Chen, Meixiong; Yuan, Jie; Long, Xingwu; Kang, Zhenglong; Wang, Zhiguo; Li, Yingying

    2013-12-01

    A general beam position controlling method for 3D optical systems based on the method of solving ray matrix equations has been proposed in this paper. As a typical 3D optical system, nonplanar ring resonator of Zero-Lock Laser Gyroscopes has been chosen as an example to show its application. The total mismatching error induced by Faraday-wedge in nonplanar ring resonator has been defined and eliminated quite accurately with the error less than 1 μm. Compared with the method proposed in Ref. [14], the precision of the beam position controlling has been improved by two orders of magnitude. The novel method can be used to implement automatic beam position controlling in 3D optical systems with servo circuit. All those results have been confirmed by related alignment experiments. The results in this paper are important for beam controlling, ray tracing, cavity design and alignment in 3D optical systems.
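
    As a reference point for the ray-matrix bookkeeping that the method builds on, the sketch below propagates a ray through a simple planar system with 2×2 paraxial ABCD matrices. A genuinely 3D system such as a nonplanar ring resonator requires higher-order ray matrices and the mismatching-error bookkeeping developed in the paper, neither of which is reproduced here; the element data are illustrative.

    ```python
    import numpy as np

    def free_space(d):
        """Propagation over a distance d (same length unit as the ray height)."""
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):
        """Thin lens of focal length f."""
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # Ray state [height (mm), slope (rad)]; the system matrix is the product of
    # the element matrices, with the rightmost element traversed first.
    ray = np.array([1.0, 0.0])
    system = free_space(100.0) @ thin_lens(50.0) @ free_space(100.0)
    print("output ray [height, slope]:", system @ ray)
    ```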

  8. The density matrix renormalization group method. Application to the EPP model of a cyclic polyene chain

    International Nuclear Information System (INIS)

    Fano, G.; Ortolani, F.; Ziosi, L.

    1997-10-01

    The density matrix renormalization group (DMRG) method introduced by White for the study of strongly interacting electron systems is reviewed; the method is variational and considers a system of localized electrons as the union of two adjacent fragments A and B. A density matrix ρ is introduced, whose eigenvectors corresponding to the largest eigenvalues are the most significant, the most probable states of A in the presence of B; these states are retained, while states corresponding to small eigenvalues of ρ are neglected. It is conjectured that the decreasing behaviour of the eigenvalues is gaussian. The DMRG method is tested on the Pariser-Parr-Pople Hamiltonian of a cyclic polyene (CH)N up to N = 34. A Hilbert space of dimension 5 × 10¹⁸ is explored. The ground state energy is within 10⁻³ eV of the full CI value in the case N = 18. The DMRG method compares favourably also with coupled cluster approximations. The unrestricted Hartree-Fock solution (which presents spin density waves) is briefly reviewed, and a comparison is made with the DMRG energy values. Finally, the spin-spin and density-density correlation functions are computed; the results suggest that the antiferromagnetic order of the exact solution does not extend up to large distances but exists locally. No charge density waves are present. (author)

  9. Least square methods and covariance matrix applied to the relative efficiency calibration of a Ge(Li) detector

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1989-01-01

    The methodology of covariance matrices and least-squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices which serve to properly represent uncertainties of experimental data are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author) [pt
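
    A compact sketch of the covariance-matrix least-squares machinery described above (generalized least squares) is given below; the log-efficiency data points and the error model (an uncorrelated statistical part plus a fully correlated normalization part) are synthetic and only stand in for a real calibration data set.

    ```python
    import numpy as np

    # Model: log(efficiency) ~ a + b*log(E), fitted to correlated data.
    E = np.array([121.8, 344.3, 778.9, 1112.1, 1408.0])   # gamma energies (keV)
    y = np.array([0.10, -0.45, -0.90, -1.10, -1.25])      # log relative efficiency
    X = np.column_stack([np.ones_like(E), np.log(E)])

    # Assumed covariance of y: 3% uncorrelated plus 1% fully correlated.
    stat = np.diag((0.03 * np.abs(y)) ** 2)
    norm = np.outer(0.01 * np.abs(y), 0.01 * np.abs(y))
    C = stat + norm

    Cinv = np.linalg.inv(C)
    cov_params = np.linalg.inv(X.T @ Cinv @ X)   # covariance of the fitted parameters
    params = cov_params @ X.T @ Cinv @ y
    print("fitted parameters:", params)
    print("parameter covariance matrix:\n", cov_params)
    ```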

  10. The mass angular scattering power method for determining the kinetic energies of clinical electron beams

    International Nuclear Information System (INIS)

    Blais, N.; Podgorsak, E.B.

    1992-01-01

    A method for determining the kinetic energy of clinical electron beams is described, based on the measurement in air of the spatial spread of a pencil electron beam which is produced from the broad clinical electron beam. As predicted by the Fermi-Eyges theory, the dose distribution measured in air on a plane, perpendicular to the incident direction of the initial pencil electron beam, is Gaussian. The square of its spatial spread is related to the mass angular scattering power which in turn is related to the kinetic energy of the electron beam. The measured spatial spread may thus be used to determine the mass angular scattering power, which is then used to determine the kinetic energy of the electron beam from the known relationship between mass angular scattering power and kinetic energy. Energies obtained with the mass angular scattering power method agree with those obtained with the electron range method. (author)

  11. Localized eigenvectors of the non-backtracking matrix

    International Nuclear Information System (INIS)

    Kawamoto, Tatsuro

    2016-01-01

    In the case of graph partitioning, the emergence of localized eigenvectors can cause the standard spectral method to fail. To overcome this problem, the spectral method using a non-backtracking matrix was proposed. Based on numerical experiments on several examples of real networks, it is clear that the non-backtracking matrix does not exhibit localization of eigenvectors. However, we show that localized eigenvectors of the non-backtracking matrix can exist outside the spectral band, which may lead to deterioration in the performance of graph partitioning. (paper: interdisciplinary statistical mechanics)
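
    For readers unfamiliar with the operator, the sketch below builds the non-backtracking (Hashimoto) matrix of a small undirected toy graph: the states are directed edges, and (i→j) can step to (j→k) only when k ≠ i. The graph is invented and the code does not reproduce the paper's spectral-partitioning experiments.

    ```python
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]                 # undirected edges
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]

    n = len(directed)
    B = np.zeros((n, n))
    for a, (i, j) in enumerate(directed):
        for b, (k, l) in enumerate(directed):
            if j == k and l != i:                            # continue the walk, no U-turn
                B[a, b] = 1.0

    eigvals = np.linalg.eigvals(B)
    print("leading non-backtracking eigenvalue:", eigvals[np.argmax(eigvals.real)])
    ```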

  12. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    Science.gov (United States)

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition on the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  13. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination

    Directory of Open Access Journals (Sweden)

    Yingdong Yang

    2016-06-01

    Full Text Available Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition on the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor in the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship between the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  14. Method and apparatus for fabricating a composite structure consisting of a filamentary material in a metal matrix

    Science.gov (United States)

    Banker, J.G.; Anderson, R.C.

    1975-10-21

    A method and apparatus are provided for preparing a composite structure consisting of filamentary material within a metal matrix. The method is practiced by the steps of confining the metal for forming the matrix in a first chamber, heating the confined metal to a temperature adequate to effect melting thereof, introducing a stream of inert gas into the chamber for pressurizing the atmosphere in the chamber to a pressure greater than atmospheric pressure, confining the filamentary material in a second chamber, heating the confined filamentary material to a temperature less than the melting temperature of the metal, evacuating the second chamber to provide an atmosphere therein at a pressure, placing the second chamber in registry with the first chamber to provide for the forced flow of the molten metal into the second chamber to effect infiltration of the filamentary material with the molten metal, and thereafter cooling the metal infiltrated-filamentary material to form said composite structure.

  15. Method and apparatus for fabricating a composite structure consisting of a filamentary material in a metal matrix

    International Nuclear Information System (INIS)

    Banker, J.G.; Anderson, R.C.

    1975-01-01

    A method and apparatus are provided for preparing a composite structure consisting of filamentary material within a metal matrix. The method is practiced by the steps of confining the metal for forming the matrix in a first chamber, heating the confined metal to a temperature adequate to effect melting thereof, introducing a stream of inert gas into the chamber for pressurizing the atmosphere in the chamber to a pressure greater than atmospheric pressure, confining the filamentary material in a second chamber, heating the confined filamentary material to a temperature less than the melting temperature of the metal, evacuating the second chamber to provide an atmosphere therein at a pressure, placing the second chamber in registry with the first chamber to provide for the forced flow of the molten metal into the second chamber to effect infiltration of the filamentary material with the molten metal, and thereafter cooling the metal infiltrated-filamentary material to form said composite structure

  16. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method

    Directory of Open Access Journals (Sweden)

    Sette Alessandro

    2005-05-01

    Full Text Available Abstract Background Many processes in molecular biology involve the recognition of short sequences of nucleic or amino acids, such as the binding of immunogenic peptides to major histocompatibility complex (MHC) molecules. From experimental data, a model of the sequence specificity of these processes can be constructed, such as a sequence motif, a scoring matrix or an artificial neural network. The purpose of these models is two-fold. First, they can provide a summary of experimental results, allowing for a deeper understanding of the mechanisms involved in sequence recognition. Second, such models can be used to predict the experimental outcome for yet untested sequences. In the past we reported the development of a method to generate such models called the Stabilized Matrix Method (SMM). This method has been successfully applied to predicting peptide binding to MHC molecules, peptide transport by the transporter associated with antigen presentation (TAP) and proteasomal cleavage of protein sequences. Results Herein we report the implementation of the SMM algorithm as a publicly available software package. Specific features determining the type of problems the method is most appropriate for are discussed. Advantageous features of the package are: (1) the output generated is easy to interpret, (2) input and output are both quantitative, (3) specific computational strategies to handle experimental noise are built in, (4) the algorithm is designed to effectively handle bounded experimental data, (5) experimental data from randomized peptide libraries and conventional peptides can easily be combined, and (6) it is possible to incorporate pair interactions between positions of a sequence. Conclusion Making the SMM method publicly available enables bioinformaticians and experimental biologists to easily access it, to compare its performance to other prediction methods, and to extend it to other applications.
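
    To make the "scoring matrix" type of model concrete, the sketch below applies a position-specific scoring matrix to a 9-mer peptide by summing one matrix entry per position. The matrix values and the peptide are invented stand-ins; this is only the prediction step, not the SMM training algorithm implemented in the package.

    ```python
    import numpy as np

    alphabet = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard amino acids
    length = 9                                 # 9-mer peptides
    rng = np.random.default_rng(0)
    scoring_matrix = rng.normal(size=(length, len(alphabet)))   # stand-in for a trained matrix

    def score(peptide):
        """Additive score: sum of one scoring-matrix entry per sequence position."""
        return sum(scoring_matrix[pos, alphabet.index(aa)]
                   for pos, aa in enumerate(peptide))

    print("score of SIINFEKLV:", round(score("SIINFEKLV"), 3))
    ```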

  17. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution for stiff differential equations using the matrix exponential method. • The approximation is based on the First-Order Hold assumption. • Various input examples are applied to the point kinetics equations. • The method shows superior usefulness and effectiveness. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
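
    The general matrix-exponential / first-order-hold discretization idea can be exercised on a small stand-in linear system as sketched below; this toy system is not the point-kinetics model of the paper, whose matrices vary with the reactivity input, and the SciPy call simply provides a standard FOH sampled-data representation.

    ```python
    import numpy as np
    from scipy.signal import cont2discrete

    # Toy stiff system dx/dt = A x + B u with widely separated time constants.
    A = np.array([[-1.0,    0.0],
                  [ 0.0, -100.0]])
    B = np.array([[1.0],
                  [1.0]])
    C = np.eye(2)
    D = np.zeros((2, 1))

    dt = 0.05                                        # sampling period
    Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="foh")

    # March the sampled-data model under a sinusoidal input.
    x = np.zeros(2)
    for k in range(5):
        u = np.array([np.sin(0.5 * k * dt)])
        x = Ad @ x + Bd @ u
        print(f"step {k}: x = {x}")
    ```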

  18. A framework for general sparse matrix-matrix multiplication on GPUs and heterogeneous processors

    DEFF Research Database (Denmark)

    Liu, Weifeng; Vinter, Brian

    2015-01-01

    General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle...... extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data...... memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary...
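
    The first source of irregularity mentioned above, that the number of nonzeros in each output row is unknown until it has been computed, is easy to see in a plain single-threaded Gustavson-style row-by-row SpGEMM. The sketch below is only an illustration of that reference algorithm in Python/SciPy, not the GPU implementation discussed in the record.

```python
import numpy as np
from scipy.sparse import random as sparse_random, csr_matrix

def spgemm_rowwise(A: csr_matrix, B: csr_matrix) -> csr_matrix:
    """Gustavson-style SpGEMM: each output row is gathered in a hash-map
    accumulator, because its number of nonzeros is unknown in advance."""
    indptr, indices, data = [0], [], []
    for i in range(A.shape[0]):
        acc = {}                                   # sparse accumulator for row i of C
        for jj in range(A.indptr[i], A.indptr[i + 1]):
            k, a_ik = A.indices[jj], A.data[jj]
            for kk in range(B.indptr[k], B.indptr[k + 1]):
                j = B.indices[kk]
                acc[j] = acc.get(j, 0.0) + a_ik * B.data[kk]
        cols = sorted(acc)
        indices.extend(cols)
        data.extend(acc[j] for j in cols)
        indptr.append(len(indices))
    return csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

A = sparse_random(200, 300, density=0.02, format='csr', random_state=1)
B = sparse_random(300, 150, density=0.02, format='csr', random_state=2)
C = spgemm_rowwise(A, B)
print(abs(C - A @ B).max() < 1e-10)   # cross-check against SciPy's own SpGEMM
```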

  19. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    International Nuclear Information System (INIS)

    Slopsema, R. L.; Flampouri, S.; Yeung, D.; Li, Z.; Lin, L.; McDonough, J. E.; Palta, J.

    2014-01-01

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning-calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of
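
    The parameterization step described in the Methods, for example modeling the virtual SAD as a linear function of the inverse residual energy, can be sketched as a simple least-squares fit. The measurement values below are made-up placeholders, not the golden beam data reported in the record.

```python
import numpy as np

# Hypothetical commissioning-style fit: model a beam parameter (e.g. virtual SAD)
# as a linear function of the inverse residual energy, y = a + b / E_res.
# The "measurements" below are synthetic placeholders, not actual beam data.
E_res = np.array([70.0, 100.0, 130.0, 160.0, 190.0, 220.0])   # residual energy, MeV
y_meas = 230.0 + 1800.0 / E_res + np.random.default_rng(3).normal(0, 0.5, E_res.size)  # cm

b, a = np.polyfit(1.0 / E_res, y_meas, 1)      # least-squares fit of y against 1/E_res
y_fit = a + b / E_res
max_dev = np.max(np.abs(y_fit - y_meas))        # would be compared to a room tolerance
print(f"a = {a:.1f} cm, b = {b:.0f} cm*MeV, max deviation = {max_dev:.2f} cm")
```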

  20. Development of a golden beam data set for the commissioning of a proton double-scattering system in a pencil-beam dose calculation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Slopsema, R. L., E-mail: rslopsema@floridaproton.org; Flampouri, S.; Yeung, D.; Li, Z. [University of Florida Proton Therapy Institute, 2015 North Jefferson Street, Jacksonville, Florida 32205 (United States); Lin, L.; McDonough, J. E. [Department of Radiation Oncology, University of Pennsylvania, 3400 Civic Boulevard, 2326W TRC, PCAM, Philadelphia, Pennsylvania 19104 (United States); Palta, J. [VCU Massey Cancer Center, Virginia Commonwealth University, 401 College Street, Richmond, Virginia 23298 (United States)

    2014-09-15

    Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as a function of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning-calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy. An additional linear correction term as a function of